diff --git a/2dshapleyaframeworkforfragmenteddatavaluation/full.md b/2dshapleyaframeworkforfragmenteddatavaluation/full.md new file mode 100644 index 0000000000000000000000000000000000000000..862ca748270105f1da350b67bf4094e4e4fdb063 --- /dev/null 
+++ b/2dshapleyaframeworkforfragmenteddatavaluation/full.md @@ -0,0 +1,851 @@ +Zhihong Liu $^{*1}$ Hoang Anh Just $^{*2}$ Xiangyu Chang $^{1}$ Xi Chen $^{3}$ Ruoxi Jia $^{2}$ + +# Abstract + +Data valuation—quantifying the contribution of individual data sources to certain predictive behaviors of a model—is of great importance to enhancing the transparency of machine learning and designing incentive systems for data sharing. Existing work has focused on evaluating data sources that share a feature space or a sample space. How to value fragmented data sources, each of which contains only partial features and samples, remains an open question. We start by presenting a method to calculate the counterfactual of removing a fragment from the aggregated data matrix. Based on the counterfactual calculation, we further propose 2D-Shapley, a theoretical framework for fragmented data valuation that uniquely satisfies some appealing axioms in the fragmented data context. 2D-Shapley empowers a range of new use cases, such as selecting useful data fragments, providing interpretation for sample-wise data values, and fine-grained data issue diagnosis. + +# 1. Introduction + +Data are essential ingredients for building machine learning (ML) applications. The ability to quantify and measure the value of data is crucial to the entire lifecycle of ML: from cleaning poor-quality sam + +*Equal contribution. Code repository publicly available: https://github.com/ruoxi-jia-group/2dshapley $^{1}$Center for Intelligent Decision-Making and Machine Learning, Department of Information Systems and Intelligent Business, School of Management, Xi'an Jiaotong University, Xi'an, 710049, China. $^{2}$Bradley Department of Electrical and Computer Engineering, Virginia Tech, Virginia, USA. $^{3}$Department of Technology, Operations, and Statistics, Stern School of Business, New York University, New York, 10012, USA. Correspondence to: Xiangyu Chang <xiangyuchang@xjtu.edu.cn>, Xi Chen, Ruoxi Jia. 
+ +Proceedings of the $40^{th}$ International Conference on Machine Learning, Honolulu, Hawaii, USA. PMLR 202, 2023. Copyright 2023 by the author(s). + +![](images/c693a4c5b1e17c0883d736bff2aee209edc3863f255dd087d8703bfd76017c93.jpg) +(a) Horizontal Valuation + +![](images/1ca114a6b20d4ca5547d1d4cd88245df2900d00edc2822f0e22a736fae4de2b8.jpg) +(b) Vertical Valuation +Figure 1: Illustration of different data valuation settings based on how the training set is partitioned among different data contributors. + +![](images/2cf66b551e22fc05fb6dbd7ee6b41309423519467f4f7128fc6b91ec3bfceea8.jpg) +(c) Fragmented Valuation + +ples and tracking important ones to be collected during data preparation, to setting proper priorities over samples during training, to interpreting why certain behaviors of a model emerge during deployment. Determining the value of data is also central to designing incentive systems for data sharing and implementing current policies about the monetization of personal data. + +The current literature on data valuation (Jia et al., 2019b; Ghorbani & Zou, 2019) has focused exclusively on valuing horizontally partitioned data—in other words, each data source to be valued shares the same feature space. How to value vertically partitioned data, where each data source provides a different feature but shares the same sample space, has been studied in the context of ML interpretability (Covert et al., 2020). However, none of these abstractions can fully capture the complexity of real-world scenarios, where data sources can have non-overlapping features and samples (termed fragmented data sources hereinafter). + +Example 1. Consider two banks, $B_{1}$ and $B_{2}$ , and two e-commerce companies, $E_{1}$ and $E_{2}$ , located in Regions 1 and 2, respectively. These four institutions are interested in collaboratively building an ML model to predict users' credit scores with their data. 
Due to the geographical difference, $B_{1}$ and $E_{1}$ have a different user group from $B_{2}$ and $E_{2}$ . Also, due to the difference in business, $B_{1}$ and $B_{2}$ provide different features than what $E_{1}$ and $E_{2}$ can offer. Overall, the four institutions partition the aggregated data horizontally and vertically, as illustrated by Figure 1(c). How should we quantify each institution's contribution to the joint model training? + +Example 2. Due to inevitable errors occurring during the data generation and collection processes, real-world data are seldom high quality. Suppose that a data analyst is interested in identifying some potentially erroneous entries in a dataset. Existing horizontal and vertical data valuation tools can help locate the rows or columns that could contain errors by returning the ones with the lowest values. Nevertheless, can we perform more fine-grained detection—e.g., how can we pinpoint the coordinates of erroneous entries? + +Example 3. Horizontal data valuation is now widely used to explain the importance of each sample to a learning outcome (Tang et al., 2021; Karlas et al., 2022). But how can a data analyst further explain these sample importance scores—why does a sample receive a certain importance score? Is a sample "low-quality" because it contains several "moderately low-quality" features or one "exceptionally low-quality" feature? + +Answering the above questions calls for a quantitative understanding of how each block in the data matrix (e.g., a sub-matrix as in Ex. 1 or a single entry as in Ex. 2 and 3) contributes to the outcome of learning. + +Technical Challenges. The problem of block valuation requires rethinking fundamental aspects of data valuation. 
Existing data valuation theory consists of two basic modules at a conceptual level: (1) Counterfactual Analysis, where one calculates how the utility of a subset of data sources would change after the source to be valued is removed; and (2) Fair Attribution, where a data source is valued based on a weighted average of its marginal utilities for different subsets, with the weights chosen so that the values satisfy certain fairness properties. The fairness notion considered by past valuation schemes requires that permuting the order of different data sources does not change their value. + +For horizontal and vertical valuation, the counterfactual can be calculated simply by taking the difference between the performance of a model trained on a subset of columns or rows and the performance with one column or row removed. However, it is unclear how to calculate the counterfactual when a block is excluded, because the remaining data matrix could be incomplete. Moreover, the fairness notion underlying existing data values is no longer appropriate in the context of block valuation. As a concrete example to illustrate this point, consider Figure 1(c) and suppose the two blocks on the left provide temperature measurements as features and the ones on the right are humidity measurements. In this case, one should not expect the value to be unchanged when two blocks with different physical meanings (e.g., yellow and pink) are swapped. + +Contributions. This paper presents the first focused study on data valuation without assuming a shared feature space or sample space. Toward that end, we make the following contributions. + +- We present an approach that enables evaluation of the marginal contribution of a block within the data matrix to any other block with non-overlapping sample and feature spaces. 
+- We abstract the block valuation problem into a two-dimensional (2D) cooperative game, where the utility function is invariant to column permutations and row permutations but not to arbitrary entry permutations. +- We propose axioms that a proper valuation scheme should satisfy in the 2D game and show that the axioms lead to a unique representation of the value assignment (referred to as 2D-Shapley). In particular, this representation is a natural generalization of the Shapley value (Shapley, 1997)—a celebrated value attribution scheme widely used in data valuation among other applications. +- We demonstrate that 2D-Shapley enables new applications, including selecting useful data fragments, providing interpretation for sample-wise data values, and fine-grained data issue diagnosis. + +# 2. Background and Related Work + +In a typical setting, a set of data sources is used to learn an ML model, which achieves a certain performance score. The goal of data valuation is to quantify the contribution of each data source toward achieving the performance score. The definition of a data source depends on the context in which the data valuation results are utilized. For instance, when using data valuation to interpret how the global behavior of the ML model depends on individual samples or individual features, a sample or a feature in the training data is regarded as a data source; when using data valuation to inform the reward design for data sharing, the collection of all samples or all features contributed by the same entity is regarded as a data source. + +Formally, let $N = \{1,\dots ,n\}$ denote the index set of $n$ training data sources. A data valuation scheme assigns a score to each training data source in a way that reflects their contribution. These scores are referred to as data values. 
To analyze a source's "contribution", we define a utility function $U:2^{N}\to \mathbb{R}$ , which maps any subset of the data sources to a score indicating the usefulness of the subset. $2^{N}$ represents the power set of $N$ , i.e., the set of all subsets of $N$ , including the empty set and $N$ itself. For the classification task, a common choice for $U$ is the performance of a model trained on the input subset, i.e., $U(S) = \mathrm{acc}(\mathcal{A}(S))$ , where $\mathcal{A}$ is a learning algorithm that takes a set $S\subseteq N$ of sources as + +input and returns a model, and acc is a metric function to evaluate the performance of a given model, e.g., the accuracy of a model on a hold-out validation set. + +Past research has proposed various ways to characterize data values given the utility function, among which the Shapley value is arguably the most widely used scheme for data valuation. The Shapley value is defined as + +$$ +\psi_ {i} ^ {1 d} (U) := \frac {1}{n} \sum_ {k = 1} ^ {n} \binom {n - 1} {k - 1} ^ {- 1} \sum_ {\substack {S \subseteq N \setminus i \\ | S | = k - 1}} [ U (S \cup i) - U (S) ]. \tag{1} +$$ + +To differentiate from the proposed work, we will refer to the Shapley value defined in Eq. (1) as 1D-Shapley. 1D-Shapley is popular due to its unique satisfaction of the following four axioms (Shapley, 1953): + +- Dummy: if $U(S \cup i) = U(S) + c$ for any $S \subseteq N \setminus i$ and some $c \in \mathbb{R}$ , then $\psi_i^{1d}(U) = c$ . +- Symmetry: let $\pi : N \to N$ be any permutation of $N$ and $\pi U(S) \coloneqq U(\pi(S))$ , then $\psi_{\pi(i)}^{1d}(\pi U) = \psi_i^{1d}(U)$ . +- Linearity: For utility functions $U_{1}, U_{2}$ and any $\alpha_{1}, \alpha_{2} \in \mathbb{R}$ , $\psi_{i}^{1d}(\alpha_{1}U_{1} + \alpha_{2}U_{2}) = \alpha_{1}\psi_{i}^{1d}(U_{1}) + \alpha_{2}\psi_{i}^{1d}(U_{2})$ . +- Efficiency: for every $U, \sum_{i \in N} \psi_i^{1d}(U) = U(N)$ . + +The symmetry axiom embodies fairness. 
In particular, $\pi U$ arises upon the reindexing of data sources $1,\ldots ,n$ with the indices $\pi (1),\dots ,\pi (n)$ ; the symmetry axiom states that the evaluation of a particular position should not depend on the indices of the data sources. + +Although the Shapley value was justified through these axioms in prior literature, the necessity of each axiom depends on the actual use case of data valuation results. Recent literature has studied new data value notions obtained by relaxing some of the aforementioned axioms, enabling improvements in terms of the accuracy of bad data identification (Kwon & Zou, 2022), robustness to learning stochasticity (Wang & Jia, 2023; Wu et al., 2022a), and computational efficiency (Yan & Procaccia, 2021). For instance, relaxing the efficiency axiom gives rise to semi-values (Kwon & Zou, 2022; Wang & Jia, 2023); relaxing the linearity axiom gives rise to least cores (Yan & Procaccia, 2021). This paper will focus on generalizing 1D-Shapley to block valuation. As we will expound on later, 1D-Shapley faces two limitations as a reasonable notion of blockwise values. Note that 1D-Shapley and the aforementioned relaxed notions share a similar structure: all of them are based on the marginal utility of a data source. Hence, our effort to generalize 1D-Shapley to new settings can be adapted to these more relaxed notions. + +Another line of related work focuses on developing efficient algorithms for data valuation via Monte Carlo methods (Jia et al., 2019b; Lin et al., 2022), via surrogate utility functions such as $K$ -nearest-neighbors (Jia et al., 2019a), neural tangent kernels (Wu et al., 2022b), and distributional distance measures (Just et al., 2023; Tay et al., 2022), and via reinforcement learning (Yoon et al., 2020). These ideas can also benefit the efficient computation of the proposed 2D-Shapley. 
As a concrete example, this paper builds upon Monte Carlo simulation and surrogate model approaches to improve the efficiency of 2D-Shapley. + +Beyond data valuation, 1D-Shapley has been extensively used to gain feature-based interpretability for black-box models locally and globally. The local interpretability methods (Lundberg & Lee, 2017; Strumbelj & Kononenko, 2010) focus on analyzing the relative importance of features for each input separately; therefore, the importance scores of features across different samples are not comparable. By contrast, our work allows the comparison of feature importance across different samples. The global interpretability methods (Covert et al., 2020), on the other hand, explain the model's behavior across the entire dataset. In the context of this paper, we regard them as vertical data valuation. Compared to global interpretability methods, our work provides a more fine-grained valuation by associating each entry of a feature with an importance score. Our work improves the interpretability of the global feature importance score in the sense that it reveals each individual sample's contribution to the importance of a feature. + +# 3. How to Value a Block? + +This section starts by formulating the block valuation problem. Then, we discuss the challenges of using 1D-Shapley to tackle the block valuation problem in terms of both counterfactual analysis and fair attribution. Finally, we present our proposed framework for solving the block valuation problem. + +# 3.1. Problem Formulation + +Let $N = \{1,2,\dots ,n\}$ and $M = \{1,2,\ldots ,m\}$ index $n$ disjoint collections of samples and $m$ disjoint collections of features contributed by $nm$ sources (or blocks). Each data source can be labeled by $(i,j)$ for $i\in N$ and $j\in M$ , where we call $i$ the sample-wise index and $j$ the feature-wise index. 
To measure the contribution of a data source, we need to define a utility function, which measures the usefulness of a subset of data sources. The utility function $h(S,F)$ takes in two separate sets $S\subseteq N$ and $F\subseteq M$ as the variables and returns a real-valued score indicating the utility of $\{(i,j)\}_{i\in S,j\in F}$ . Note that this paper focuses on valuing the relative importance of feature blocks; that is, we assume that each data contributor provides a block of features, and the aggregation of features is then annotated by a separate entity (e.g., a data labeling company) that does not share the profit generated from joint training. More formally, we define the utility function as follows: + +$h(S,F)\coloneqq$ Performance of the model trained on the feature blocks $\{(i,j)\}_{i\in S,j\in F}$ after annotation. + +One can potentially generalize our framework to jointly value feature and label blocks by redefining the utility function to be non-zero only when both features and labels are included in the input block, as in (Jia et al., 2019a; Yona et al., 2021), but an in-depth investigation is deferred to future work. + +The benefit of this utility function definition is twofold. First, its two-dimensional index always corresponds to a data fragment with the same feature space for all samples inside. As a result, one can calculate the utility in a straightforward manner by training on the matrix and evaluating the corresponding performance. This is an essential advantage over the one-dimensional index utilized by 1D-Shapley, as will be exemplified later. Second, constructed this way, the utility function is invariant to permutations of sample-wise indices in $S$ for any given $F$ and permutations of feature-wise indices in $F$ for any given $S$ , but not to permutations of the sample-wise and feature-wise indices combined. 
This is a desirable property, as for many data types in ML, such as tabular data, one would expect that swapping samples or swapping features $^1$ does not change the model performance, yet swapping any two entries in the matrix may introduce arbitrary errors and thus alter the model performance significantly. + +Our goal is to assign a score to each block in $\{(i,j)\}_{i\in N,j\in M}$ that measures its contribution to the outcome of joint learning $h(N,M)$ . + +# 3.2. A Naive Baseline: 1D-Shapley + +One idea for tackling the block valuation problem is to flatten the indices of blocks into one dimension and leverage 1D-Shapley to value each block. Specifically, we can reindex $\{(i,j)\}_{i\in N,j\in M}$ by $T = \{1,\dots ,nm\}$ . Note that this step discards the structural information contained in the two-dimensional indices. Then, one can utilize Eq. (1) to value each $i\in T$ . + +The second step of applying Eq. (1) requires calculating $U(S \cup i) - U(S)$ for any $S \subseteq T \setminus i$ . Both $S$ and $S \cup i$ could correspond to a data fragment whose samples differ in their feature space (see the example in Figure 2); nevertheless, how to evaluate the utility of such a fragment is unclear. An ad hoc way of addressing this problem is to perform missing value imputation, e.g., filling in the missing values of a feature using the average of the feature values present. + +In addition to the difficulty of evaluating the counterfactual, the symmetry axiom satisfied by 1D-Shapley no longer has the correct fairness interpretation when the input indices are flattened from 2D ones. In that case, the flattened indices $1,\ldots ,nm$ carry specific meanings entailed by the original 2D structure; e.g., some indices might correspond to temperature features, and others might correspond to humidity. 
Hence, the symmetry axiom, which requires unchanged data values after permuting the data sources' indices, is neither sensible nor necessary, as the permutation might map the content of a data source from one meaning to an entirely different one. + +We will use 1D-Shapley with missing value imputation as a baseline for our proposed approach. This simple baseline is still a useful benchmark for assessing the extra (non-trivial) gains that our approach can attain in different application scenarios. + +![](images/12a1dc255539215a9215558eb17177d30b3e857314ccfddf71ad990d93db3552.jpg) +Figure 2: A visualization of the 1D-Shapley marginal contribution applied to sample-feature valuation. + +# 3.3. Our Approach: 2D-Shapley + +Here, we describe 2D-Shapley as a principled framework for valuing data blocks. We emphasize how 2D-Shapley overcomes the challenges of the 1D-Shapley baseline in terms of (1) calculating the counterfactual and (2) framing the correct fairness principles, and then derive the representation of the data values based on the new counterfactual analysis and principles. Finally, we present efficient algorithms to compute 2D-Shapley. + +# 3.3.1. Two-Dimensional Counterfactual Analysis + +Given a two-dimensional utility function $h(\cdot, \cdot)$ , we define the marginal contribution of a block $(i,j)$ to the collection of blocks $\{(i,j)\}_{i \in S, j \in F}$ as + +$$
M_{h}^{i,j}(S,F) := h(S \cup i, F \cup j) + h(S,F) - h(S \cup i, F) - h(S, F \cup j). \tag{2}
$$ + +The rationale behind the definition of $M_h^{i,j}(S,F)$ is illustrated in Figure 3: starting from the area $(S\cup i,F\cup j)$ , we subtract the two areas $(S\cup i,F)$ and $(S,F\cup j)$ and add back the area $(S,F)$ , which was subtracted twice. The remaining area, marked "marginal" in Figure 3, corresponds to the marginal influence of the block $(i,j)$ . 
+ +The unique advantage is that each individual utility is well-defined, as it takes as input a collection of blocks within which all samples share the same feature space. + +![](images/2a211156d7b8689720eb6a33973efe29ee25053970365619ce13b74b86527e9c.jpg) +Figure 3: Removal process and marginal influence of $(i,j)$ . + +# 3.3.2. Axioms for Block Valuation + +We start by redefining "dummy" for block valuation, where the underlying utility function is 2D. + +Definition 3.1. (2D-Dummy) We call a block $(i,j)$ a 2D-dummy under utility function $h$ if for all $S\subseteq N\backslash i$ and $F\subseteq M\backslash j$ + +$$
M_{h}^{i,j}(S,F) = c, \quad c \in \mathbb{R}. \tag{3}
$$ + +2D-dummy is implied by the canonical (one-dimensional) dummy mentioned in Section 2. Specifically, if sample $i$ is a sample dummy satisfying $h(S \cup i, F) = h(S, F) + c_1$ and $h(S \cup i, F \cup j) = h(S, F \cup j) + c_2$ for $S \subseteq N \setminus i, F \subseteq M \setminus j$ , like the dummy defined for 1D-Shapley, then Eq. (3) is satisfied with $c := c_2 - c_1$ ; similarly, if feature $j$ is a feature dummy satisfying $h(S, F \cup j) = h(S, F) + c_1'$ and $h(S \cup i, F \cup j) = h(S \cup i, F) + c_2'$ for $S \subseteq N \setminus i, F \subseteq M \setminus j$ , then Eq. (3) is satisfied with $c := c_2' - c_1'$ . However, Eq. (3) does not imply that sample $i$ is a sample dummy or that feature $j$ is a feature dummy. + +We first define the set $G$ of all possible utility functions, and define a value function $\psi : G \to \mathbb{R}^{n \times m}$ , denoting the value of block $(i,j)$ by $\psi_{ij}(h)$ , the $(i,j)$ -th element of the matrix $\psi(h)$ . In order to build an equitable evaluation system, we provide the following axioms. + +Axiom 1. 
(2D-Linearity) For any two utility functions $h_1, h_2 \in G$ and any $\beta_1, \beta_2 \in \mathbb{R}$ , + +$$
\psi_{ij}\left(\beta_{1} h_{1} + \beta_{2} h_{2}\right) = \beta_{1} \psi_{ij}\left(h_{1}\right) + \beta_{2} \psi_{ij}\left(h_{2}\right). \tag{4}
$$ + +Axiom 2. (2D-Dummy) If the block $(i,j)$ is a 2D-dummy of $h$ satisfying Eq. (3), then $\psi_{ij}(h) = c$ . + +Axiom 3. (2D-Symmetry) Let $\pi_1: N \to N$ and $\pi_2: M \to M$ be two permutations; then + +$$
\psi_{\pi_{1}(i) \pi_{2}(j)}[(\pi_{1} \pi_{2}) h] = \psi_{ij}(h), \tag{5}
$$ + +where for all $S\subseteq N,F\subseteq M$ + +$$
[(\pi_{1} \pi_{2}) h](S, F) := [(\pi_{2} \pi_{1}) h](S, F) := h(\pi_{1}(S), \pi_{2}(F)). \tag{6}
$$ + +Axiom 4. (2D-Efficiency) For every utility function $h \in G$ , + +$$
\sum_{\substack{i\in N\\ j\in M}}\psi_{ij}(h) = h(N,M). \tag{7}
$$ + +Let us discuss the rationality of the four axioms. + +The 2D-linearity axiom is inherited from 1D-Shapley; it implies that the value of the $(i,j)$ -th block under the sum of two ML performance measures is the sum of its values under each performance measure. + +The 2D-dummy axiom can be interpreted by taking $c = 0$ : if a block makes no contribution to the ML task in any situation (i.e., for any $S \subseteq N \setminus i$ and $F \subseteq M \setminus j$ ), then its value is zero. + +In the 2D-symmetry axiom, the rows and columns are permuted independently. As a result, the entries from the same feature always remain in the same column. The axiom states that such permutations do not change the values of individual data blocks, which is what we would expect in many ML applications. In Appendix A, we formally justify Axiom 3 along the lines of this explanation. + +The 2D-efficiency axiom is inherited from 1D-Shapley, requiring that the sum of the values of all data blocks equals the performance of the whole dataset. 
+ +Based on the axioms, we provide a definition: + +Definition 3.2. The value $\psi_{ij}(h)$ with respect to the utility function $h$ is a two-dimensional Shapley value (2D-Shapley for short), denoted by $\psi_{ij}^{2d}$ , if $\psi_{ij}$ satisfies the 2D-linearity, 2D-dummy, 2D-symmetry, and 2D-efficiency axioms. + +2D-Shapley can be seen as the two-dimensional extension of the Shapley value, inheriting its advantages with a natural adaptation of the dummy and symmetry axioms to the two-dimensional utility function scenario. + +# 3.3.3. Representation Theory + +We will show that there exists an analytic and unique solution for 2D-Shapley. + +Theorem 3.3. (Representation Theory of 2D-Shapley) $\psi_{ij}^{2d}$ has a unique solution: + +$$
\psi_{ij}^{2d} = \frac{1}{nm} \sum_{s=1}^{n} \sum_{f=1}^{m} \Delta_{sf}, \tag{8}
$$ + +where $i\in N$ , $j\in M$ , + +$$
\Delta_{sf} = \frac{1}{\binom{n-1}{s-1}\binom{m-1}{f-1}} \sum_{(S,F) \in D_{sf}^{ij}} M_{h}^{i,j}(S,F), \tag{9}
$$ + +$D_{sf}^{ij} = \{(S,F):S\subseteq N\backslash i,F\subseteq M\backslash j,|S| = s - 1,|F| = f - 1\}$ , and $M_h^{i,j}(S,F)$ is defined in Eq. (2). + +Theorem 3.3 indicates that $\psi_{ij}^{2d}$ is a weighted average of the two-dimensional counterfactual in Eq. (2). Theorem 3.3 is referred to as the representation theory of 2D-Shapley because the proof procedure shows that $\psi_{ij}^{2d}$ has a basis expansion formulation (see Eq. (15) in Appendix B). To show the basis expansion, a series of basic utility functions in $G$ needs to be defined (e.g., Eq. (13)). Compared with the representation theory of 1D-Shapley by Roth (1988), one technical challenge is to define the basis and basic utility functions for the 2D case to handle the 2D counterfactual. Furthermore, the proof of the uniqueness of 2D-Shapley has to solve a complex high-dimensional linear system (see Eq. (19) in Appendix B). 
Our proof incorporates new techniques, unseen in the classic proof of 1D-Shapley, to deal with these unique technical challenges arising in the 2D context. + +Moreover, the representation theory also implies that 2D-Shapley can be reduced to 1D-Shapley. The following corollary shows that summing the block values over all rows gives the 1D-Shapley of features, and summing the block values over all columns gives the 1D-Shapley of samples. Corollary 3.4 not only indicates that 2D-Shapley is a natural generalization of 1D-Shapley, but is also useful for discussing the experimental results on how 2D values can explain 1D values (see Subsection 4.1). + +Corollary 3.4. For any $h\in G$ , let $\psi_{i}^{1d}(h)\coloneqq \sum_{j\in M}\psi_{ij}^{2d}(h)$ and $\psi_{\cdot j}^{1d}(h)\coloneqq \sum_{i\in N}\psi_{ij}^{2d}(h)$ ; then + +$$
\psi_{i}^{1d}(h) = \frac{1}{n} \sum_{S \subseteq N \setminus i} \frac{1}{\binom{n-1}{|S|}} [h(S \cup i, M) - h(S, M)], \tag{10}
$$ + +and + +$$
\psi_{\cdot j}^{1d}(h) = \frac{1}{m} \sum_{F \subseteq M \setminus j} \frac{1}{\binom{m-1}{|F|}} [h(N, F \cup j) - h(N, F)], \tag{11}
$$ + +which are in the form of 1D-Shapley. + +Finally, having the analytical expression Eq. (8) of 2D-Shapley at hand provides great convenience in designing efficient algorithms. + +# 3.3.4. Efficient Algorithm + +The computational complexity of exactly calculating 2D-Shapley is exponential in $mn$ due to the summation over all possible rows and columns. To overcome this challenge, we develop a Monte Carlo approach to approximating 2D-Shapley. The key idea is that 2D-Shapley can be rewritten as an expectation, over random permutations of rows and columns, of the marginal contribution of $(i,j)$ to the blocks indexed by the row indices preceding $i$ and the column indices preceding $j$ . As a result, we can approximate 2D-Shapley by averaging over randomly sampled row and column permutations. 
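This permutation-sampling view can be sketched in a few lines. The sketch below is a toy illustration under assumed names (`two_d_shapley_mc`, and a synthetic utility `h`), not the paper's released implementation; a real utility would train and evaluate a model.

```python
import random

# Minimal Monte Carlo sketch of the permutation view of 2D-Shapley:
# average the Eq. (2) marginal of each block (i, j) with respect to the
# rows preceding i and the columns preceding j in random permutations.
def two_d_shapley_mc(h, n, m, num_perms=100, seed=0):
    rng = random.Random(seed)
    values = [[0.0] * m for _ in range(n)]
    for _ in range(num_perms):
        rows, cols = list(range(n)), list(range(m))
        rng.shuffle(rows); rng.shuffle(cols)
        for ri, i in enumerate(rows):
            S = set(rows[:ri])                       # rows preceding i
            for cj, j in enumerate(cols):
                F = set(cols[:cj])                   # columns preceding j
                delta = (h(S | {i}, F | {j}) + h(S, F)
                         - h(S | {i}, F) - h(S, F | {j}))  # Eq. (2)
                values[i][j] += delta / num_perms
    return values

# Toy utility with an interaction between sample 0 and feature 0:
h = lambda S, F: len(S) * len(F) + (2.0 if (0 in S and 0 in F) else 0.0)
vals = two_d_shapley_mc(h, n=3, m=2)
# 2D-efficiency: the estimates sum to h(N, M) (exactly here, by 2D
# telescoping, since h vanishes whenever either argument is empty).
print(abs(sum(map(sum, vals)) - h({0, 1, 2}, {0, 1})) < 1e-9)  # True
```

For this toy utility the interaction is credited entirely to block $(0,0)$, whose estimate is strictly larger than that of any other block.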
We also design the algorithm to reuse utility function evaluations across different permutations, which gives rise to significant efficiency gains. The full details of the algorithm design are provided in Appendix E, and the pseudo-code is shown in Algorithm 1. + +Evaluating the utility function requires retraining a model. For small-scale datasets, it might be possible to evaluate the utility function multiple times within a reasonable time, but for large-scale datasets, even evaluating it once might take days to finish. This would render our method impractical for many applications. Nonetheless, we can obviate model training entirely when using $K$ -nearest-neighbors (KNN) as a surrogate model. KNN-surrogate-based data valuation has shown great computational advantage while providing effective data quality identification (Jia et al., 2019a). In this work, we leverage a similar idea to reduce the computational complexity of 2D-Shapley for large models. First, observe from Eq. (8) and Corollary D.1 that, after rearranging inner terms, we have: + +$$
\psi_{ij}^{2d} = \frac{1}{n!m!} \sum_{\substack{\pi_{1} \in \Pi(N) \\ \pi_{2} \in \Pi(M)}} \left[h\left(P_{i}^{\pi_{1}} \cup i, P_{j}^{\pi_{2}} \cup j\right) - h\left(P_{i}^{\pi_{1}}, P_{j}^{\pi_{2}} \cup j\right)\right] - \left[h\left(P_{i}^{\pi_{1}} \cup i, P_{j}^{\pi_{2}}\right) - h\left(P_{i}^{\pi_{1}}, P_{j}^{\pi_{2}}\right)\right], \tag{12}
$$ + +where $\Pi(X)$ is the set of all permutations of $X$ , $\pi \in \Pi(X)$ is a permutation of $X$ , and $P_{i}^{\pi}$ is the set of elements preceding $i$ in $\pi$ . The expression in the first bracket is the 1D marginal contribution of sample $i$ and is valid since both utilities are computed over the same features, $P_{j}^{\pi_{2}} \cup j$ . 
Similarly, the second bracket also represents a valid 1D marginal contribution of sample $i$ , but with features $P_{j}^{\pi_{2}}$ . From this observation, we can apply the results of the 1D-Shapley value approximated with nearest neighbors, $\phi^{\mathrm{KNN}}$ , defined recursively in Theorem 1 of (Jia et al., 2019a), and 2D-Shapley under KNN surrogates can therefore be expressed as

$$
\psi_{ij}^{2d\text{-}\mathrm{KNN}} = \frac{1}{m!} \sum_{\pi_{2}\in \Pi (M)} [ \phi^{\mathrm{KNN}} (i, P_{j}^{\pi_{2}} \cup j) - \phi^{\mathrm{KNN}} (i, P_{j}^{\pi_{2}}) ].
$$

This new formulation is efficient as it requires no model training and removes the sum over all possible permutations of samples. We can further approximate the sum over all possible feature permutations by an average over sampled permutations. Our final complexity becomes $\mathcal{O}(PT|M||N|^2\log |N|)$ , where $P$ is the number of sampled feature permutations, $T$ is the number of test points used for evaluating model performance, and $|N|, |M|$ are the cardinalities of $N$ and $M$ , respectively. The pseudo-code for the overall KNN-based approximation is provided in Algorithm 2.

# 4. Experiments

This section covers the two general application scenarios of 2D-Shapley. (1) Cell valuation, where each cell in the training data matrix is considered a data source and receives a score indicating its contribution to a learning task performed on the matrix. We mainly demonstrate this application scenario's benefits in fine-grained data debugging and in interpreting canonical sample-wise or feature-wise data values. (2) Sub-matrix valuation, where a sub-matrix containing multiple cells is considered a data source and receives a joint score. This scenario is closely related to data marketplaces, where each entity provides a dataset that appears as a sub-matrix in the aggregated data.
Details about datasets, models, implementations, and ablation studies on the budget of inserted outliers are provided in Appendix F.

# 4.1. Cell Valuation

Sanity check of cell-wise values. We first check whether the cell-wise values produced by our method make sense via the data removal experiments commonly used in the data valuation literature. Specifically, we would expect that removing the cells with the highest values from the training set leads to the most significant performance degradation; conversely, removing the cells with the lowest values should barely affect the model performance. To evaluate the model performance after removal, we "remove" a cell by refilling its content with the average of all other cells in the same feature column. In the previous section, we presented two algorithms to calculate 2D-Shapley. We label the values obtained from the Monte Carlo-based method as 2D-Shapley-MC and the ones from the KNN-surrogate-based method as 2D-Shapley-KNN.

1D-Shapley and random removal are used as our baselines. In particular, 1D-Shapley is estimated by the permutation sampling described in (Jia et al., 2019b). For each baseline, we remove a number of cells at a time based on their sample-feature value ranking in either descending or ascending order; then, we train a model on the reduced dataset and evaluate the model performance.

As shown in Figure 4, when removing cells in ascending value order, 2D-Shapley not only maintains the model performance but also improves it by at least $2\%$ on the Census, Credit, and Breast Cancer datasets, whereas 1D-Shapley degrades the model performance earlier than 2D-Shapley on all three datasets. When removing cells starting from the highest-valued ones, we observe that 2D-Shapley can effectively detect contributing cells, and removing these cells causes the model performance to drop quickly. By contrast, removing cells according to 1D-Shapley is close to random removal.
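As an illustration, the refill-based cell "removal" described above can be sketched as follows. This is a minimal NumPy sketch, not our released implementation; `remove_cells` is a hypothetical helper, and `cell_values` is assumed to be an array of per-cell values with the same shape as the data matrix.

```python
import numpy as np

def remove_cells(X, cell_values, k, highest_first=True):
    """'Remove' the k highest- (or lowest-) valued cells by refilling each
    with the average of all *other* cells in its feature column."""
    X0 = np.asarray(X, dtype=float)
    out = X0.copy()
    n = X0.shape[0]
    col_sums = X0.sum(axis=0)
    order = np.argsort(np.asarray(cell_values), axis=None)  # ascending flat order
    if highest_first:
        order = order[::-1]
    for flat in order[:k]:
        i, j = np.unravel_index(flat, X0.shape)
        # column mean computed over the original values, excluding cell (i, j)
        out[i, j] = (col_sums[j] - X0[i, j]) / (n - 1)
    return out
```

A model is then retrained on the returned matrix to measure the performance after removal.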
These results indicate that 2D-Shapley is more effective than 1D-Shapley at recognizing the contribution of cells and can better inform strategic data harnessing in ML.

Fine-Grained Outlier Localization. Existing horizontal data valuation methods have demonstrated promising results in detecting abnormal samples (Ghorbani & Zou, 2019; Kwon & Zou, 2022; Wang & Jia, 2023) by finding the lowest-valued samples. However, it is rarely the case that every cell in a sample is abnormal. For instance, one type of error in the Census data is "198x→189x", where the years of birth are wrongly specified; this error could appear in a single feature column and, at the same time, affect only the subset of samples (or users) born in 198x. Existing horizontal valuation remains limited in localizing these erroneous entries.

To demonstrate the potential of 2D-Shapley in fine-grained entry-wise outlier detection, we first inject outlier cells into the clean Breast Cancer dataset. Following a recent outlier generation technique (Du et al., 2022), we inject low-probability-density values into the dataset as outlier cells. We explain the outlier injection method in detail in Appendix F.3. We randomly place outlier cells in $2\%$ of the cells (50 cells in total). Afterward, we compute 2D-Shapley-KNN for each cell in the dataset with inserted outliers; the resulting values are shown in Figure 10. Since we expect outliers not to be helpful for the model performance, the values for outlier cells should be low. Therefore, we sort the 2D-Shapley cell values in ascending order and prioritize human inspection towards the ones with the lowest values. We show the detection rate of the inserted outliers in Figure 5A). As we can see, with 2D-Shapley values, we can detect $90\%$ of inserted outliers within the first $5\%$

![](images/1567d9121fd66969f4dbde7557231e9198e133750bbdcf880ca28b8ae9044731.jpg)
Figure 4: Performance comparison between 2D-Shapley and baselines on various use cases.
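The inspection procedure above (sort all cell values in ascending order and examine the lowest-valued cells first) yields detection-rate curves like those in Figure 5. A minimal sketch of how such a curve can be computed; `outlier_mask`, marking the injected cells, is an assumed input:

```python
import numpy as np

def detection_rate_curve(cell_values, outlier_mask):
    """r[k] = fraction of injected outliers found after inspecting the
    (k+1) lowest-valued cells."""
    order = np.argsort(np.asarray(cell_values), axis=None)   # lowest value first
    hits = np.asarray(outlier_mask, dtype=float).ravel()[order]
    return np.cumsum(hits) / hits.sum()
```
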
![](images/fcc1fd6a61b06a8bf682f7a8ed6d09ca444dfc4ef7d7098bd42c91f003ee510c.jpg)
Figure 5: A) Detection of the inserted outliers in the Breast Cancer dataset. B) Detection of the inserted outliers in the Age category of the Census dataset.

![](images/1692fecfd07bc2f1bd43fbccb46d11babd5bb4024df51d6248e23a4fe42aff5a.jpg)
Figure 6: 2D-Shapley vs. model performance on various dataset splits.

![](images/c9eaf718075e22690bd3704e26fd199378d7b8f10a12c57f7b5cb969815ce1fd.jpg)
Figure 7: Cell values of samples with similar 1D values in the Breast Cancer dataset.

of all cells. By contrast, based on the values produced by 1D-Shapley, one would need to inspect over $90\%$ of the cells to screen out all the outlier cells.

We further examine a practical case of outliers caused by human errors, where cells have been incorrectly typed, e.g., "18" became "81". In the Census dataset, for the feature "Age", we randomly swap 15 cells between "17" and "71", "18" and "81", and "19" and "91". Similarly, we sort the values of all cells in the dataset in ascending order. As we observe in Figure 5B), detection with 2D-Shapley outperforms 1D-Shapley. In particular, with 2D-Shapley we can detect $80\%$ of the added outliers with fewer than 1800 inspected cells, while 1D-Shapley requires 4 times as many cells to achieve a comparable rate. The 1D-Shapley and 2D-Shapley heatmaps are provided in the Appendix. These results demonstrate the effectiveness of 2D-Shapley in locating outlier cells in a dataset.

Enabling Interpretation of 1D Valuation Results. Apart from outlier detection, 2D-Shapley also brings new insights into horizontal sample valuation and vertical feature valuation, which we refer to as 1D valuation. For instance, 1D sample valuation produces an importance score for each sample, but we lack a deeper understanding of why a sample receives a certain value.
Recall from Corollary 3.4 that summing 2D-Shapley values over rows or over columns gives 1D feature values and 1D sample values, respectively. Hence, 2D-Shapley allows one to interpret the 1D value of a sample by further breaking it down into the contributions of the different features in that sample. That is, 2D-Shapley gives insights into the relative importance of different features of a sample to the valuation result received by the sample. For example, in Figure 7A), we observe that two samples have similar 1D values and their cell values are also close. However, in Figure 7B), we observe a contrasting case: although both samples have close 1D values, their cell values are completely different. More detailed results can be found in Appendix F.3.

# 4.2. Sub-matrix Valuation

We turn to the application of 2D-Shapley to informing dataset pricing in the data marketplace. 2D-Shapley enables a principled method to value fragmented data sources, as illustrated in Figure 1(c), where each source is a sub-matrix of the aggregated training data matrix. A reasonable measure of a source's value should reflect its usefulness for ML. Hence, to verify the significance of the resulting values for sub-matrix valuation, we measure the performance of a model trained on a source and examine the correlation between the source's value and that performance. For this experiment, we use the Credit dataset with sources contributing fragmented data and consider multiple random splits of the dataset. The results are provided in Figure 6, where each line corresponds to a different split of the aggregate data into individual sources. Figure 6 shows that as the performance of the model trained on a block increases, the corresponding 2D-Shapley block value also increases.

# 5. Conclusion

This work aims to set the theoretical foundation for more realistic data valuation application scenarios.
In particular, we investigate the block valuation problem and present 2D-Shapley, a new data value notion suited to this problem. 2D-Shapley empowers a range of new use cases, such as informing the pricing of fragmented data, strategic data selection at a fine-grained scale, and interpreting 1D valuation results. Our work opens up many new avenues for future investigation. First, we can immediately adapt our proof technique to prove two-dimensional generalizations of other typical data value notions (Kwon & Zou, 2022; Wang & Jia, 2023). Second, it is interesting to build upon our framework to evaluate irregularly shaped data sources (Fang et al., 2019) and to incorporate label information for joint valuation in a principled way.

# Acknowledgements

Xiangyu Chang's work was partly supported by the National Natural Science Foundation for Outstanding Young Scholars of China under Grant 72122018 and partly by the Natural Science Foundation of Shaanxi Province under Grant 2021JC-01. Xi Chen would like to thank the support from NSF via the Grant IIS-1845444.

# References

Covert, I., Lundberg, S. M., and Lee, S.-I. Understanding global feature contributions with additive importance measures. Advances in Neural Information Processing Systems, 33:17212-17223, 2020.
Du, X., Wang, Z., Cai, M., and Li, Y. VOS: Learning what you don't know by virtual outlier synthesis. arXiv preprint arXiv:2202.01197, 2022.
Dua, D. and Graff, C. UCI machine learning repository, 2017. URL http://archive.ics.uci.edu/ml.
Fang, F., Lan, W., Tong, J., and Shao, J. Model averaging for prediction with fragmentary data. Journal of Business & Economic Statistics, 37(3):517-527, 2019.
Ghorbani, A. and Zou, J. Data shapley: Equitable valuation of data for machine learning. In International Conference on Machine Learning, pp. 2242-2251. PMLR, 2019.
Jia, R., Dao, D., Wang, B., Hubis, F. A., Gürel, N. M., Li, B., Zhang, C., Spanos, C. J., and Song, D.
Efficient task-specific data valuation for nearest neighbor algorithms. Proceedings of the VLDB Endowment, 12(11):1610-1623, 2019a.
Jia, R., Dao, D., Wang, B., Hubis, F. A., Hynes, N., Gurel, N. M., Li, B., Zhang, C., Song, D., and Spanos, C. J. Towards efficient data valuation based on the Shapley value. In The 22nd International Conference on Artificial Intelligence and Statistics, pp. 1167-1176. PMLR, 2019b.
Just, H. A., Kang, F., Wang, J. T., Zeng, Y., Ko, M., Jin, M., and Jia, R. LAVA: Data valuation without pre-specified learning algorithms. In International Conference on Learning Representations, 2023.
Karlas, B., Dao, D., Interlandi, M., Li, B., Schelter, S., Wu, W., and Zhang, C. Data debugging with Shapley importance over end-to-end machine learning pipelines. arXiv preprint arXiv:2204.11131, 2022.
Kwon, Y. and Zou, J. Beta Shapley: a unified and noise-reduced data valuation framework for machine learning. In International Conference on Artificial Intelligence and Statistics, pp. 8780-8802. PMLR, 2022.
Lin, J., Zhang, A., Lécuyer, M., Li, J., Panda, A., and Sen, S. Measuring the effect of training data on deep learning predictions via randomized experiments. In International Conference on Machine Learning, pp. 13468-13504. PMLR, 2022.
Lundberg, S. M. and Lee, S.-I. A unified approach to interpreting model predictions. Advances in Neural Information Processing Systems, 30:4768-4777, 2017.
Roth, A. E. The Shapley value: essays in honor of Lloyd S. Shapley. Cambridge University Press, 1988.
Shapley, L. S. A value for n-person games. Contributions to the Theory of Games, 2(28):307-317, 1953.
Shapley, L. S. A value for n-person games. Classics in game theory, 69, 1997.
Strumbelj, E. and Kononenko, I. An efficient explanation of individual classifications using game theory. Journal of Machine Learning Research, 11:1-18, 2010.
Tang, S., Ghorbani, A., Yamashita, R., Rehman, S., Dunnmon, J. A., Zou, J., and Rubin, D. L.
Data valuation for medical imaging using Shapley value and application to a large-scale chest X-ray dataset. Scientific Reports, 11(1):1-9, 2021.
Tay, S. S., Xu, X., Foo, C. S., and Low, B. K. H. Incentivizing collaboration in machine learning via synthetic data rewards. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 36, pp. 9448-9456, 2022.
Wang, J. T. and Jia, R. A robust data valuation framework for machine learning. In International Conference on Artificial Intelligence and Statistics. PMLR, 2023.
Wang, T. and Jia, R. Data banzhaf: A data valuation framework with maximal robustness to learning stochasticity. arXiv preprint arXiv:2205.15466, 2022.
Wu, M., Jia, R., Huang, W., Chang, X., et al. Robust data valuation via variance reduced data shapley. arXiv preprint arXiv:2210.16835, 2022a.
Wu, Z., Shu, Y., and Low, B. K. H. Davinz: Data valuation using deep neural networks at initialization. In International Conference on Machine Learning, pp. 24150-24176. PMLR, 2022b.
Yan, T. and Procaccia, A. D. If you like shapley then you'll love the core. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 35, pp. 5751-5759, 2021.
Yona, G., Ghorbani, A., and Zou, J. Who's responsible? jointly quantifying the contribution of the learning algorithm and data. In Proceedings of the 2021 AAAI/ACM Conference on AI, Ethics, and Society, pp. 1034-1041, 2021.
Yoon, J., Arik, S., and Pfister, T. Data valuation using reinforcement learning. In International Conference on Machine Learning, pp. 10842-10851. PMLR, 2020.

# 2D-Shapley: A Framework for Fragmented Data Valuation Supplementary Materials

# A. Proof of the fact that Axiom 3 is implied by its explanation

The explanation is: for $i_1, i_2 \in N$ , $j_1, j_2 \in M$ , if for any $S \subseteq N \setminus \{i_1, i_2\}$ and $F \subseteq M$ , $h(S \cup i_1, F) = h(S \cup i_2, F)$ , and for any $S \subseteq N$ and $F \subseteq M \setminus \{j_1, j_2\}$ , $h(S, F \cup j_1) = h(S, F \cup j_2)$ , then $\psi_{i_1j_1}(h) = \psi_{i_2j_2}(h)$ .

We prove in three steps that Axiom 3 is implied by its explanation; the converse direction is handled at the end. Note that we assume Axioms 1, 2 and 4 already hold. For simplicity, we use a lowercase letter to denote the cardinality of a set; for example, $|S| = s$ .

We want to prove the following proposition.

Proposition A.1. If Axioms 1, 2 and 4 hold, then Axiom 3 is equivalent to its explanation.

Proof. For the direction that Axiom 3 is implied by its explanation, we proceed in three steps.

- Step 1: Define a utility function $h_{S,F}$ :

$$
h_{S,F}(W_{1}, W_{2}) = \left\{ \begin{array}{ll} 1, & \text{if } S \subseteq W_{1}, F \subseteq W_{2}, \\ 0, & \text{otherwise}. \end{array} \right. \tag{13}
$$

For fixed $S \subseteq N$ , $F \subseteq M$ and $i_1, i_2 \in S$ , $j_1, j_2 \in F$ and for all $W_1 \subseteq N \setminus \{i_1, i_2\}$ , $W_2 \subseteq M \setminus \{j_1, j_2\}$ , we have $M_{h_{S,F}}^{i_1,j_1}(W_1, W_2) = M_{h_{S,F}}^{i_2,j_2}(W_1, W_2)$ . This leads to the conclusion that $\psi_{i_1j_1}(h_{S,F}) = \psi_{i_2j_2}(h_{S,F})$ according to the explanation.

For $i^* \notin S$ , $j \in M$ (or $j^* \notin F$ , $i \in N$ ) and $W_1 \subseteq N \setminus i^*$ , $W_2 \subseteq M \setminus j$ (or $W_1 \subseteq N \setminus i$ , $W_2 \subseteq M \setminus j^*$ ), we have $M_{h_{S,F}}^{i^*,j}(W_1, W_2) = 0$ (resp. $M_{h_{S,F}}^{i,j^*}(W_1, W_2) = 0$ ). This leads to the conclusion that $\psi_{i^*j}(h_{S,F}) = 0$ for all $j \in M$ (resp. $\psi_{ij^*}(h_{S,F}) = 0$ for all $i \in N$ ) according to Axiom 2.
In summary, we conclude that the values $\psi_{ij}$ are all equal when $i\in S,j\in F$ , and are zero otherwise. According to Axiom 4,

$$
1 = h_{S,F}(N,M) = \sum_{\substack{i\in N\\ j\in M}}\psi_{ij}(h_{S,F}) = \sum_{\substack{i\in S\\ j\in F}}\psi_{ij}(h_{S,F}),
$$

so $\psi_{ij}(h_{S,F}) = 1 / sf$ for $i \in S, j \in F$ .

- Step 2: We prove a lemma which gives another representation of a utility $h$ in terms of the $h_{S,F}$ defined above.

Lemma A.2.

$$
h = \sum_{\substack{S\subseteq N\\ F\subseteq M}}C_{S,F}(h)h_{S,F},
$$

where $C_{S,F}(h) = \sum_{\substack{S' \subseteq S \\ F' \subseteq F}} (-1)^{s + f - s' - f'} h(S',F')$ .

Proof. We can verify the lemma directly:

$$
\begin{array}{l} h(W_{1},W_{2}) = \sum_{\substack{S\subseteq N\\ F\subseteq M}}C_{S,F}(h)h_{S,F}(W_{1},W_{2}) \\ = \sum_{\substack{S\subseteq W_{1}\\ F\subseteq W_{2}}}\sum_{\substack{S^{\prime}\subseteq S\\ F^{\prime}\subseteq F}}(-1)^{s + f - s^{\prime} - f^{\prime}}h(S^{\prime},F^{\prime}) \\ = \sum_{\substack{S^{\prime}\subseteq W_{1}\\ F^{\prime}\subseteq W_{2}}}\bigl[\sum_{s = s^{\prime}}^{w_{1}}(-1)^{s - s^{\prime}}\binom {w_{1} - s^{\prime}}{s - s^{\prime}}\sum_{f = f^{\prime}}^{w_{2}}(-1)^{f - f^{\prime}}\binom {w_{2} - f^{\prime}}{f - f^{\prime}}\bigr ]h(S^{\prime},F^{\prime}) \\ = h \left(W _ {1}, W _ {2}\right). \\ \end{array}
$$

- Step 3: Combining the first two steps, and by Axiom 1,

$$
\begin{array}{l} \psi_{ij}(h) = \sum_{\substack{S\subseteq N\\ F\subseteq M}}C_{S,F}(h)\psi_{ij}(h_{S,F}) \\ = \sum_{\substack{i\in S\subseteq N\\ j\in F\subseteq M}}C_{S,F}(h) / sf.
\\ \end{array}
$$

Let $\pi_1, \pi_2$ be two permutations of $N$ and $M$ respectively; then

$$
\begin{array}{l} \psi_{\pi_{1}(i)\pi_{2}(j)}(\pi_{1}\pi_{2}h) = \sum_{\substack{\pi_{1}(i)\in S\subseteq N\\ \pi_{2}(j)\in F\subseteq M}}C_{S,F}(\pi_{1}\pi_{2}h) / sf \\ = \sum_{\substack{i\in \pi_{1}(S)\subseteq N\\ j\in \pi_{2}(F)\subseteq M}}C_{\pi_{1}(S),\pi_{2}(F)}(h) / sf \\ = \psi_ {i j} (h). \\ \end{array}
$$

For the other direction, that Axiom 3 implies its explanation: since Axioms 1, 2, 3 and 4 are assumed to hold, we have the formula of 2D-Shapley, that is, Eq. (22). Clearly, under the hypothesis of the explanation, the bracketed marginal term is the same for both $i_1j_1$ and $i_2j_2$ under the same $S$ and $F$ ; hence $\psi_{i_1j_1}(h) = \psi_{i_2j_2}(h)$ .

# B. Proof of the representation theory of 2D-Shapley

In this section, we justify the representation theory through a sequence of lemmas. The proof adds the axioms one by one and shows what each axiom contributes to 2D-Shapley. We first add the linearity and dummy axioms to obtain a sum of weighted marginals.

Lemma B.1. For any value $\psi_{ij}$ satisfying the 2d-linearity and 2d-dummy axioms (Axioms 1 and 2), we have

$$
\begin{array}{l} \psi_{ij}(h) = \sum_{\substack{S \subseteq N \setminus i \\ F \subseteq M \setminus j}} p_{S,F}^{ij} [ h(S \cup i, F \cup j) + h(S, F) \\ - h(S \cup i, F) - h(S, F \cup j) ], \tag{14} \\ \end{array}
$$

where $\sum_{S\subseteq N\setminus i}\sum_{F\subseteq M\setminus j}p_{S,F}^{ij} = 1$ .

Proof. For any $h \in G$ ,

$$
h = \sum_{\substack{S \subseteq N \\ F \subseteq M}} h(S, F) W_{S,F}, \tag{15}
$$

where

$$
W_{S,F}(W_{1}, W_{2}) = \left\{ \begin{array}{ll} 1, & \text{if } W_{1} = S, W_{2} = F, \\ 0, & \text{otherwise}. \end{array} \right.
$$

By the 2d-linearity axiom,

$$
\psi_{ij}(h) = \sum_{\substack{S\subseteq N\\ F\subseteq M}}h(S,F)\psi_{ij}(W_{S,F}).
$$

Now define another utility function $W_{S,F}^{\prime}$ :

$$
W_{S,F}^{\prime}(W_{1}, W_{2}) = \left\{ \begin{array}{ll} 1, & \text{if } S \subseteq W_{1}, F = W_{2}, \\ 0, & \text{otherwise}. \end{array} \right.
$$

For any $S \subseteq N \setminus i$ and $F \subseteq M \setminus j$ , we can check that block $(i, j)$ is a dummy for $W_{S,F}^{\prime}$ ; then by the 2d-dummy axiom, $\psi_{ij}(W_{S,F}^{\prime}) = 0$ . In particular, letting $S = N \setminus i$ for any fixed $F^{\prime} \subseteq M \setminus j$ , we have:

$$
\psi_{ij}\left(W_{N, F^{\prime}}\right) + \psi_{ij}\left(W_{N \setminus i, F^{\prime}}\right) = 0.
$$

For inductive purposes, assume it has been shown that $\psi_{ij}(W_{S,F'}) + \psi_{ij}(W_{S\cup i,F'}) = 0$ for fixed $F^{\prime}\subseteq M\setminus j$ and every $S\subseteq N\setminus i$ with $|S|\geq k\geq 2$ . (The case $k = n - 1$ has been proved.) Now take a fixed $S\subseteq N\setminus i$ with $|S| = k - 1$ ; then

$$
\begin{array}{l} 0 = \psi_{ij}\left(W_{S, F^{\prime}}^{\prime}\right) = \sum_{S \subseteq S_{1} \subseteq N} \psi_{ij}\left(W_{S_{1}, F^{\prime}}\right) \\ = \psi_{ij}(W_{S\cup i,F^{\prime}}) + \psi_{ij}(W_{S,F^{\prime}}) + \sum_{\substack{S_{1}\subseteq N\setminus i\\ S\subsetneq S_{1}}}[\psi_{ij}(W_{S_{1}\cup i,F^{\prime}}) + \psi_{ij}(W_{S_{1},F^{\prime}})] \\ = \psi_{ij}\left(W_{S \cup i, F^{\prime}}\right) + \psi_{ij}\left(W_{S, F^{\prime}}\right). \\ \end{array}
$$

Therefore, $\psi_{ij}(W_{S\cup i,F'}) + \psi_{ij}(W_{S,F'}) = 0$ for all $S\subseteq N\setminus i$ and fixed $F^{\prime}\subseteq M\setminus j$ with $0 < |S|\leq n - 1$ and $0 < |F^{\prime}|\leq m - 1$ .
Similarly, we obtain the conclusion that $\psi_{ij}(W_{S',F}) + \psi_{ij}(W_{S',F\cup j}) = 0$ for fixed $S^{\prime}\subseteq N\setminus i$ and all $F\subseteq M\setminus j$ with $0 < |S'| \leq n - 1$ and $0 < |F| \leq m - 1$ , by defining another similar utility function $W_{S',F}^{\prime}$ and repeating the process above.

Using the results above,

$$
\begin{array}{l} \psi_{ij}(h) = \sum_{\substack{S\subseteq N\\ F\subseteq M}}h(S,F)\psi_{ij}(W_{S,F}) \\ = \sum_ {F \subseteq M} \sum_ {S \subseteq N \backslash i} h (S \cup i, F) \psi_ {i j} \left(W _ {S \cup i, F}\right) + h (S, F) \psi_ {i j} \left(W _ {S, F}\right) \\ = \sum_ {S \subseteq N \backslash i} \sum_ {F \subseteq M} h (S \cup i, F) \psi_ {i j} \left(W _ {S \cup i, F}\right) - h (S, F) \psi_ {i j} \left(W _ {S \cup i, F}\right) \\ = \sum_ {S \subseteq N \backslash i} \sum_ {F \subseteq M \backslash j} \psi_ {i j} \left(W _ {S \cup i, F \cup j}\right) [ h (S \cup i, F \cup j) - h (S, F \cup j) ] \\ - \sum_ {S \subseteq N \backslash i} \sum_ {F \subseteq M \backslash j} \psi_ {i j} \left(W _ {S \cup i, F \cup j}\right) [ h (S \cup i, F) - h (S, F) ] \\ = \sum_ {S \subseteq N \backslash i} \sum_ {F \subseteq M \backslash j} \psi_ {i j} \left(W _ {S \cup i, F \cup j}\right) [ h (S \cup i, F \cup j) + h (S, F) \\ \left. - h (S, F \cup j) - h (S \cup i, F) \right]. \\ \end{array}
$$

For simplicity, denote $\psi_{ij}(W_{S\cup i,F\cup j})$ by $p_{S,F}^{ij}$ ; then

$$
\psi_{ij}(h) = \sum_{\substack{S \subseteq N \setminus i \\ F \subseteq M \setminus j}} p_{S,F}^{ij} \left[ h(S \cup i, F \cup j) + h(S, F) - h(S \cup i, F) - h(S, F \cup j) \right].
$$

Consider the utility function $h_{ij}$ :

$$
h_{ij}(W_{1}, W_{2}) = \left\{ \begin{array}{ll} 1, & \text{if } i \in W_{1}, j \in W_{2}, \\ 0, & \text{otherwise}. \end{array} \right.
$$

We can check that $(i,j)$ is a dummy for $h_{ij}$ and that $\psi_{ij}(h_{ij}) = 1$ .
Hence

$$
1 = \psi_{ij}(h_{ij}) = \sum_{\substack{S \subseteq N \setminus i \\ F \subseteq M \setminus j}} p_{S,F}^{ij}.
$$

Next, we add the 2d-symmetry axiom to Lemma B.1 and conclude that $p_{S,F}^{ij}$ depends only on the cardinalities of $S$ and $F$ , not on the identities of the blocks.

Lemma B.2. Assume Lemma B.1 holds. If $\psi_{ij}$ also satisfies the 2d-symmetry axiom, then

$$
p_{S,F}^{ij} = p_{s,f},
$$

where $p_{s,f}$ is some common value for $S \subseteq N \setminus i$ , $F \subseteq M \setminus j$ with $0 \leq |S| = s \leq n - 1$ , $0 \leq |F| = f \leq m - 1$ .

Proof. Define a utility $\hat{h}_{S,F}$ :

$$
\hat{h}_{S,F}(W_{1}, W_{2}) = \left\{ \begin{array}{ll} 1, & \text{if } S \subsetneq W_{1}, F \subsetneq W_{2}, \\ 0, & \text{otherwise}. \end{array} \right.
$$

1. For $i \in N$ and $j \in M$ , let $S_1, F_1$ and $S_2, F_2$ be any two pairs of coalitions with $S_1, S_2 \subseteq N \setminus i$ and $F_1, F_2 \subseteq M \setminus j$ , where $0 < |S_1| = |S_2| < n - 1$ and $0 < |F_1| = |F_2| < m - 1$ respectively. Consider two permutations $\pi_1$ and $\pi_2$ which satisfy $\pi_1(S_1) = S_2, \pi_1(i) = i$ and $\pi_2(F_1) = F_2, \pi_2(j) = j$ . Then,

$$
p_{S_{1}, F_{1}}^{ij} = \psi_{ij}(\hat{h}_{S_{1}, F_{1}}) = \psi_{ij}(\hat{h}_{S_{2}, F_{2}}) = p_{S_{2}, F_{2}}^{ij},
$$

where the central equality is a consequence of the 2d-symmetry axiom.

2. For distinct $i_1, i_2 \in N$ and $j_1, j_2 \in M$ , let $S \subseteq N \setminus \{i_1, i_2\}$ and $F \subseteq M \setminus \{j_1, j_2\}$ , and let the permutations $\pi_1, \pi_2$ respectively interchange $i_1, i_2$ and $j_1, j_2$ while leaving the other elements fixed.
Then,

$$
\pi_{1}\pi_{2}\hat{h}_{S,F} = \hat{h}_{S,F},
$$

$$
p_{S,F}^{i_{1}j_{1}} = \psi_{i_{1}j_{1}}(\hat{h}_{S,F}) = \psi_{i_{2}j_{2}}(\hat{h}_{S,F}) = p_{S,F}^{i_{2}j_{2}},
$$

where the central equality is a consequence of the 2d-symmetry axiom. Combining this with the result of Step 1, we find that for every $0 < s < n - 1$ and $0 < f < m - 1$ there is a $p_{s,f}$ such that $p_{S,F}^{ij} = p_{s,f}$ for every $i \in N$ and $j \in M$ , $S \subseteq N \setminus i$ and $F \subseteq M \setminus j$ with $|S| = s$ , $|F| = f$ .

3. Similarly, by using different utility functions, we can find, for all $i\in N$ and $j\in M$ :

- a $p_{n - 1,f}$ such that $p_{N\setminus i,F}^{ij} = p_{n - 1,f}$ for $F\subseteq M\setminus j$ and $0\leq |F| = f < m - 1$ ,
- a $p_{s,m-1}$ such that $p_{S,M\setminus j}^{ij} = p_{s,m-1}$ for $S \subseteq N\setminus i$ and $0 \leq |S| = s < n - 1$ ,
- a $p_{0,f}$ such that $p_{\emptyset, F}^{ij} = p_{0,f}$ for $F \subseteq M \setminus j$ and $0 < |F| = f < m - 1$ ,
- a $p_{s,0}$ such that $p_{S,\emptyset}^{ij} = p_{s,0}$ for $S \subseteq N \setminus i$ and $0 < |S| = s < n - 1$ ,
- a $p_{n - 1,m - 1}$ such that $p_{N\setminus i,M\setminus j}^{ij} = p_{n - 1,m - 1}$ , and
- a $p_{0,0}$ such that $p_{\emptyset, \emptyset}^{ij} = p_{0,0}$ , which makes the sum of all the weights equal to 1.

Finally, we add the 2d-efficiency axiom and obtain the uniqueness of 2D-Shapley.

Lemma B.3. Assume Lemma B.1 holds.
Then $\psi_{ij}(h)$ satisfies the 2d-efficiency axiom if and only if

$$
\sum_{\substack{i \in N \\ j \in M}} p_{N \setminus i, M \setminus j}^{ij} = 1, \tag{16}
$$

$$
\sum_{\substack{i \in S \\ j \in F}} p_{S \setminus i, F \setminus j}^{ij} + \sum_{\substack{i \notin S \\ j \notin F}} p_{S, F}^{ij} - \sum_{\substack{i \notin S \\ j \in F}} p_{S, F \setminus j}^{ij} - \sum_{\substack{i \in S \\ j \notin F}} p_{S \setminus i, F}^{ij} = 0, \tag{17}
$$

where $S\subsetneq N$ or $F\subsetneq M$ .

Proof. On the one hand, by Eq. (16) and Eq. (17),

$$
\begin{array}{l} h(N,M) = \sum_{\substack{S\subseteq N\\ F\subseteq M}}h(S,F)[\sum_{\substack{i\in S\\ j\in F}}p_{S\setminus i,F\setminus j}^{ij} + \sum_{\substack{i\notin S\\ j\notin F}}p_{S,F}^{ij} - \sum_{\substack{i\notin S\\ j\in F}}p_{S,F\setminus j}^{ij} - \sum_{\substack{i\in S\\ j\notin F}}p_{S\setminus i,F}^{ij}] \\ = \sum_{\substack{i\in N\\ j\in M}}\sum_{\substack{S\subseteq N\setminus i\\ F\subseteq M\setminus j}}p_{S,F}^{ij}[h(S\cup i,F\cup j) + h(S,F) - h(S\cup i,F) - h(S,F\cup j)] \\ = \sum_{\substack{i\in N\\ j\in M}}\psi_{ij}(h). \\ \end{array}
$$

On the other hand, recall:

$$
\hat{h}_{S,F}(W_{1}, W_{2}) = \left\{ \begin{array}{ll} 1, & \text{if } S \subsetneq W_{1}, F \subsetneq W_{2}, \\ 0, & \text{otherwise}, \end{array} \right.
$$

and

$$
h_{S,F}(W_{1}, W_{2}) = \left\{ \begin{array}{ll} 1, & \text{if } S \subseteq W_{1}, F \subseteq W_{2}, \\ 0, & \text{otherwise}. \end{array} \right.
$$

Consider two new utility functions

$$
\tilde{h}_{S,F}(W_{1}, W_{2}) = \left\{ \begin{array}{ll} 1, & \text{if } S \subsetneq W_{1}, F \subseteq W_{2}, \\ 0, & \text{otherwise}, \end{array} \right.
$$

and

$$
\bar{h}_{S,F}(W_{1}, W_{2}) = \left\{ \begin{array}{ll} 1, & \text{if } S \subseteq W_{1}, F \subsetneq W_{2}, \\ 0, & \text{otherwise}. \end{array} \right.
$$

Then for any $S \subseteq N$ , $F \subseteq M$ ,

$$
\begin{array}{l} \sum_{\substack{i\in N\\ j\in M}}\psi_{ij}(h_{S,F}) + \sum_{\substack{i\in N\\ j\in M}}\psi_{ij}(\hat{h}_{S,F}) - \sum_{\substack{i\in N\\ j\in M}}\psi_{ij}(\tilde{h}_{S,F}) - \sum_{\substack{i\in N\\ j\in M}}\psi_{ij}(\bar{h}_{S,F}) \\ = \sum_{\substack{i\in S\\ j\in F}}p^{ij}_{S\setminus i,F\setminus j} + \sum_{\substack{i\notin S\\ j\notin F}}p^{ij}_{S,F} - \sum_{\substack{i\notin S\\ j\in F}}p^{ij}_{S,F\setminus j} - \sum_{\substack{i\in S\\ j\notin F}}p^{ij}_{S\setminus i,F}. \\ \end{array}
$$

When $S = N$ and $F = M$ ,

$$
\begin{array}{l} \sum_{\substack{i\in N\\ j\in M}}\psi_{ij}(h_{N,M}) + \sum_{\substack{i\in N\\ j\in M}}\psi_{ij}(\hat{h}_{N,M}) - \sum_{\substack{i\in N\\ j\in M}}\psi_{ij}(\tilde{h}_{N,M}) - \sum_{\substack{i\in N\\ j\in M}}\psi_{ij}(\bar{h}_{N,M}) \\ = h_{N,M}(N,M) + \hat{h}_{N,M}(N,M) - \tilde{h}_{N,M}(N,M) - \bar{h}_{N,M}(N,M) \\ = 1; \\ \end{array}
$$

otherwise,

$$
\begin{array}{l} \sum_{\substack{i\in N\\ j\in M}}\psi_{ij}(h_{S,F}) + \sum_{\substack{i\in N\\ j\in M}}\psi_{ij}(\hat{h}_{S,F}) - \sum_{\substack{i\in N\\ j\in M}}\psi_{ij}(\tilde{h}_{S,F}) - \sum_{\substack{i\in N\\ j\in M}}\psi_{ij}(\bar{h}_{S,F}) \\ = h_{S,F}(N,M) + \hat{h}_{S,F}(N,M) - \tilde{h}_{S,F}(N,M) - \bar{h}_{S,F}(N,M) \\ = 0. \\ \end{array}
$$

Hence, Eq. (16) and Eq. (17) are easily obtained.

Now, let us prove Theorem 3.3.

Proof of Theorem 3.3. By Lemma B.2,

$$
\begin{array}{l} \psi_{ij}(h) = \sum_{s = 0}^{n - 1}\sum_{f = 0}^{m - 1}\sum_{\substack{S\subseteq N\setminus i\\ |S| = s}}\sum_{\substack{F\subseteq M\setminus j\\ |F| = f}}p_{s,f}[h(S\cup i,F\cup j) + h(S,F) \\ \left. - h (S \cup i, F) - h (S, F \cup j) \right].
\\ \end{array} +$$ + +By Lemma B.1 and Lemma B.3, we have the following equations: + +$$ +\begin{array}{l} \sum_ {s = 0} ^ {n - 1} \sum_ {f = 0} ^ {m - 1} \binom {n - 1} {s} \binom {m - 1} {f} p _ {s, f} = 1, \\ s f \cdot p _ {s - 1, f - 1} + (n - s) (m - f) \cdot p _ {s, f} = (n - s) f \cdot p _ {s, f - 1} \\ + s (m - f) p _ {s - 1, f}, 1 \leq s \leq n - 1, 1 \leq f \leq m - 1, \tag {18} \\ \end{array} +$$ + +$$ +\begin{array}{l} (m - f) \cdot p _ {0, f} = f \cdot p _ {0, f - 1}, 1 \leq f \leq m - 1, \\ (n - s) \cdot p _ {s, 0} = s \cdot p _ {s - 1, 0}, 1 \leq s \leq n - 1, \\ n m \cdot p _ {n - 1, m - 1} = 1. \\ \end{array} +$$ + +Actually, we can omit the first equation and the conditions are: + +$$ +\begin{array}{l} s f \cdot p _ {s - 1, f - 1} + (n - s) (m - f) \cdot p _ {s, f} = (n - s) f \cdot p _ {s, f - 1} \\ + s (m - f) p _ {s - 1, f}, 1 \leq s \leq n - 1, 1 \leq f \leq m - 1, \\ (m - f) \cdot p _ {0, f} = f \cdot p _ {0, f - 1}, 1 \leq f \leq m - 1, \tag {19} \\ \left(n - s\right) \cdot p _ {s, 0} = s \cdot p _ {s - 1, 0}, 1 \leq s \leq n - 1, \\ n m \cdot p _ {n - 1, m - 1} = 1. \\ \end{array} +$$ + +Hence, we have $n \cdot m$ variables and $(m - 1)(n - 1) + (m - 1) + (n - 1) + 1 = n \cdot m$ equations. + +Eq. (19) has a solution: + +$$ +p _ {s, f} = \frac {s ! (n - s - 1) !}{n !} \cdot \frac {f ! (m - f - 1) !}{m !}. \tag {20} +$$ + +Therefore, + +$$ +\begin{array}{l} \psi_{ij}(h) = \sum_{s = 0}^{n - 1}\sum_{f = 0}^{m - 1}\sum_{\substack{S\subseteq N\setminus i\\ |S| = s}}\sum_{\substack{F\subseteq M\setminus j\\ |F| = f}}\frac{s!(n - s - 1)!}{n!}\cdot \frac{f!(m - f - 1)!}{m!}[h(S\cup i,F\cup j) + h(S,F) \\ \left. - h (S \cup i, F) - h (S, F \cup j) \right] \\ = \frac{1}{nm}\sum_{s = 1}^{n}\sum_{f = 1}^{m}\sum_{\substack{S\subseteq N\setminus i\\ |S| = s - 1}}\sum_{\substack{F\subseteq M\setminus j\\ |F| = f - 1}}\frac{(s - 1)!(n - s)!}{(n - 1)!}\cdot \frac{(f - 1)!(m - f)!}{(m - 1)!}[h(S\cup i,F\cup j) + h(S,F) \\ \left. 
- h (S \cup i, F) - h (S, F \cup j) \right] \\ = \frac {1}{n m} \sum_ {s = 1} ^ {n} \sum_ {f = 1} ^ {m} \frac {1}{\binom {n - 1} {s - 1} \binom {m - 1} {f - 1}} \sum_ {(S, F) \in D _ {s f} ^ {i j}} [ h (S \cup i, F \cup j) + h (S, F) - h (S \cup i, F) - h (S, F \cup j) ] \\ = \frac {1}{n m} \sum_ {s = 1} ^ {n} \sum_ {f = 1} ^ {m} \Delta_ {s f}. \\ \end{array}
$$

Now we prove that the solution in Eq. (20) is unique.

We rewrite Eq. (19) as a matrix equation of the form

$$
A \mathbf {x} = \mathbf {b},
$$

where

$$
\mathbf {x} ^ {T} = \left(p _ {0, 0}, p _ {0, 1}, \dots , p _ {0, m - 1}, p _ {1, 0}, p _ {1, 1}, \dots , p _ {1, m - 1}, \dots , p _ {n - 1, 0}, \dots , p _ {n - 1, m - 1}\right) _ {1 \times n m},
$$

$$
\mathbf {b} ^ {T} = (0, 0, 0, \dots , 0, 1) _ {1 \times n m},
$$

and

$$
\boldsymbol {A} = \begin{pmatrix} \boldsymbol {A} _ {1} \\ \boldsymbol {A} _ {2} \end{pmatrix} _ {n m \times n m}, \tag {21}
$$

where

$$
A _ {1} = \left( \begin{array}{c c c c c} A _ {(m - 1) \times m} ^ {0} & O _ {(m - 1) \times m} & \dots & \dots & O _ {(m - 1) \times m} \\ A _ {m \times m} ^ {1} & B _ {m \times m} ^ {1} & O _ {m \times m} & \dots & O _ {m \times m} \\ O _ {m \times m} & A _ {m \times m} ^ {2} & B _ {m \times m} ^ {2} & \dots & O _ {m \times m} \\ \vdots & \vdots & \ddots & \ddots & \vdots \\ O _ {m \times m} & O _ {m \times m} & \dots & A _ {m \times m} ^ {n - 1} & B _ {m \times m} ^ {n - 1} \end{array} \right) _ {(n m - 1) \times n m},
$$

$$
\boldsymbol {A} _ {2} = \left(0, 0, \dots , 0, n m\right) _ {1 \times n m}.
$$

and

$$
\boldsymbol {A} _ {(m - 1) \times m} ^ {0} = \left( \begin{array}{c c c c c c} 1 & - (m - 1) & 0 & \dots & \dots & 0 \\ 0 & 2 & - (m - 2) & 0 & \dots & 0 \\ 0 & 0 & 3 & - (m - 3) & \dots & 0 \\ \vdots & \vdots & \vdots & \ddots & \ddots & \vdots \\ 0 & 0 & 0 & \dots & m - 1 & - 1 \end{array} \right) _ {(m - 1) \times m},
$$

$$
\boldsymbol {A} _ {m \times m} ^ {j} = \left( \begin{array}{c c c c c c} j & 0 & 0 & \dots & \dots & 0 \\ j & - j \cdot (m - 1) & 0 & 0 & \dots & 0 \\ 0 & 2 j & - j \cdot (m - 2) & 0 & \dots & 0 \\ 0 & 0 & 3 j & - j \cdot (m - 3) & \dots & 0 \\ \vdots & \vdots & \vdots & \ddots & \ddots & \vdots \\ \vdots & \vdots & \vdots & \vdots & \vdots & \vdots \\ 0 & 0 & 0 & \dots & j \cdot (m - 1) & - j \end{array} \right) _ {m \times m}, 1 \leq j \leq n - 1,
$$

$$
\boldsymbol {B} _ {m \times m} ^ {j} = \left( \begin{array}{c c c c c c} - (n - j) & 0 & 0 & \dots & \dots & 0 \\ - (n - j) & (n - j) \cdot (m - 1) & 0 & 0 & \dots & 0 \\ 0 & - 2 \cdot (n - j) & (n - j) \cdot (m - 2) & 0 & \dots & 0 \\ 0 & 0 & - 3 \cdot (n - j) & (n - j) \cdot (m - 3) & \dots & 0 \\ \vdots & \vdots & \vdots & \ddots & \ddots & \vdots \\ \vdots & \vdots & \vdots & \ddots & \ddots & \vdots \\ 0 & 0 & 0 & \dots & - (m - 1) \cdot (n - j) & n - j \end{array} \right) _ {m \times m}, 1 \leq j \leq n - 1.
$$

For example, if $n = m = 3$ , then

$$
\boldsymbol {A} = \left( \begin{array}{c c c c c c c c c} 1 & - 2 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 2 & - 1 & 0 & 0 & 0 & 0 & 0 & 0 \\ \hline 1 & 0 & 0 & - 2 & 0 & 0 & 0 & 0 & 0 \\ 1 & - 2 & 0 & - 2 & 4 & 0 & 0 & 0 & 0 \\ 0 & 2 & - 1 & 0 & - 4 & 2 & 0 & 0 & 0 \\ \hline 0 & 0 & 0 & 2 & 0 & 0 & - 1 & 0 & 0 \\ 0 & 0 & 0 & 2 & - 4 & 0 & - 1 & 2 & 0 \\ 0 & 0 & 0 & 0 & 4 & - 2 & 0 & - 2 & 1 \\ \hline 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 9 \end{array} \right) _ {9 \times 9}
$$

We convert $\mathbf{A}$ to $\hat{\mathbf{A}}$ using elementary row and column transformations:

$$
\hat {\boldsymbol {A}} = \left( \begin{array}{c c c c c c c c c} 1 & - 2 & 0 & 2 & - 4 & 0 & 1 & - 2 & 0 \\ 0 & 2 & - 1 & 0 & 4 & - 2 & 0 & 2 & - 1 \\ \hline 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 1 & - 2 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 2 & - 1 & 0 & 0 & 0 & 0 & 0 & 0 \\ \hline 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 1 & - 2 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 2 & - 1 & 0 & 0 & 0 \\ \hline 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 \end{array} \right) _ {9 \times 9}
$$

Since elementary row and column transformations preserve rank,

$$
\operatorname {Rank} (\boldsymbol {A}) = \operatorname {Rank} (\hat {\boldsymbol {A}}).
$$

Consider the equation

$$
\hat {A} \mathbf {x} = \mathbf {0},
$$

whose only solution is $\mathbf{x} = \mathbf{0}$ ; hence

$$
\operatorname {Rank} (\mathbf {A}) = \operatorname {Rank} (\hat {\mathbf {A}}) = 9.
$$

In general, we can prove that $\operatorname{Rank}(\mathbf{A}) = nm$ holds for any $n \geq 1$ and $m \geq 1$ (apply elementary column transformations to $[A_{m \times m}^j, B_{m \times m}^j]$ within $\mathbf{A}$ in the order $j = 1, 2, \ldots, n - 1$ ). Hence the solution of Eq. (19) is unique and is given by Eq. (20). One can also check that Eq. (20) satisfies Eq. (18), so the solution of Eq. (18) is unique as well.

# C. Proof of Corollary 3.4

Proof.
We use the same technique as in the proof of Lemma B.3.

$$
\begin{array}{l} \psi_{i^{*}}^{1d}(h) = \sum_{j\in M}\sum_{\substack{S\subseteq N\setminus i\\ F\subseteq M\setminus j}}p_{s,f}[h(S\cup i,F\cup j) + h(S,F) - h(S\cup i,F) - h(S,F\cup j)] \\ = \sum_{\substack{S\subseteq N\setminus i\\ F\subseteq M}}h(S\cup i,F)[\sum_{j\in F}p_{s,f - 1} - \sum_{j\notin F}p_{s,f}] + h(S,F)[\sum_{j\notin F}p_{s,f} - \sum_{j\in F}p_{s,f - 1}] \\ = \sum_{\substack{S\subseteq N\setminus i\\ F\subseteq M}}(\sum_{j\in F}p_{s,f - 1} - \sum_{j\notin F}p_{s,f})[h(S\cup i,F) - h(S,F)] \\ = \sum_ {S \subseteq N \backslash i} \left(\sum_ {j \in M} p _ {s, m - 1}\right) [ h (S \cup i, M) - h (S, M) ]. \\ \end{array}
$$

Substituting Eq. (20) into the above equation yields the conclusion. A similar argument applies to $\psi_{j}^{1d}$ .

# D. Permutation-based 2D-Shapley Formulation

To compute 2D-Shapley more efficiently, we propose the following corollary.

Corollary D.1. Eq. (8) has an equivalent form as follows:

$$
\psi_ {i j} ^ {2 d} = \frac {1}{n m} \sum_ {\substack {S \subseteq N \backslash i \\ F \subseteq M \backslash j}} \frac {\left[ h (S \cup i , F \cup j) + h (S , F) - h (S \cup i , F) - h (S , F \cup j) \right]}{\binom {n - 1} {| S |} \binom {m - 1} {| F |}}, \tag{22}
$$

or

$$
\psi_ {i j} ^ {2 d} = \frac {1}{n ! m !} \sum_ {\substack {\pi_ {1} \in \Pi (N) \\ \pi_ {2} \in \Pi (M)}} [ h \left(P _ {i} ^ {\pi_ {1}} \cup i, P _ {j} ^ {\pi_ {2}} \cup j\right) + h \left(P _ {i} ^ {\pi_ {1}}, P _ {j} ^ {\pi_ {2}}\right) - h \left(P _ {i} ^ {\pi_ {1}} \cup i, P _ {j} ^ {\pi_ {2}}\right) - h \left(P _ {i} ^ {\pi_ {1}}, P _ {j} ^ {\pi_ {2}} \cup j\right) ], \tag{23}
$$

where $\Pi(A)$ denotes the set of all permutations of $A$ and $P_k^\pi$ the set of all elements of $A$ that precede $k \in A$ in the permutation $\pi \in \Pi(A)$ .

The formulation in Eq. (22) is a simple derivation from Eq. (8) that sums marginal contributions over all subsets.
In contrast, the second formulation in Eq. (23) sums over all sample and feature permutations: the marginal contribution of block $(i,j)$ is weighted by a coefficient that counts all orderings of samples appearing before and after sample $i$ and all orderings of features appearing before and after feature $j$ . This corollary gives a simple expression for 2D-Shapley, and using this equivalent formulation, we can design efficient algorithms for computing it.

# E. Algorithm Details

Here, we explain the implementation of the algorithms and explore ways to achieve efficient computation.

# E.1. Saving Computation in 2D-Shapley-MC

First, we focus on 2D-Shapley-MC. Apart from Monte Carlo sampling over both sample and feature permutations to reduce complexity, we also reduce the number of model trainings per counterfactual evaluation from the four suggested by Eq. (2) to one. Observe that the marginal contribution equation contains four utility terms, three of which have already been computed and can be reused. Take a pair $(i,j)$ as an example. For the marginal contribution of $(i,j)$ , we have four utility terms to compute: $h(S\cup i,F\cup j), h(S,F\cup j), h(S\cup i,F), h(S,F)$ . However, $h(S,F\cup j)$ was already computed for the pair $(i - 1,j)$ , $h(S\cup i,F)$ for the pair $(i,j - 1)$ , and $h(S,F)$ for $(i - 1,j - 1)$ . Therefore, by saving these evaluations, we can reduce the total number of model trainings by $75\%$ . Saving the model evaluations for every block might exhaust memory; however, we only need to keep the utilities of the previous and current rows (columns) if we loop horizontally downwards (vertically rightwards), which keeps memory usage efficient. Additionally, our algorithm can be parallelized: every permutation can be computed independently and combined at the last stage, which is the "while loop" in Algorithm 1.

# E.2.
Limitations of 2D-Shapley-MC and Possible Improvements

One limitation of the Monte Carlo method is its time complexity, which scales with the number of rows and columns of the aggregate data matrix. One way to improve the efficiency of 2D-Shapley-MC is to reduce the cost of model retraining. For example, there exist highly efficient methods for model retraining, such as FFCV [1,2], which has been applied in Datamodels [3] and can significantly reduce the computational cost. Another limitation is that 2D-Shapley-MC relies on the performance scores of models trained on different subsets to determine the cell values. However, these values are susceptible to noise due to training stochasticity when the learning algorithm is randomized (e.g., SGD) (Wang & Jia, 2022). To overcome these limitations, we proposed an efficient, nearest-neighbor-based method, 2D-Shapley-KNN, which involves no model training and only requires sorting data. With this method, we also avoid the problem of training stochasticity that 2D-Shapley-MC faces. Another advantage of 2D-Shapley-KNN is that it has an explicit formulation for sample values and only requires permuting over features. This method not only beats 2D-Shapley-MC by an order of magnitude in computational efficiency but is also straightforward to compute and requires only CPU resources.

# E.3. Saving Computation in 2D-Shapley-KNN

Apart from removing the dependency on sample permutations and all model training, the computation of 2D-Shapley-KNN can be reduced further. Similarly to 2D-Shapley-MC, we also save utility terms, as shown in Algorithm 2. For each pair $(i,j)$ , we need to compute $SV_{KNN}(i,P_j^\pi \cup j)$ and $SV_{KNN}(i,P_j^\pi)$ . However, the second term was already calculated for the feature immediately preceding $j$ in $\pi$ . Thus, we can reduce the total number of $SV_{KNN}$ evaluations by $50\%$ .
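The utility-caching trick described above can be sketched in a few lines. This is an illustrative toy, not the paper's implementation: `utility` is a stand-in for the real utility $h$ , which would train a model (2D-Shapley-MC) or run the KNN-Shapley routine (2D-Shapley-KNN), and the counter simply demonstrates how many distinct evaluations remain after caching.

```python
calls = 0

def utility(samples: frozenset, features: frozenset) -> float:
    """Stand-in for the utility h (e.g., test accuracy of a trained model)."""
    global calls
    calls += 1
    return len(samples) * len(features)

cache: dict = {}

def cached_utility(samples: frozenset, features: frozenset) -> float:
    # Each distinct (S, F) pair is evaluated only once; later marginal
    # contributions reuse the stored value instead of retraining.
    key = (samples, features)
    if key not in cache:
        cache[key] = utility(samples, features)
    return cache[key]

n, m = 4, 3
pi_N, pi_M = list(range(n)), list(range(m))  # one sample/feature permutation

for a, i in enumerate(pi_N):
    for b, j in enumerate(pi_M):
        S = frozenset(pi_N[:a])  # samples preceding i in the permutation
        F = frozenset(pi_M[:b])  # features preceding j
        delta = (cached_utility(S | {i}, F | {j}) + cached_utility(S, F)
                 - cached_utility(S | {i}, F) - cached_utility(S, F | {j}))

# Naively this needs 4*n*m = 48 utility evaluations; with caching, the terms
# already computed for (i-1, j), (i, j-1), and (i-1, j-1) are reused,
# leaving only (n+1)*(m+1) = 20 distinct evaluations.
print(calls)
```

Because the loop walks the permutation row by row, only the utilities of the previous and current rows actually need to be kept, matching the memory-saving observation in Section E.1.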
Algorithm 1 2D-Shapley-MC Valuation Algorithm.
Input: Training Set $D$ , Learning Algorithm $\mathcal{A}$ , Test Set $T$ , Utility Function $h$ .
Output: Sample-Feature 2D-Shapley Values $\psi^{2d}$ .
Initialize: $\forall i,j$ , $\psi_{ij}^{2d} = 0$ ; $t = 0$ .
while $\psi^{2d}$ not converged do
  $\pi_N \gets$ random sample permutation
  $\pi_M \gets$ random feature permutation
  $u \gets 0$ // utility matrix
  for $i,j$ in range $(\pi_N)$ , range $(\pi_M)$ do
    $s \gets \pi_N(i)$ , $f \gets \pi_M(j)$
    $u[s,f] \gets h\left(P_s^{\pi_N} \cup \{s\}, P_f^{\pi_M} \cup \{f\}\right)$
    $\psi_{sf}^{new} \gets u[s,f] + u[\pi_N(i - 1), \pi_M(j - 1)] - u[\pi_N(i), \pi_M(j - 1)] - u[\pi_N(i - 1), \pi_M(j)]$
    $\psi_{sf}^{2d} \gets \frac{t}{t + 1} \psi_{sf}^{2d} + \frac{1}{t + 1} \psi_{sf}^{new}$
  end for
  $t \gets t + 1$
end while

Algorithm 2 2D-Shapley-KNN Valuation Algorithm.
Input: Training Set $D$ , Test Set $T$ , Top $K$ .
Output: Sample-Feature 2D-Shapley Values $\psi^{2d}$ .
Initialize: $\forall i,j$ , $\psi_{ij}^{2d} = 0$ ; $t = 0$ .
while $\psi^{2d}$ not converged do
  $\pi_M \gets$ random feature permutation
  $u \gets 0$ // $SV_{KNN}$ values
  for $j$ in range $(\pi_M)$ do
    $f \gets \pi_M(j)$
    $u[f] \gets SV_{KNN}(N, P_f^{\pi_M} \cup \{f\}, T)$
    $\psi_{sf}^{new} \gets u[f]_s - u[\pi_M(j - 1)]_s$ for every sample $s$
    $\psi_{sf}^{2d} \gets \frac{t}{t + 1} \psi_{sf}^{2d} + \frac{1}{t + 1} \psi_{sf}^{new}$
  end for
  $t \gets t + 1$
end while

# E.4. Actual Runtime Complexity

Time complexity is an important aspect when evaluating the efficiency of algorithms. Here, we determine the runtime of our methods for different numbers of cell valuations on the Census dataset, until the values converge. Computing the runtime of exact 2D-Shapley poses a challenge: the number of permutations grows exponentially with the number of cells, making exact 2D-Shapley intractable to compute.
To address this, we estimate the exact 2D-Shapley runtime by measuring the runtime of a single permutation and scaling it by the total number of permutations required. As we observe in Table 1, 2D-Shapley-KNN exhibits exceptional efficiency compared to 2D-Shapley-MC across various cell valuations on the Census dataset. At 1,000 cell valuations, 2D-Shapley-KNN was at least 25 times faster than 2D-Shapley-MC, a substantial advantage. As the number of cells increases to 100,000, 2D-Shapley-KNN is approximately 300 times faster than 2D-Shapley-MC. These findings clearly establish the runtime advantage of 2D-Shapley-KNN over 2D-Shapley-MC. Moreover, both 2D-Shapley-KNN and 2D-Shapley-MC outperform the exact 2D-Shapley method in terms of runtime. These results highlight the effectiveness and practicality of our approach for computing 2D-Shapley in real-world cases.
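The extrapolation just described can be sketched as follows. This is an illustrative calculation only: we count the $n! \cdot m!$ permutation pairs of Eq. (23) (the counting behind Table 1's theoretical entries may differ), and the single-permutation time is a made-up number, not the paper's measurement.

```python
import math

def exact_runtime_estimate(t_single: float, n: int, m: int) -> float:
    """Extrapolate the exact 2D-Shapley runtime from one measured permutation.

    t_single: wall-clock seconds for one (sample, feature) permutation pair;
    n, m:     numbers of samples and features.
    Eq. (23) averages over all n! * m! permutation pairs.
    """
    return t_single * math.factorial(n) * math.factorial(m)

# Even a tiny 20 x 10 matrix is hopeless at 0.5 s per permutation pair:
t = exact_runtime_estimate(t_single=0.5, n=20, m=10)
print(f"{t:.2e} s")  # about 4.4e+24 seconds
```

The point of the sketch is only that the estimate is multiplicative, so a single benchmarked permutation suffices to bound the exact method's cost.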
| Method | 1K | 5K | 10K | 20K | 50K | 100K |
| --- | --- | --- | --- | --- | --- | --- |
| 2D Shapley-Exact (Theoretical) | 1.5E+301 s | 2.0E+1505 s | 2.8E+3010 s | 5.6E+6020 s | 4.4E+15051 s | 1.4E+30103 s |
| 2D-Shapley-MC | 280 s | 1,661 s | 3,127 s | 9,258 s | 17,786 s | 26,209 s |
| 2D-Shapley-KNN | 11 s | 25 s | 37 s | 44 s | 53 s | 88 s |
Table 1: Actual runtime comparison between 2D-Shapley methods.

# F. Implementation Details & Results

# F.1. Details on Datasets and Models

For our experiments, we use the following datasets from the UCI Machine Learning Repository (Dua & Graff, 2017):
| Dataset | Training Data | Test Data | Features |
| --- | --- | --- | --- |
| Census Income | 32561 | 16281 | 14 |
| Default of Credit Card Clients | 18000 | 12000 | 24 |
| Heart Failure | 512 | 513 | 13 |
| Breast Cancer Wisconsin (Original) | 242 | 241 | 10 |
| Wine Dataset | 106 | 72 | 13 |
Table 2: Details on datasets used in experiments.

In the Breast Cancer Wisconsin dataset, we removed "ID number" from the list of features, as it is irrelevant for model training.

For the methods that require model training (1D-Shapley, Random, and 2D-Shapley-MC), we used a decision tree classifier.

Empirically, we verified that for each of these methods the cell values converge within 500 permutations, which is the number of permutations we used when running them.

Because the datasets vary in size and number of features, we set a different number of cells to be removed at a time. For the bigger datasets, Census Income and Credit Default, we remove ten cells at a time; for the smaller Breast Cancer dataset, we remove one cell at a time.

# F.2. Additional Results on Sanity check of cell-wise values experiment

We provide results on additional datasets, Heart Failure and Wine, to demonstrate the effectiveness of 2D-Shapley in cell-wise valuation. We additionally include the 2D LOO baseline for comparison. As we can observe in Figure 8, the performance of 2D LOO is comparable to or worse than the Random baseline. One of the main reasons is that 2D LOO only valuates a cell's contribution when all other cells are present. This means that after the sequential removal of some cells, the values obtained from 2D LOO may no longer accurately represent the importance of the remaining cells. In contrast, our method computes a cell's value by averaging its contribution over various sample and feature subset sizes, which ensures that our cell values remain informative even after the sequential removal of a certain number of cells, thereby addressing the shortcomings of 2D LOO and leading to improved performance in cell-wise valuation.

![](images/78ecdb3b8470b31be2b49d4158e32795aae2a83bfb03532dc8c37f452f4cd9e8.jpg)
Figure 8: 2D-Shapley values for benign patients in the original breast cancer dataset. The green border denotes a cell before an outlier value has been injected to that cell.

# F.3. Additional Details and Results on Fine-Grained Outlier Localization experiment

# F.3.1. Outlier Value Generation

Our outlier generation technique is inspired by (Du et al., 2022). Specifically, for a random cell with sample index $i$ and feature index $j$ , we generate an outlier value based on its feature $j$ . We first estimate the distribution of feature $j$ and then sample a value from a low-probability-density region, below $5\%$ in our experiment.

# F.3.2. Heatmaps Comparison

To better understand the detection rate of outlier values, we visualize them through heatmaps. In Figure 9, we provide a 2D-Shapley heatmap of the original dataset before outlier injection and compare it with the 2D-Shapley heatmap after injecting outliers in Figure 10. For layout reasons, we transpose the heatmaps, so the rows represent features and the columns denote samples.

We observe on the breast cancer dataset that the cells with injected outliers have changed their values and lie mostly in the lower range of 2D-Shapley values. However, we also notice that other cells are affected by the outliers and that the overall range of values has increased in both directions.

In addition, we present a heatmap of the dataset with injected outliers generated by 1D-Shapley to provide insights into the 1D-Shapley detection performance shown in Figure 5A). As we can observe in the 1D-Shapley heatmap in Figure 11, the values of the injected outliers are scattered, which explains why the detection rate of 1D-Shapley was suboptimal.

# F.3.3. Ablation Study on the Budget of Inserted Outliers

In Figure 5A), we injected outlier values into $2\%$ of the total cells. Here, we explore whether our 2D-Shapley method can still detect outliers under various amounts of injected outliers.
Thus, we randomly inject $1\%$ , $2\%$ , $5\%$ , $10\%$ , and $15\%$ outlier values into the original breast cancer dataset and plot the detection rate.

As we observe in Figure 12, the detection rate of outliers is very high within the first 200 inspected cells for every injection rate. Further, we observe that the detection rate decreases slightly as more outliers are added to the dataset. This is reasonable: the more outliers we inject into the dataset, the less uncommon they become.

![](images/d14ff787e122c8da6b7e6179c789ec124fa02ea2680d1dca1a1475f6588c0b84.jpg)
Figure 9: 2D-Shapley values for benign patients in the original breast cancer dataset. The green border denotes a cell before an outlier value has been injected to that cell.

![](images/5a98fd985c9b0af6ca37951a60d171884c0147fc48871726b7d35f1d6f52652e.jpg)
Figure 10: 2D-Shapley values for benign patients in the breast cancer dataset with randomly inserted outliers. The green border denotes a cell after an outlier value has been injected to that cell.

# F.4. Additional Details on Sub-matrix Valuation experiment

For the plots in Figure 6, we randomly split the Credit Default dataset into blocks. One such random split is pictured in Figure 13. We randomly moved the horizontal and vertical lines and permuted the rows and columns separately to create different block splits.

# F.5. Hardware

In this work, we used an 8-core Intel Xeon E5-2620 v4 @ 2.20 GHz CPU server as our hardware platform.

# F.6. Code

The code repository is available at https://github.com/ruoxi-jia-group/2dshapley.

![](images/96a2b9d4fd8064a9264eb5f6695551812967822a3cca26d91ec92e0c82b300af.jpg)
Figure 11: 1D-Shapley values for benign patients in the breast cancer dataset with randomly inserted outliers. The green border denotes a cell after an outlier value has been injected to that cell.
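The outlier-generation procedure of Section F.3.1 can be sketched as follows. Everything here is an illustrative assumption: a histogram stands in for the (unspecified) density estimator, and rejection sampling finds a value whose estimated density falls below the 5% quantile of the densities observed at the data points.

```python
import numpy as np

rng = np.random.default_rng(0)

def inject_outlier(column: np.ndarray, density_quantile: float = 0.05) -> float:
    """Sample a value from a low-probability-density region of one feature."""
    hist, edges = np.histogram(column, bins=20, density=True)

    def density(v: float) -> float:
        if v < edges[0] or v > edges[-1]:
            return 0.0  # outside the observed range: density estimate is zero
        k = min(np.searchsorted(edges, v, side="right") - 1, len(hist) - 1)
        return float(hist[k])

    # Densities estimated at the data points define the "low-density" cutoff.
    threshold = np.quantile([density(v) for v in column], density_quantile)
    lo, hi = column.min(), column.max()
    span = hi - lo
    while True:  # rejection-sample until we land in a low-density region
        candidate = rng.uniform(lo - 0.5 * span, hi + 0.5 * span)
        if density(candidate) <= threshold:
            return float(candidate)

feature = rng.normal(loc=3.0, scale=1.0, size=500)  # synthetic feature column
outlier = inject_outlier(feature)
```

For a roughly Gaussian feature, the accepted value almost always lies in the tails, which matches the intuition that injected outliers should be rare under the feature's own distribution.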
![](images/b69eb6335dbaca03de64bcc0f30ad89776be827a8cd74f1c3ee62bf7e3ebf66b.jpg)
Figure 12: 2D-Shapley detection rate of randomly inserted outliers in the breast cancer dataset over various injection rates.

![](images/dd0f25ee93b07ac12481f86ce97b6065fa0ec58c83c85ff2c6087071ddd1fc07.jpg)
Figure 13: An example of a dataset split into blocks.
# A Category-theoretical Meta-analysis of Definitions of Disentanglement

Yivan Zhang $^{12}$ Masashi Sugiyama $^{21}$

# Abstract

Disentangling the factors of variation in data is a fundamental concept in machine learning and has been studied in various ways by different researchers, leading to a multitude of definitions.
Despite the numerous empirical studies, more theoretical research is needed to fully understand the defining properties of disentanglement and how different definitions relate to each other. This paper presents a meta-analysis of existing definitions of disentanglement, using category theory as a unifying and rigorous framework. We propose that the concepts of the cartesian and monoidal products should serve as the core of disentanglement. With these core concepts, we show the similarities and crucial differences in dealing with (i) functions, (ii) equivariant maps, (iii) relations, and (iv) stochastic maps. Overall, our meta-analysis deepens our understanding of disentanglement and its various formulations and can help researchers navigate different definitions and choose the most appropriate one for their specific context. + +# 1. Introduction + +Disentanglement, in machine learning, refers to the ability to identify and separate the underlying factors that contribute to a particular variation in data (Bengio et al., 2013). It is a process of breaking down a complex phenomenon into simpler components. It has been suggested that disentangled representation learning is a promising way toward reliable, interpretable, and data-efficient machine learning (Locatello et al., 2019; Montero et al., 2020; Dittadi et al., 2021). + +Because disentanglement is an important concept, many researchers have approached this problem from different angles, resulting in various definitions, metrics, methods, + +1The University of Tokyo, Tokyo, Japan 2RIKEN AIP, Tokyo, Japan. Correspondence to: Yivan Zhang . + +Proceedings of the $40^{th}$ International Conference on Machine Learning, Honolulu, Hawaii, USA. PMLR 202, 2023. Copyright 2023 by the author(s). + +and models. Some definitions are based on the intuition that: (1. modularity) a change in one factor should lead to a change in a single code; (2. compactness/completeness) a factor should be associated with only one code; and (3. 
explicitness/informativeness) the code should be able to predict the factor (Ridgeway & Mozer, 2018; Eastwood & Williams, 2018). Another line of research is based on group theory and representation theory (Cohen & Welling, 2014; 2015; Higgins et al., 2018), where the mapping from the data to the code is required to be equivariant to product group actions, preserving the product structure of automorphisms (a.k.a. symmetries). Meanwhile, information theory (Chen et al., 2018) and invariance (Higgins et al., 2017) also play an important role in characterizing disentanglement.

Then why do we want to conduct a meta-analysis? Because we study the theories and techniques of disentanglement, yet our definitions of it are quite entangled. Although large-scale experimental studies exist (Locatello et al., 2019), theoretical analyses and systematic comparisons are limited (Sepliarskaia et al., 2019; Carbonneau et al., 2022). Several important questions remain to be answered:

- What are the defining properties of disentanglement?
- What operations and structures are essential, and what are specific to the task?
- Given two definitions or metrics, does one imply the other in any situation?
- Are the existing algebraic and statistical approaches compatible with one another?

Things quickly become complicated without an abstract language to describe existing results.

Category theory (Borceux, 1994; Awodey, 2006; Leinster, 2014) is particularly suitable for designing and organizing a system of this level of complexity. It has found applications in many scientific fields (Baez, 2017; Bradley, 2018; Fong & Spivak, 2019), and recently also in machine learning (Gavranović, 2019; de Haan et al., 2020; Shiebler et al., 2021; Dudzik & Veličković, 2022). In this work, we aim to disentangle the definitions of disentanglement from a categorical perspective.
In Section 2, we first introduce the essential concepts of the cartesian product and monoidal product, which we argue should be the core of disentanglement. Next, we look into the requirements based on examples and counterexamples through Sections 3 to 6. We use the categories of (1. Set) sets and functions to define the concepts of modularity and explicitness as the defining properties of disentanglement (Ridgeway & Mozer, 2018); (2. [S, Set]) functors and natural transformations to generalize to actions of an algebra (monoid, group, etc.) and equivariant maps (Higgins et al., 2018); (3. Rel) sets and relations as an example of a symmetric monoidal category; and (4. Stoch) measurable spaces and stochastic maps to introduce the concept of the Markov category (Fritz, 2020) and explain how we should use the copy/delete/projection operations to characterize disentanglement. A full-blown example is given at the end.

It is worth clarifying that this paper does not discuss metrics, models, methods, supervision, or learnability. Also, our contribution is not to category theory itself, as the math we use is not new. However, our work shows how category theory can transfer and integrate knowledge across disciplines and how abstract definitions can simplify a complex system (Baez, 2017). We hope our work is an initial step toward a full understanding of disentanglement.

# 2. Product: Core of Disentanglement

In this section, we briefly review two important categorical concepts, the cartesian product and the monoidal product, which form the core of disentanglement. We omit many basic concepts such as the category, functor, natural transformation, and monad. Note that we frequently use commutative diagrams (Awodey, 2006) and string diagrams (Selinger, 2010) as graphical calculus (see Appendix A.1).

# 2.1. Cartesian Category

Let us dive into the definition of the (cartesian) product:

Definition 1 (Product).
In any category $\mathbf{C}$ , a product of two objects $A$ and $B$ is an object $A \times B$ , together with two morphisms $A \xleftarrow{p_1} A \times B \xrightarrow{p_2} B$ , called projections, satisfying the universal property:

$$
\begin{array}{ccccc}
 & & C & & \\
 & {\scriptstyle f_1} \swarrow & \big\downarrow {\scriptstyle \langle f_1, f_2 \rangle} & \searrow {\scriptstyle f_2} & \\
A & \xleftarrow[p_1]{} & A \times B & \xrightarrow[p_2]{} & B
\end{array} \tag{1}
$$

Given any object $C$ and morphisms $A \xleftarrow{f_1} C \xrightarrow{f_2} B$ , there exists a unique morphism $\langle f_1, f_2 \rangle : C \to A \times B$ , called a pairing of $f_1$ and $f_2$ , such that $f_1 = p_1 \circ \langle f_1, f_2 \rangle$ and $f_2 = p_2 \circ \langle f_1, f_2 \rangle$ .

The gist is that any morphism $C \xrightarrow{f} A \times B$ to a product is merely a pair of component morphisms $A \xleftarrow{f_1} C \xrightarrow{f_2} B$ , and all such morphisms arise this way. However, note that a morphism $A \times B \to C$ from a product can depend on both components.

We will need the following definitions and properties:

- The product morphism of $f: A \to C$ and $g: B \to D$ is defined as $f \times g: A \times B \to C \times D := \langle f \circ p_1, g \circ p_2 \rangle$ , which makes the product $\times: \mathbf{C} \times \mathbf{C} \to \mathbf{C}$ a bifunctor.
- The diagonal morphism of an object $A$ is defined as $\Delta_A: A \to A \times A \coloneqq \langle \mathrm{id}_A, \mathrm{id}_A \rangle$ , which "duplicates" $A$ .
- The terminal object $1$ , if it exists, is the unit of the product: for any object $A$ , there is a unique terminal morphism $e_A : A \to 1$ , which "deletes" $A$ , and $A \times 1 \cong A \cong 1 \times A$ .
- The product is associative up to isomorphism $\alpha_{A,B,C}:(A\times B)\times C\cong A\times (B\times C):=\langle p_1\circ p_1,p_2\times\mathrm{id}_C\rangle$ , which allows us to define products $\prod_{i = 1}^{N}A_{i} = A_{1}\times \dots \times A_{N}$ and projections $p_i:\prod_{i = 1}^N A_i\to A_i$ for $N\geq 2$ objects.
We use the subscript $f_{i}\coloneqq p_{i}\circ f$ as an abbreviation.
- The product is commutative up to isomorphism $\beta_{A,B}: A \times B \cong B \times A := \langle p_2, p_1 \rangle$ .

A cartesian category is a category with all finite products, i.e., all binary products and a terminal object.

# 2.2. Monoidal Category

Having all products is sometimes too strong a condition. Besides, the product, even if it exists, is not always an appropriate concept for disentanglement. Therefore, we sometimes need to consider a weaker notion of the "product":

Definition 2 (Symmetric monoidal category). A symmetric monoidal category $(\mathbf{C},\otimes ,I)$ is a category $\mathbf{C}$ equipped with a monoidal product $\otimes :\mathbf{C}\times \mathbf{C}\to \mathbf{C}$ and a monoidal unit $I$ , which is unital, associative, and commutative up to natural isomorphisms, subject to some coherence conditions.

Monoidal products are weaker because they need not satisfy the universal property, so there are no canonical projections anymore. A cartesian (monoidal) category is a symmetric monoidal category whose monoidal product is given by the cartesian product. However, many interesting monoidal categories are not cartesian.
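To make the universal property of Definition 1 concrete in Set, here is a small illustrative sketch (not from the paper), modeling objects as Python types and morphisms as functions; the pairing $\langle f_1, f_2 \rangle$ is the unique map through which any pair of morphisms into $A$ and $B$ factors.

```python
from typing import Callable, Tuple, TypeVar

A = TypeVar("A"); B = TypeVar("B"); C = TypeVar("C")

def p1(pair: Tuple[A, B]) -> A:  # projection A x B -> A
    return pair[0]

def p2(pair: Tuple[A, B]) -> B:  # projection A x B -> B
    return pair[1]

def pairing(f1: Callable[[C], A], f2: Callable[[C], B]) -> Callable[[C], Tuple[A, B]]:
    """<f1, f2> : C -> A x B, the unique morphism of the universal property."""
    return lambda c: (f1(c), f2(c))

def product_map(f: Callable, g: Callable) -> Callable:
    """f x g := <f . p1, g . p2> : A x B -> C x D, making x a bifunctor."""
    return pairing(lambda ab: f(p1(ab)), lambda ab: g(p2(ab)))

# Example with C = int, A = str, B = bool:
f1 = lambda c: str(c)
f2 = lambda c: c % 2 == 0
h = pairing(f1, f2)
for c in range(5):
    # universal property: f1 = p1 . <f1, f2> and f2 = p2 . <f1, f2>
    assert p1(h(c)) == f1(c) and p2(h(c)) == f2(c)
```

Note that the converse direction fails, as the text points out: a function out of a product, such as `lambda pair: pair[0] + pair[1]`, can mix both components and does not decompose this way.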
Some symmetric monoidal categories have extra structures or properties, including

- monoidal category with diagonals $\Delta_A: A \to A \otimes A$, which is natural in $A$ if

$$
\begin{array}{ccc} A & \xrightarrow{f} & B \\ \Delta_A \big\downarrow & & \big\downarrow \Delta_B \\ A \otimes A & \xrightarrow{f \otimes f} & B \otimes B \end{array} \tag{2}
$$

- semicartesian (monoidal) category, whose monoidal unit $I$ is a terminal object:

$$
\begin{array}{ccc} A & \xrightarrow{f} & B \\ & {\scriptstyle e_A}\searrow\ \swarrow{\scriptstyle e_B} & \\ & I & \end{array} \tag{3}
$$

- monoidal category with projections $\pi_1: A \otimes B \to A$ and $\pi_2: A \otimes B \to B$ (Franz, 2002; Leinster, 2016), and
- Markov category (Fritz, 2020, Definition 2.1).

They have the following relationship:

$$
\begin{array}{ccccc} \text{cartesian} & \subset & \text{Markov} & \subset & \text{semicartesian} \\ \cap & & & & \parallel \\ \text{diagonals} & \subset & \text{monoidal} & \supset & \text{projections} \end{array} \tag{4}
$$

These structures and properties will be important in the rest of this paper.

# 3. Sets and Functions

Equipped with these concepts, let us now look at the definitions of disentanglement. Set, the category of sets and functions, serves as our primary example. Set is cartesian, and its product is given by the Cartesian product of sets.

We use $[1..N]$ to denote the set of numbers from 1 to $N$. We use $\backslash i$ as an abbreviation of $[1..N] \setminus \{i\}$, i.e., the set of numbers from 1 to $N$ except $i$.

# 3.1. Generating Process

First, let us consider how the data is generated from a set of factors. If all combinations of factors are equally possible (cf. Section 5), we can assume that

Assumption 1. The set of factors $Y \coloneqq \prod_{i=1}^{N} Y_i$ is a product of $N$ sets.
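Concretely, Assumption 1 says that every combination of factor values occurs. A minimal sketch, with factor sets invented here to echo typical disentanglement datasets:

```python
# Assumption 1 in concrete form: the factor set Y is the full Cartesian
# product of N factor sets. The factor values below are invented examples.
from itertools import product

Y1 = ["square", "ellipse", "heart"]  # shape
Y2 = [0.5, 1.0]                      # scale
Y3 = [0, 90, 180, 270]               # orientation

Y = list(product(Y1, Y2, Y3))        # every combination of factors occurs
assert len(Y) == len(Y1) * len(Y2) * len(Y3)  # 3 * 2 * 4 = 24
assert ("heart", 1.0, 270) in Y
```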
Then, let $X$ be the set of observations. A generating process $g: Y \to X$ is simply a morphism from a product, i.e., a function with multiple inputs. It is an "entangling process" because we do not have any structural assumptions on $X$. However, we need some basic requirements for $g$ to ensure that the analysis is meaningful. For starters, we assume that

Assumption 2. $g:Y\to X$ is a monomorphism.

This means that if two observations are the same, their underlying factors must be the same, too. This assumption prevents a model from failing a disentanglement definition simply because of a wrong choice of factors.

# 3.2. Encoding Process

Next, we consider how an encoding process $f: X \to Z$ can exhibit disentanglement and what the desiderata are. Following Ridgeway & Mozer (2018) and Eastwood & Williams (2018), we call $Z$ the set of codes, which should also be a product. In this work, we consider a simple case where

Assumption 3. The codes $Z$ also have $N$ components, and the code projections $p_i: Z \to Z_i$ are known a priori.

Based on Assumption 3, we present our first definition:

Disentanglement 1 (A morphism to a product). In a category $\mathbf{C}$, a disentangled encoding process is a morphism $f: X \to Z$ to a product $Z := \prod_{i=1}^{N} Z_i$.

This is perhaps the minimal requirement for an encoder to exhibit some level of disentanglement. It means that the encoder outputs multiple components, and we can extract each component without losing any information. Note that D. 1 does not even rely on the ground-truth factors $Y$ and a generating process $g$.1

Let us now improve D. 1. A disentanglement requirement that many researchers agree on is modularity, such that "each code conveys information about at most one factor" (Ridgeway & Mozer, 2018). It is natural to consider the composition $m: Y \to Z := f \circ g$ of a generating process $g$ and an encoding process $f$, which we call a code generating process (w.r.t.
a given encoding $f$), while $g: Y \to X$ can be referred to as a data generating process. Then, modularity is a property of a code generating process:

Disentanglement 1.1. $m = \prod_{i=1}^{N}(m_{i,i}:Y_i\to Z_i)$ .

$$
m = m_{1,1} \times \dots \times m_{N,N} : Y_1 \times \dots \times Y_N \to Z_1 \times \dots \times Z_N
$$

"The $i$-th code only encodes the $i$-th factor."

Morphisms $m$, $m_i$, and $m_{i,i}$ have the following relationship:

Proposition 1. $\forall i\in [1..N].\ m_i\coloneqq p_i\circ m = m_{i,i}\circ p_i$ .

$$
\begin{array}{ccc} Y & \xrightarrow{m} & Z \\ p_i \big\downarrow & & \big\downarrow p_i \\ Y_i & \xrightarrow{m_{i,i}} & Z_i \end{array} \tag{5}
$$

D. 1.1 is straightforward and intuitive, but there is one difficulty: it relies on the existence of some other morphisms. Given $m$, verifying whether $m_{i,i}$ exists is not trivial. However, if D. 1.1 holds, we can construct $m_{i,i}$ from $m$ as follows:

Proposition 2. $\forall i\in [1..N].\ \forall y_{j}:1\to Y_{j}\ (j\neq i).$

$$
m_{i,i} = Y_{i}\xrightarrow{\ \cong\ } 1 \times \dots \times Y_{i} \times \dots \times 1 \xrightarrow{y_{1} \times \dots \times \mathrm{id}_{Y_{i}} \times \dots \times y_{N}} Y \xrightarrow{m} Z \xrightarrow{p_{i}} Z_{i}.
$$

In words, we can choose the other factors arbitrarily, and a modular encoder should give us the same code. This inspires us to have a more verifiable definition as follows.

A good property of Set is that it is cartesian closed, i.e., it has exponential objects, given by the sets of functions. Let $\widehat{m_i}: Y_{\backslash i} \to Z_i^{Y_i}$ be the exponential transpose (currying) of $m_i: Y \to Z_i$.
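For finite factor sets in Set, the recipe behind Proposition 2 suggests a brute-force test: fix the $i$-th factor, vary the others, and check that the $i$-th code stays constant. A sketch of ours, with invented toy encoders:

```python
# Checking modularity (D. 1.1) by the constancy test behind Proposition 2.
# The factor sets and encoders below are invented toy examples.
from itertools import product

Y1, Y2 = [0, 1, 2], ["a", "bb"]

def m_modular(y):    # the i-th code depends only on the i-th factor
    return (y[0] * 10, y[1].upper())

def m_entangled(y):  # the first code leaks information about y[1]
    return (y[0] * 10 + len(y[1]), y[1].upper())

def is_modular(m, factors):
    """True iff component i of m(y) is constant when the other factors vary."""
    for i in range(len(factors)):
        for yi in factors[i]:
            codes = {m(y)[i] for y in product(*factors) if y[i] == yi}
            if len(codes) != 1:  # varying the other factors changed code i
                return False
    return True

assert is_modular(m_modular, [Y1, Y2])
assert not is_modular(m_entangled, [Y1, Y2])
```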
To check modularity, we can verify if

Disentanglement 1.2. $\widehat{m_i}$ is a constant morphism.

Therefore, we can obtain the exponential transpose first and check whether it is constant. Even better, we can guarantee that these definitions are equivalent:

Theorem 3. D. 1.1 $\leftrightarrow$ D. 1.2.

Proof. Diagram chase.

![](images/2147889c3b7819becabbe271e331437a888c7b09613f029e2bbfbc7cc704bf07.jpg)

![](images/c255ed49138a74cde3b7a1eda774a1d9843df0b863d47ae51830ff4666fe0159.jpg)

Up to this point, we have defined modularity in a cartesian closed category like Set. However, we point out that modularity alone is not sufficient:

Example (Constant). Let $Z$ be the terminal object $1^{N} \cong 1$. The terminal morphism $e_{Y}: Y \to 1$ satisfies D. 1.1.

That is, an encoder sending everything to singletons is perfectly modular but also completely useless. Therefore, in addition to modularity, we should measure how useful and informative the codes are.

# 3.3. Decoding Process

This is where the concepts of explicitness (Ridgeway & Mozer, 2018) or informativeness (Eastwood & Williams, 2018) come in, meaning that "the factors can be precisely determined from the codes". It might be tempting to define explicitness as

Disentanglement 1.3. $f$ is an inverse of $g$.

Then, the factors can be completely reconstructed from the observations. A drawback of D. 1.3 is that it requires the code set $Z$ to be the same as the factor set $Y$, so $Y$ needs to be known during training. However, it is common that an encoder $f: X \to Z$ is trained with self-supervision or weak supervision (Shu et al., 2020; Wang et al., 2021), and the ground-truth factors $Y$ are only available during evaluation.

Therefore, we weaken the requirement and define the explicitness of a code generating process as

Disentanglement 1.4. $m$ is a split monomorphism.
![](images/523b0f5a96d4cc2802bf9e30356c65661b4f3e342cd1076206f8ac618d25c0ab.jpg)

"The codes encode the factors faithfully."

This means that there exists a morphism $h: Z \to Y$, which we call a decoding process, such that $h \circ m = \mathrm{id}_Y$. In other words, $h$ is a retraction of $m$. To summarize, we will focus on the following morphisms from now on:

$$
Y \xrightarrow{\ g\ (\text{generating})\ } X \xrightarrow{\ f\ (\text{encoding})\ } Z \xrightarrow{\ h\ (\text{decoding})\ } Y \tag{7}
$$

Note that explicitness only indicates whether the factors can be recovered. We may end up with entangled codes:

Example (Rotation). Let $Y$ be a vector space. A rotation is an invertible linear transformation and satisfies D. 1.4.

To avoid this, we may want the decoder to be modular, too. This property is related to the concepts of compactness (Ridgeway & Mozer, 2018) and completeness (Eastwood & Williams, 2018), meaning that "a factor is associated with only one code" (see also Appendix A.2). Like D. 1.1, we can require $h$ to be a product morphism:

Disentanglement 1.5. $h = \prod_{i=1}^{N}(h_{i,i}:Z_i\to Y_i)$ .

![](images/e097a0e9011506f2d369ae3c3c9f6f0bace76a8da24ce3d90a7ded4757d2f9dc.jpg)

"The $i$-th code encodes the $i$-th factor faithfully."

If an encoder has a modular decoder, we can safely drop the other codes if a downstream task only relies on a subset of factors. For example, if a task only depends on factor $Y_{i}$, then a component encoder $f_{i}: X \to Z_{i}$ can encode sufficient information for this task.

We point out that an encoder with a modular decoder may not be modular itself:

Example (Duplicate). Let $Z$ be $Y \times Y$. The diagonal morphism $\Delta_Y$ satisfies D. 1.5 with a retraction $p_1 \times p_2$.
This means that a non-modular and explicit encoder may copy all the factors for each code $Z_{i} \coloneqq Y$, and its modular decoder $h_{i,i}: Z_{i} \to Y_{i} \coloneqq p_{i}$ can simply project the code to each component, which is not what we expect.

A potential remedy to this issue is to require that the code does not contain any other information except for the target factor:

Disentanglement 1.6. $\forall i,j\in [1..N].\ \nexists h_{i,j}:Z_i\to Y_j.\ (i\neq j)\wedge (h_{i,j}\circ m_i = p_j)$ .

"The $i$-th code does not encode the $j$-th factor."

However, D. 1.6 is even harder to verify than D. 1.1 because it relies on the non-existence of some morphisms. This is another difficulty in dealing with non-modular encoders.

Fortunately, we can guarantee that a modular and explicit encoder must have a modular decoder:

Theorem 4. (D. 1.1 $\wedge$ D. 1.4) $\to$ D. 1.5.

It is now clear that modularity (D. 1.1) and explicitness (D. 1.4) of an encoder should be the defining properties of disentanglement and our main focus when designing and evaluating disentangled representation learning algorithms. Waiving either of these requirements could cause problems. Our analysis supports similar arguments made by Ridgeway & Mozer (2018), Duan et al. (2020), and Carbonneau et al. (2022).

A minor issue is that a modular and explicit encoder may have a "non-explicit" decoder:

Example (Redundancy). Let $Z$ be $(Y_{1} \times Y_{1}) \times Y_{2}$. The morphism $m = \Delta_{Y_1} \times \mathrm{id}_{Y_2}$ satisfies both D. 1.1 and D. 1.4.

It means that $Z_{1} \coloneqq Y_{1} \times Y_{1}$ contains redundant information about $Y_{1}$. All meaningful codes are of the form $((y_{1}, y_{1}), y_{2})$, while codes of the form $((y_{1}, y_{1}^{\prime}), y_{2})$ are meaningless and should not be decoded. In categorical terms, $m$ is a product morphism and a split monomorphism, but not an epimorphism.
If we want to traverse the code space, we can additionally require $m$ to be a (split) epimorphism.

# 4. Algebra Actions and Equivariant Maps

We can simply change the category from Set to [S, Set].

In this section, we explain the above sentence by showing three ways to extend D. 1 and how it relates to the definition based on the direct product of groups (Higgins et al., 2018).

$[\mathbf{S}, \mathbf{C}]$ denotes the functor category of functors from $\mathbf{S}$ to $\mathbf{C}$ and natural transformations between these functors. We call the category $\mathbf{S}$ a scheme. To see how it relates to the existing algebraic formulation of disentanglement, we need the following well-known fact:

Definition 3 (Equivariance as naturality). Many algebraic structures, such as monoids and groups, can be considered as single-object categories. Then, an action of an algebra at an object $A$ is precisely a functor $F_A: \mathbf{S} \to \mathbf{C}$ from the corresponding scheme $\mathbf{S}$ to a category $\mathbf{C}$ containing $A$, and an equivariant map $f: A \to B$ between two actions $F_A$ and $F_B$ is precisely a natural transformation $\phi: F_A \Rightarrow F_B$.

An example is shown below:

![](images/ac9c910550163dcac3f1be3ffbc00c58277ecf5f8c69d0fa727c1f4d3a26231f.jpg)

We use the subscript $a_{A} \coloneqq F_{A}a$ as an abbreviation. We can see that $F_{A}$ and $F_{B}$ send the single S-object $*$ to the C-objects $A$ and $B$ and send endomorphisms to endomorphisms. In this way, we can consider $\mathbf{S}$ as syntax and $\mathbf{C}$ as semantics.

Example (Regression vs. Ranking). Not all problems can be formulated using only endomorphisms, let alone groups. Some ranking problems (Liu, 2011) roughly correspond to finding order-preserving functions, which are equivariant to actions of the free monoid of natural numbers $\mathbb{N}$.
However, the usual regression problems also require the preservation $f(x_0) = 0$ of the zero point, which is a nullary operation $\mathrm{zero}: 1 \to \mathbb{N}$ (a morphism from a singleton to the set $\mathbb{N}$).

# 4.1. Product Category and Functor Product

Let us now consider the products of categories and functors. We highlight the following two important properties:

- The category of small categories $\mathbf{Cat}$ is cartesian closed, with the product and exponential object given by the product category $\mathbf{S}_1 \times \mathbf{S}_2$ and functor category $[\mathbf{S}, \mathbf{C}]$.
- If $\mathbf{C}$ has (co)limits of a certain shape (e.g., product), then $[\mathbf{S}, \mathbf{C}]$ has pointwise (co)limits of the same shape (e.g., functor product $F_{1} \times^{\mathbf{S}} F_{2}: \mathbf{S} \xrightarrow{\langle F_{1}, F_{2} \rangle} \mathbf{C} \times \mathbf{C} \xrightarrow{\times} \mathbf{C}$).2

Knowing that if $\mathbf{C}$ has products then so does $[\mathbf{S},\mathbf{C}]$, we can now extend D. 1 straightforwardly by simply changing the category from $\mathbf{C}$ to $[\mathbf{S},\mathbf{C}]$:

Disentanglement 2 (A natural transformation to a functor product). Let $\mathbf{S}$ be a category, $\mathbf{C}$ be a category with products, and $F_{X}, F_{Z_{i}}: \mathbf{S} \to \mathbf{C}, i \in [1..N]$ be functors. A disentangled encoding process is a morphism to a product in $[\mathbf{S}, \mathbf{C}]$, i.e., a natural transformation $\phi: F_{X} \Rightarrow F_{Z}$ to a functor product $F_{Z} := \prod_{i=1}^{N} F_{Z_{i}}$.

In other words, the same scheme $\mathbf{S}$ has $N$ different models via $F_{Z_i}$ in $\mathbf{C}$, which are combined into a single model via the product $F_Z$. In the product group action example (Higgins et al., 2018), D. 2 means that the product group is viewed as a single-object category $\mathbf{S}$, and the product structure of automorphisms is preserved via the functor product.
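For intuition, here is a toy model of ours (not from the paper): a scheme with one generator $a$ acting on $X = \mathbb{Z}/4$ by rotation, a componentwise product action on $Z = \mathbb{Z}/4 \times \mathbb{Z}/4$, and an equivariance (naturality) check.

```python
# A toy model of actions as functors and equivariant maps as natural
# transformations. The actions and candidate maps below are invented.

def a_X(x):
    """The generator a acting on X = Z/4 by rotation."""
    return (x + 1) % 4

def a_Z(z):
    """The same generator acting componentwise on Z = Z/4 x Z/4."""
    return ((z[0] + 1) % 4, (z[1] + 1) % 4)

def phi(x):   # equivariant: duplicating commutes with the action
    return (x, x)

def psi(x):   # not equivariant: the second component ignores the action
    return (x, 0)

def is_equivariant(f):
    """Naturality square: f . a_X = a_Z . f on all of X."""
    return all(f(a_X(x)) == a_Z(f(x)) for x in range(4))

assert is_equivariant(phi)
assert not is_equivariant(psi)
```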
Another approach is to view each group as a single-object category and the product group as a product category. Then, we can use the following definition:

Disentanglement 3 (A natural transformation between multifunctors). Let $\mathbf{S} = \prod_{i=1}^{N} \mathbf{S}_i$ be a product category, $\mathbf{C}$ be a category, and $F_X, F_Z: \mathbf{S} \to \mathbf{C}$ be multifunctors. A disentangled encoding process is a morphism in $[\mathbf{S}, \mathbf{C}]$, i.e., a natural transformation $\phi: F_X \Rightarrow F_Z$ between multifunctors.

That is, a scheme $\mathbf{S}$ with $N$ components has a model in $\mathbf{C}$. We can see that D. 2 defines disentanglement via the product of functors (based on the product in the codomain category $\mathbf{C}$), while D. 3 uses the product of domain categories (based on the product in Cat). They have their own application scenarios, but due to space limits, we will not study D. 2 and D. 3 further in this paper.

# 4.2. Product-preserving Functors

Instead, let us consider a definition based on the product in the domain category $\mathbf{S}$, which could be more flexible:

Disentanglement 4 (A natural transformation at a product). Let $\mathbf{S}$ be a category with binary products, $\mathbf{C}$ be a category, and $F_{X}, F_{Z}: \mathbf{S} \to \mathbf{C}$ be functors. A disentangled encoding process is a component of a natural transformation $\phi: F_{X} \Rightarrow F_{Z}$ at a product.

Additionally, if the codomain category $\mathbf{C}$ also has products, we can require that

Disentanglement 4.1. $F_{Z}$ is product-preserving.

In other words, $F_{Z}$ should be a cartesian (monoidal) functor, so products and projections in $\mathbf{S}$ are mapped to products and projections in $\mathbf{C}$. An example is shown below:

![](images/cbe100b094d9a1b80aa5963c470d5d1bdb9b5027cc89642bbe1496dc88b1eea0.jpg)

We can see that two S-objects $\ast$ and $\ast$ have a product $\ast \times \ast$.
$F_{Z}$ preserves products, so $(a\times b)_{Z} = a_{Z}\times b_{Z}$. A disentangled encoding process $f\coloneqq \phi_{*\times *}$ is a component of a natural transformation $\phi$ at a product $\ast \times \ast$. Note that $X$ is not necessarily a product, but its endomorphisms can have a product structure (Higgins et al., 2018).

Next, let us check what the counterpart of modularity is in the context of natural transformations. What we will do here is essentially the same as what we showed in Section 3.2. Again, it is natural to consider a code generating process $\mu : F_{Y} \Rightarrow F_{Z}$ in $[\mathbf{S},\mathbf{C}]$, and we have a counterpart of Assumption 1 as follows:

Assumption 4. $F_{Y}$ is product-preserving.

Then, we can simply say that a modular encoder $\mu$ is a natural transformation between product-preserving functors. Even more, we can prove the following property:

Proposition 5. $\forall *, * \in \mathbf{S}.\ \mu_{* \times *} = \mu_* \times \mu_*$ .

The reader should compare D. 4.1, Assumption 4, and Proposition 5 with D. 1.1.

The following commuting squares encompass all the requirements (cf. Proposition 1), where $a: * \to *$ is an S-endomorphism with components $a_i: *_i \to *_i$:

$$
\begin{array}{ccc} Y & \xrightarrow{\mu_*} & Z \\ a_Y \big\downarrow & & \big\downarrow a_Z \\ Y & \xrightarrow{\mu_*} & Z \end{array} \qquad \begin{array}{ccc} Y & \xrightarrow{\mu_*} & Z \\ p_i \big\downarrow & & \big\downarrow p_i \\ Y_i & \xrightarrow{\mu_{*_i}} & Z_i \end{array} \qquad \begin{array}{ccc} Y_i & \xrightarrow{\mu_{*_i}} & Z_i \\ {a_i}_{Y_i} \big\downarrow & & \big\downarrow {a_i}_{Z_i} \\ Y_i & \xrightarrow{\mu_{*_i}} & Z_i \end{array} \tag{10}
$$

These squares are the faces of a cube whose three axes correspond to (i) product, (ii) endomorphism, and (iii) natural transformation, respectively.

Up to this point, our definition includes the one proposed by Higgins et al. (2018) as a special case. The reader may have noticed that there is only a counterpart of modularity D. 1.1 but not explicitness D. 1.4.
Without this requirement, we may encounter the same failure case:

Example (Constant). The constant functor $\Delta 1$ satisfies D. 4.1 with a natural transformation $e_{Y}:Y\to 1$.

To patch this, one way is to require that

Disentanglement 4.2. $F_{Z}$ is faithful.

This means that $F_Z$ is injective on morphisms for each pair of S-objects. We need to rule out unfaithful models of a scheme lest we end up with uninformative representations. This requirement also tells us some basic properties the codes $Z$ should have, such as the minimal size or dimension, depending on the choice of the scheme $\mathbf{S}$.

On the other hand, the exact counterpart of explicitness D. 1.4 is as follows:

Disentanglement 4.3. $\mu$ is a split monomorphism.

D. 4.3 is the stronger notion when $F_{Y}$ is also faithful:

Theorem 6. D. 4.3 $\to$ D. 4.2.

As a final note, we point out that D. 4 is more flexible because it is not limited to endomorphisms:

Example (Binary operation). Let $\ast$ be an S-object (which itself can be a product) and $c: \ast \times \ast \rightarrow \ast$ an S-morphism. The following naturality square, applied to elements $a \times b \in \ast \times \ast$, describes how binary operations can exhibit disentanglement:

$$
\begin{array}{ccc} X \times X & \xrightarrow{\ f \times f = \phi_{* \times *}\ } & Z \times Z \\ c_X \big\downarrow & & \big\downarrow c_Z \\ X & \xrightarrow{\ f := \phi_{*}\ } & Z \end{array} \tag{11}
$$

Regarding $c \circ (a \times b)$, the functoriality and naturality lead to the following requirement:

$$
f\left(c_{X}\left(a_{X}\left(x_{1}\right), b_{X}\left(x_{2}\right)\right)\right) = c_{Z}\left(a_{Z}\left(f\left(x_{1}\right)\right), b_{Z}\left(f\left(x_{2}\right)\right)\right).
$$

This formulation is particularly useful when dealing with multiple instances or heterogeneous inputs (Gatys et al., 2016; Liu et al., 2018). Further investigation is left for future work.

In summary, we showed that seemingly distinct approaches to disentanglement (Ridgeway & Mozer, 2018; Higgins et al., 2018) can be described by the same abstract language, and their underlying mechanisms (e.g., modularity and product-preserving action) are essentially the same. The core is the cartesian product of sets, functions, algebras, actions, objects, morphisms, categories, and functors.

# 5. Sets and Relations

The Cartesian product of sets is not cartesian in Rel.

In this section, we present an example of (non-cartesian) monoidal products using Rel, the category of sets and relations (Patterson, 2017).

We may want to work with relations instead of functions if we need to consider (i) unannotated factors, (ii) multiple observations for the same factor, or (iii) only a subset of all combinations of factors. Besides, Rel serves as a bridge between functions and probabilities, which will be discussed in the next section.

To characterize Rel, it is convenient to consider it as the Kleisli category of the powerset monad $P$ in Set:

$$
\mathbf{Rel} := \mathbf{Set}_{P}. \tag{12}
$$

The powerset monad $P$ sends a set $A$ to its powerset $PA$ and a function $f: A \to B$ to a set function $Pf: PA \to PB$. Its Kleisli category $\mathbf{Rel}$ has relations $A \rightsquigarrow B$ as the Kleisli morphisms, which are precisely set-valued functions $A \to PB$. The Kleisli composition $g \circ f$ is given by the relation composition.

Relations arise naturally in practice. For example, if we have a labeling process $l: X \to Y$, which is a function in Set, a data generating process $g: Y \rightsquigarrow X := l^*$ can be defined as its inverse image, which is not a function anymore but a relation in Rel.
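This construction can be sketched with relations modeled as set-valued functions (Kleisli morphisms of the powerset monad). The tiny labeled dataset below is invented for illustration:

```python
# Rel as the Kleisli category of the powerset monad: a relation A ~> B is a
# function A -> P(B), and composition is ordinary relation composition.
# The labeling below is an invented example.

def compose(g, f):
    """Kleisli composition of relations f : A ~> B and g : B ~> C."""
    return lambda a: {c for b in f(a) for c in g(b)}

labels = {"img1": "cat", "img2": "cat", "img3": "dog"}  # l : X -> Y in Set

def l_rel(x):   # l viewed as a relation X ~> Y
    return {labels[x]}

def g(y):       # the data generating process g := l^* as a relation Y ~> X
    return {x for x, label in labels.items() if label == y}

assert g("cat") == {"img1", "img2"}   # one factor, several observations
assert g("bird") == set()             # a factor with no observation

same_label = compose(g, l_rel)        # X ~> X, relates images sharing a label
assert same_label("img1") == {"img1", "img2"}
```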
Then, $g$ now can map a factor to multiple observations or to the empty set.

# 5.1. Monoidal Product of Relations

Next, let us examine the product structures in Rel. We point out the following three important facts about Rel:

- Rel is cartesian and cocartesian, with both the product and coproduct given by the disjoint union of sets $A \oplus B$.
- Rel is monoidal closed, with both the monoidal product and internal hom given by the Cartesian product of sets $A \otimes B$ and the monoidal unit given by the singleton $\{*\}$.
- Rel is pointed, with the zero object (an object that is both initial and terminal) given by the empty set $\varnothing$.

That is, in Rel, the Cartesian product of sets is monoidal but, confusingly, not cartesian. So a relation $A \rightsquigarrow B \otimes C$ to a Cartesian product of two sets is more than just a pair of relations $A \rightsquigarrow B$ and $A \rightsquigarrow C$. On the other hand, the monoidal product/internal hom $\otimes$ gives us an isomorphism between hom-sets:

$$
\operatorname{Hom}(A \otimes B, C) \cong \operatorname{Hom}(A, B \otimes C), \tag{13}
$$

which leads to the following example: a relation $A \otimes B \rightsquigarrow C$ relating pairs such as $(a, 0)$ and $(b, 1)$ to elements $x, y \in C$ corresponds to a relation $A \rightsquigarrow B \otimes C$ relating $a$ and $b$ to pairs such as $(0, x)$ and $(1, y)$:

$$
(A \otimes B \rightsquigarrow C) \quad \cong \quad (A \rightsquigarrow B \otimes C). \tag{14}
$$

Rel is an example of how the cartesian product $\oplus$ is not an appropriate concept for disentanglement, while a suitable one $\otimes$ only has a monoidal structure. The monoidal unit $\{*\}$ is different from the terminal object $\varnothing$, so Rel is not even semicartesian.
Although we still can define the "duplicating" and "deleting" operations (Patterson, 2017, Section 3.3), they do not behave as nicely as the diagonal and terminal morphisms in Set because of their non-naturality.

Then, how can we characterize disentanglement? At least, we still have a counterpart of disentanglement D. 1:

Disentanglement 5 (A morphism to a monoidal product). In a symmetric monoidal category $\mathbf{C}$, a disentangled encoding process is a morphism $f: X \to Z$ to a monoidal product $Z := \bigotimes_{i=1}^{N} Z_i$.

Further, we can extend the definition of modularity D. 1.1:

Disentanglement 5.1. $m = \bigotimes_{i=1}^{N}(m_{i,i}:Y_i\to Z_i)$ .

So, D. 1 and D. 1.1 are special cases of D. 5 and D. 5.1 for a cartesian category. However, without projections, D. 5.1 is more difficult to verify than D. 1.1.

Then, how can we resolve this? One way is to restrict our attention to right-unique relations, i.e., partial functions, so duplication behaves nicely (Eq. (2)), but this means that there is at most one observation for each factor. We can also focus on left-total relations, i.e., multivalued functions, so deletion behaves nicely (Eq. (3)), but we need to assume that there is at least one observation for each factor (Fritz, 2020, Example 2.6). If we want both, then we will end up with Set — a cartesian subcategory of Rel. Despite its many good properties, Set might be too restrictive if we want to incorporate uncertainty in disentanglement. Later, we will see that a semicartesian category with (not necessarily natural) diagonals might be a balanced choice, which provides a rich collection of operations to characterize disentanglement.

# 5.2. Functor Category, Revisited

Before moving on to the next section, we have to ask: "can we change from Rel to [S, Rel]?" First, the fact that $[\mathbf{S}, \mathbf{C}]$ has a pointwise monoidal structure derived from $\mathbf{C}$ tells us that D. 2 generalizes to the functor monoidal product straightforwardly. Second, D.
4.1 is a special case of the following requirement for a cartesian category:

Disentanglement 4.1′. $F_Z$ is a monoidal functor.

Higgins et al. (2018) mainly worked with the direct sum $\oplus$ (direct product $\times$) of vector spaces and briefly mentioned the tensor product $\otimes$. We remind the reader that their decisive difference is the one between the cartesian and monoidal products.

# 6. Measurable Spaces and Stochastic Maps

We can copy/delete in a Markov category like Stoch.

Besides the algebraic approach (Higgins et al., 2018), the probabilistic, statistical, and information-theoretic methods (Higgins et al., 2017; Chen et al., 2018; Kumar et al., 2018; Suter et al., 2019; Do & Tran, 2020) are perhaps the most popular tools for disentangled representation learning. In this section, we outline the essential operations required for characterizing the disentanglement of stochastic maps.

The structure is similar to that of Rel: the category Stoch of measurable spaces and stochastic maps (Markov kernels) is the Kleisli category of the Giry monad $P$ in the category Meas of measurable spaces and measurable functions:

$$
\mathbf{Stoch} := \mathbf{Meas}_{P}. \tag{15}
$$

The Giry monad $P$ sends a measurable set $A$ to the set $PA$ of probability measures on $A$ and a measurable function $f: A \to B$ to its pushforward $f_{*}: PA \to PB$. The Kleisli morphisms are stochastic maps $p(B|A)$, and the Kleisli composition $p(C|A) = p(C|B) \circ p(B|A)$ is given by the Chapman-Kolmogorov equation (Giry, 1982).

# 6.1. Joint Distribution and Conditional Independence

Next, let us start by highlighting the impossibility result of Locatello et al. (2019), which is essentially about the product structures of Stoch. It can be succinctly restated in the categorical language as

Theorem 7 (Locatello et al. (2019, Theorem 1)). Stoch is not cartesian.

This theorem implies the following diagram (cf. Eq.
(1)):

$$
Z_1 \xleftarrow{\ \pi_1\ } Z_1 \otimes Z_2 \xrightarrow{\ \pi_2\ } Z_2, \qquad \exists f \neq \mathrm{id}_{Z_1 \otimes Z_2}.\ \pi_1 \circ f = \pi_1 \wedge \pi_2 \circ f = \pi_2. \tag{16}
$$

It means that a joint distribution $p(Z_1, Z_2)$ is not uniquely specified by its marginals $p_1(Z_1)$ and $p_2(Z_2)$. Locatello et al. (2019) explicitly constructed a family of bijections $f: Z \to Z$ using the inverse transform sampling technique.

Note the projection morphisms $\pi_1$ and $\pi_2$ used in Eq. (16), which are available because Stoch is a Markov category (Fritz, 2020). A Markov category, roughly speaking, is a category in which every object $A$ is equipped with a "copy" $\mathrm{copy}_A: A \to A \otimes A$ (not necessarily natural in $A$) and a "delete" $\mathrm{del}_A: A \to I$ (natural in $A$) morphism satisfying some coherence conditions. Therefore, all morphisms are deletable but only some are copyable, which allows for a sufficiently expressive category with enough operations to characterize disentanglement:

Disentanglement 6 (A Markov kernel to a joint). In a Markov category $\mathbf{C}$, a disentangled encoding process is a Markov kernel $f: X \to Z$ to a joint $Z := \bigotimes_{i=1}^{N} Z_i$.

We point out that the conditional independence $A \perp B \parallel C$ of a Markov kernel $p(A, B|C)$ (Fritz, 2020, Definition 12.12) can be used to derive a prerequisite for the modularity of an encoder $m: Y \to Z$:

Disentanglement 6.1. $\forall i\in [1..N].\ Z_{i}\perp Z_{\backslash i}\parallel Y$ .

Let $m_i:Y\to Z_i\coloneqq \operatorname{del}_{Z_{\backslash i}}\circ m$ be the marginalization of $m$ over $Z_{\backslash i}$ and $\mathrm{copy}_A^N:A\to A^{\otimes N}$ a "multiple copy" morphism. We can prove that D. 6.1 is equivalent to the following equational identity (cf. Eq. (1)):

Disentanglement 6.2. $m = \left(\bigotimes_{i=1}^{N} m_{i}\right) \circ \mathrm{copy}_{Y}^{N}$ .
(String diagram for D. 6.2: the wire $Y$ is copied $N$ times by $\mathrm{copy}_Y^N$, and each copy is fed into a marginal $m_i$ to produce $Z_i$.)

Theorem 8. D. 6.1 $\leftrightarrow$ D. 6.2.

We call an encoder satisfying D. 6.2 projectable. This is a more fine-grained condition than the total correlation used in Chen et al. (2018) because it is conditioned on the factors.

With this precondition, we can finally define the modularity of a stochastic encoder:

Disentanglement 6.3. $\forall i\in [1..N].\ \forall n_i:Y_i\to Y_{\backslash i}.\ Z_{i}\perp Y_{\backslash i}\parallel Y_{i}$ .

![](images/37fecd3b7f29ff53b5069ed2f6fc64fd1b789f1f0f4871a8b4d6127d1abd4416.jpg)

The reader may have noticed that this means that any $n_i: Y_i \to Y_{\backslash i}$ behaves like a deterministic morphism (Fritz, 2020, Definition 10.1) when composed with $m_i: Y \to Z_i$.

Why do we need this? This is because, unlike Rel, where $A \otimes B \rightsquigarrow C$ is the same thing as $A \rightsquigarrow B \otimes C$ (Eq. (13)), in Stoch, $\operatorname{Hom}(A, B \otimes C)$ is larger than $\operatorname{Hom}(A \otimes B, C)$. We need a "probe" in $\operatorname{Hom}(A, B)$ to characterize how $C$ depends on $B$, and $n_i : Y_i \to Y_{\backslash i}$ serves as this probe.

Based on this, we revealed a common loophole in existing statistical approaches: if we use the mutual information between factors and codes to characterize disentanglement, the distribution of factors is assumed to be fixed (Chen et al., 2018; Li et al., 2020; Tokui & Sato, 2022).
However, the training and test distributions could be different (Träuble et al., 2021), and the existing definitions may be too coarse-grained and insufficient in such a situation.

# 6.2. Structured Markov Kernels

An important fact is that the category of functors to the subcategory of deterministic morphisms is again a Markov category (Fritz, 2020, Section 7), so we can deal with "structured" Markov kernels. We end our discussion with an example based on this fact, without any category theory jargon, to show what we can get from our formulation.

Example $([\mathbb{N},\mathbf{Set}_N]_{\mathrm{det}})$. A robot processing a video feed of multiple objects should be able to (i) identify objects, (ii) understand that objects continue to exist even when occluded (object permanence), and (iii) track the movement of hidden objects (invisible displacement). All of these abilities should be unaffected by the shape or color of the objects.

With category theory, we can formulate such a complex task with ease because the components are compositional. See Appendix B for a detailed explanation.

# 7. Limitations

As an initial step toward a categorical characterization of disentanglement, this work focused only on definitions. Many other important aspects of disentanglement were excluded, such as metrics, supervision, learnability, models, methods, and experimental validation.

With a clear understanding of the definitions in place, the immediate next step would be to find a systematic way to enrich a definition into a metric. We hypothesize that a good direction includes the following three steps:

- equality $\rightsquigarrow$ metric
- universal quantification $\rightsquigarrow$ aggregation
- existential quantification $\rightsquigarrow$ approximation

With a good metric, we can quantify how good a model is, even if it does not strictly satisfy a disentanglement
definition. For example, from Theorem 4, we know that a modular and explicit encoder must have a modular decoder. Given some modularity and explicitness metrics, we may want to know to what extent the modularity and explicitness of an encoder imply the modularity of its decoder.

Other potential future directions include the study of partial combinations of factors (Section 5) and unknown projection (Assumption 3). The relation between D. 2, D. 3, and D. 4 deserves further investigation. How to formulate disentanglement in more complex learning problems, such as reinforcement learning, is also an interesting direction. While we have obtained more results for cartesian categories due to their favorable properties, further theoretical analysis of the monoidal category case would be useful.

# 8. Conclusion

In this work, we presented a meta-analysis of several definitions of disentanglement (Cohen & Welling, 2014; 2015; Ridgeway & Mozer, 2018; Eastwood & Williams, 2018; Higgins et al., 2018; Chen et al., 2018) using category theory as a unifying language. We revealed that some seemingly distinct formulations are just different manifestations of the same structures, cartesian products and monoidal products, in different categories. We argued that modularity (product morphism) and explicitness (split monomorphism) should be the defining properties of disentanglement and introduced tools to analyze these properties in various settings, including equivariant maps (functor category) and stochastic maps (Markov category). We also reinterpreted some existing results (Locatello et al., 2019) and provided support for some arguments based on empirical evidence (Ridgeway & Mozer, 2018; Träuble et al., 2021). We hope our findings can help researchers choose the most appropriate definition of disentanglement for their specific task and consequently discover better metrics, models, methods, and algorithms for disentangled representation learning.
# Acknowledgements

We would like to thank Tobias Fritz for answering our questions about Markov categories. We thank Wei Wang and Johannes Ackermann for their valuable feedback on the draft. We also thank the anonymous reviewers for their useful comments and constructive suggestions. Finally, we would like to express our gratitude to all contributors to nLab, MathOverflow, and StackExchange for creating a sharing community.

YZ was supported by JSPS KAKENHI Grant Number 22J12703. MS was supported by JST CREST Grant Number JPMJCR18A2.

# References

Adamek, J., Herrlich, H., and Strecker, G. E. Abstract and Concrete Categories: The Joy of Cats. John Wiley and Sons, 1990. URL http://www.tac.mta.ca/tac/reprints/articles/17/tr17abs.html. A.5
Awodey, S. Category theory. Oxford University Press, 2006. URL https://doi.org/10.1093/acprof:oso/9780198568612.001.0001. 1, 2, A.1
Baez, J. Applied category theory 2018 | the n-category café, 2017. URL https://golem.ph.utexas.edu/category/2017/09/applied_category_theory_1.html. 1
Baez, J. C., Fritz, T., and Leinster, T. A characterization of entropy in terms of information loss. Entropy, 13(11):1945-1957, 2011. URL https://doi.org/10.3390/e13111945. https://arxiv.org/abs/1106.1791. A.5
Bengio, Y., Courville, A., and Vincent, P. Representation learning: A review and new perspectives. IEEE Transactions on Pattern Analysis and Machine Intelligence, 35(8):1798-1828, 2013. URL https://doi.org/10.1109/TPAMI.2013.50. https://arxiv.org/abs/1206.5538. 1
Borceux, F. Handbook of categorical algebra: volume 1, Basic category theory. Cambridge University Press, 1994. URL https://doi.org/10.1017/CBO9780511525858. 1
Bradley, T.-D. What is applied category theory? arXiv preprint arXiv:1809.05923, 2018. URL https://arxiv.org/abs/1809.05923. 1
Carbonneau, M.-A., Zaidi, J., Boilard, J., and Gagnon, G. Measuring disentanglement: A review of metrics. IEEE Transactions on Neural Networks and Learning Systems, 2022.
URL https://doi.org/10.1109/TNNLS.2022.3218982. https://arxiv.org/abs/2012.09276. 1, 3.3
Chen, R. T., Li, X., Grosse, R. B., and Duvenaud, D. K. Isolating sources of disentanglement in variational autoencoders. In Neural Information Processing Systems, 2018. URL https://proceedings.neurips.cc/paper/2018/hash/1ee3dfcd8a0645a25a35977997223d22-Abstract.html. 1, 6, 6.1, 6.1, 8
Cho, K. and Jacobs, B. Disintegration and Bayesian inversion via string diagrams. Mathematical Structures in Computer Science, 29(7):938-971, 2019. URL https://doi.org/10.1017/S0960129518000488. https://arxiv.org/abs/1709.00322. A.7

Cohen, T. and Welling, M. Learning the irreducible representations of commutative Lie groups. In International Conference on Machine Learning, 2014. URL https://proceedings.mlr.press/v32/cohen14.html. 1, 8, A.4, A.4
Cohen, T. and Welling, M. Transformation properties of learned visual representations. In International Conference on Learning Representations, 2015. URL http://arxiv.org/abs/1412.7659. 1, 8, A.4
de Haan, P., Cohen, T., and Welling, M. Natural graph networks. Neural Information Processing Systems, 33:3636-3646, 2020. URL https://proceedings.neurips.cc/paper/2020/hash/2517756c5a9be6ac007fe9bb7fb92611-Abstract.html. 1
Dittadi, A., Träuble, F., Locatello, F., Wüthrich, M., Agrawal, V., Winther, O., Bauer, S., and Schölkopf, B. On the transfer of disentangled representations in realistic settings. In International Conference on Learning Representations, 2021. URL https://openreview.net/forum?id=8VXvj1QNrl1. 1
Do, K. and Tran, T. Theory and evaluation metrics for learning disentangled representations. In International Conference on Learning Representations, 2020. URL https://openreview.net/forum?id=HJgKoh4Ywr. 6
Duan, S., Matthey, L., Saraiva, A., Watters, N., Burgess, C., Lerchner, A., and Higgins, I. Unsupervised model selection for variational disentangled representation learning. In International Conference on Learning Representations, 2020.
URL https://openreview.net/forum?id=SyxL2TNtvr. 3.3
Dudzik, A. J. and Veličković, P. Graph neural networks are dynamic programmers. In Neural Information Processing Systems, 2022. URL https://openreview.net/forum?id=wu1Za9dY1GY. 1
Eastwood, C. and Williams, C. K. A framework for the quantitative evaluation of disentangled representations. In International Conference on Learning Representations, 2018. URL https://openreview.net/forum?id=By-7dz-AZ. 1, 3.2, 3.3, 3.3, 8
Fong, B. and Spivak, D. I. An invitation to applied category theory: seven sketches in compositionality. Cambridge University Press, 2019. URL https://doi.org/10.1017/9781108668804. https://arxiv.org/abs/1803.05316. 1
Franz, U. What is stochastic independence? In Non-commutativity, infinite-dimensionality and probability at the crossroads, pp. 254-274. World Scientific, 2002. URL https://doi.org/10.1142/9789812705242_0008. https://arxiv.org/abs/math/0206017. 2.2
Fritz, T. A synthetic approach to Markov kernels, conditional independence and theorems on sufficient statistics. Advances in Mathematics, 370:107239, 2020. URL https://doi.org/10.1016/j.aim.2020.107239. https://arxiv.org/abs/1908.07021. 1, 2.2, 5.1, 6.1, 6.1, 6.1, 6.2, A.6, 4, B, C
Gatys, L. A., Ecker, A. S., and Bethge, M. Image style transfer using convolutional neural networks. In Computer Vision and Pattern Recognition, 2016. URL https://doi.org/10.1109/CVPR.2016.265. 4.2
Gavranovic, B. Compositional deep learning. Master's thesis, University of Zagreb, 2019. URL https://arxiv.org/abs/1907.08292. 1
Giry, M. A categorical approach to probability theory. Categorical Aspects of Topology and Analysis, pp. 68-85, 1982. URL https://doi.org/10.1007/BFb0092872. 6
Higgins, I., Matthey, L., Pal, A., Burgess, C., Glorot, X., Botvinick, M., Mohamed, S., and Lerchner, A. beta-VAE: Learning basic visual concepts with a constrained variational framework. In International Conference on Learning Representations, 2017.
URL https://openreview.net/forum?id=Sy2fzU9gl. 1, 6
Higgins, I., Amos, D., Pfau, D., Racaniere, S., Matthey, L., Rezende, D., and Lerchner, A. Towards a definition of disentangled representations. arXiv preprint arXiv:1812.02230, 2018. URL https://arxiv.org/abs/1812.02230. 1, 4, 4.1, 4.2, 4.2, 4.2, 5.2, 6, 8, A.4, A.4
Kumar, A., Sattigeri, P., and Balakrishnan, A. Variational inference of disentangled latent concepts from unlabeled observations. In International Conference on Learning Representations, 2018. URL https://openreview.net/forum?id=H1kG7GZAW. 6
Leinster, T. Basic category theory, volume 143. Cambridge University Press, 2014. URL https://doi.org/10.1017/CBO9781107360068. https://arxiv.org/abs/1612.09375. 1
Leinster, T. Monoidal categories with projections | the n-category café, 2016. URL https://golem.ph.utexas.edu/category/2016/08/monoidal_categories_with_proje.html. 2.2

Li, Z., Murkute, J. V., Gyawali, P. K., and Wang, L. Progressive learning and disentanglement of hierarchical representations. In International Conference on Learning Representations, 2020. URL https://openreview.net/forum?id=SJxpsxrYPS. 6.1
Liu, T.-Y. Learning to Rank for Information Retrieval. Springer, 2011. URL https://doi.org/10.1007/978-3-642-14267-3. 4
Liu, X., Van De Weijer, J., and Bagdanov, A. D. Leveraging unlabeled data for crowd counting by learning to rank. In Computer Vision and Pattern Recognition, 2018. URL https://doi.org/10.1109/CVPR.2018.00799. 4.2
Locatello, F., Bauer, S., Lucic, M., Rätsch, G., Gelly, S., Schölkopf, B., and Bachem, O. Challenging common assumptions in the unsupervised learning of disentangled representations. In International Conference on Machine Learning, 2019. URL https://proceedings.mlr.press/v97/locatello19a.html. 1, 6.1, 7, 6.1, 8
Montero, M. L., Ludwig, C. J., Costa, R. P., Malhotra, G., and Bowers, J. The role of disentanglement in generalisation. In International Conference on Learning Representations, 2020.
URL https://openreview.net/forum?id=qbH974jKUVy. 1
Patterson, E. Knowledge representation in bicategories of relations. arXiv preprint arXiv:1706.00526, 2017. URL https://arxiv.org/abs/1706.00526. 5, 5.1
Piedeleu, R. and Zanasi, F. An introduction to string diagrams for computer scientists. arXiv preprint arXiv:2305.08768, 2023. URL https://arxiv.org/abs/2305.08768. A.1
Ridgeway, K. and Mozer, M. C. Learning deep disentangled embeddings with the f-statistic loss. In Neural Information Processing Systems, 2018. URL https://proceedings.neurips.cc/paper/2018/hash/2b24d495052a8ce66358eb576b8912c8-Abstract.html. 1, 3.2, 3.2, 3.3, 3.3, 3.3, 4.2, 8
Selinger, P. A survey of graphical languages for monoidal categories. In New structures for physics, pp. 289-355. Springer, 2010. URL https://doi.org/10.1007/978-3-642-12821-9_4. https://arxiv.org/abs/0908.3347. 2, A.1
Sepliarskaia, A., Kiseleva, J., and de Rijke, M. Evaluating disentangled representations. arXiv preprint arXiv:1910.05587, 2019. URL https://arxiv.org/abs/1910.05587. 1

Shiebler, D., Gavranovic, B., and Wilson, P. Category theory in machine learning. arXiv preprint arXiv:2106.07032, 2021. URL https://arxiv.org/abs/2106.07032. 1
Shu, R., Chen, Y., Kumar, A., Ermon, S., and Poole, B. Weakly supervised disentanglement with guarantees. In International Conference on Learning Representations, 2020. URL https://openreview.net/forum?id=HJgSwyBKvr. 3.3
Suter, R., Miladinovic, D., Schölkopf, B., and Bauer, S. Robustly disentangled causal mechanisms: Validating deep representations for interventional robustness. In International Conference on Machine Learning, 2019. URL http://proceedings.mlr.press/v97/suter19a.html. 6
Tokui, S. and Sato, I. Disentanglement analysis with partial information decomposition. In International Conference on Learning Representations, 2022.
URL https://openreview.net/forum?id=pETy-HVvGtt.6.1 +Trauble, F., Creager, E., Kilbertus, N., Locatello, F., Dittadi, A., Goyal, A., Scholkopf, B., and Bauer, S. On disentangled representations learned from correlated data. In International Conference on Machine Learning, 2021. URL https://proceedings.mlr.press/v139/trauble21a.html.6.1,8 +Wang, T., Yue, Z., Huang, J., Sun, Q., and Zhang, H. Self-supervised learning disentangled group representation as feature. In Neural Information Processing Systems, 2021. URL https://openreview.net/forum?id=RQfckT1M_4.3.3 + +# A. Additional Remarks + +# A.1. Diagram + +We frequently use commutative diagrams (Awodey, 2006) and string diagrams (Selinger, 2010; Piedeleu & Zanasi, 2023) as graphical calculus. + +In a commutative diagram for a category, nodes are objects, arrows are morphisms, and paths are compositions of morphisms. Any morphisms with the same domains and codomains are equal, i.e., any two paths starting from $A$ and ending with $B$ result in the same morphism. + +In a string diagram for a symmetric monoidal category, rectangles are morphisms (from bottom to top), lines without rectangles are identity morphisms. Juxtaposition of two morphisms means the monoidal product, and cross means the braiding. Domains and codomains of morphisms are often omitted. + +# A.2. Compactness + +Note that there could be two interpretations of compactness. + +A non-compact encoder can mean: + +(a) We can recover $Y_{j}$ from $Z_{i}$ ; or +(b) $Z_{i}$ itself is a product $Z_{i1} \times Z_{i2}$ . + +We argue that (a) is what we want to avoid, while (b) is more or less harmless. For example, we can decompose $\mathbb{R}^{10}$ into $\mathbb{R}^2\times \mathbb{R}^3\times \mathbb{R}^5$ , where each component again can be decomposed into smaller parts. However, sometimes this is beneficial: while embedding a cycle into a vector space, $\mathbb{R}^2$ may be a better choice than $\mathbb{R}$ because the embedding can be continuous. 
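The continuity point can be made concrete. Below is a small sketch (our own illustration, not from the paper) contrasting the embedding of a circle into $\mathbb{R}^2$ via $\theta \mapsto (\cos\theta, \sin\theta)$, which is continuous everywhere, with the naive embedding into $\mathbb{R}$ via $\theta \bmod 2\pi$, which tears the circle at the wrap-around point:

```python
import math

def embed_r2(theta):
    # Embed an angle on the circle into R^2: continuous everywhere.
    return (math.cos(theta), math.sin(theta))

def embed_r1(theta):
    # Naive embedding into R by taking the angle mod 2*pi:
    # discontinuous at the 0 / 2*pi cut.
    return theta % (2 * math.pi)

# Two points on the circle that are very close to each other,
# but sit on opposite sides of the cut.
a, b = -0.001, 0.001

dist_r2 = math.dist(embed_r2(a), embed_r2(b))   # ~0.002: closeness preserved
dist_r1 = abs(embed_r1(a) - embed_r1(b))        # ~2*pi: the circle is torn

print(dist_r2, dist_r1)
```

So a code $Z_i \cong \mathbb{R}^2$ for a single cyclic factor is a case where the "extra" product structure of point (b) is a feature, not a bug.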
In this work, we do not pay much attention to whether each code $Z_{i}$ is "minimal".

# A.3. Functorial Semantics

Can we formulate modularity in terms of functors and natural transformations? The answer is yes, because the product, as a limit, can be defined via functors in the first place. Here, we only give an alternative definition of D. 1.1:

Disentanglement $1.1'$. $m$ is a natural transformation between functors from a discrete category.

$$
\begin{array}{l} Y_1 \xrightarrow{m_{1,1}} Z_1 \\ Y_2 \xrightarrow{m_{2,2}} Z_2 \end{array} \tag{17}
$$

It shows that a modular encoder $m$ is merely a collection of component morphisms $m_{i,i}: Y_i \to Z_i$. Nothing more, nothing less.

# A.4. Commutativity and Irreducibility

Cohen & Welling (2014) in their seminal paper used the irreducible representations of compact commutative Lie groups to define and study disentangled representations, while Higgins et al. (2018) used the direct product of groups. Here, we briefly remark on the product, commutativity, and irreducibility.

First, let us keep it simple and only consider unital magmas: sets with a unital binary operation. If we have two unital magmas $(M, \circ_{M}, e_{M})$ and $(N, \circ_{N}, e_{N})$, we can define their product, denoted by $P = M \times N$, as the Cartesian product of their underlying sets equipped with a binary operation $\circ_{P}: (M \times N) \times (M \times N) \to (M \times N)$ given by

$$
\left(m_1, n_1\right) \circ_P \left(m_2, n_2\right) := \left(m_1 \circ_M m_2, n_1 \circ_N n_2\right), \tag{18}
$$

whose unit is $e_P := (e_M, e_N)$. We can see that the product is also a unital magma.
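The componentwise operation of Eq. (18), its unit, and the fact that an element decomposes into commuting per-component pieces can be checked in a few lines. This is a sketch with two toy magmas of our own choosing (integers under addition and strings under concatenation), not an example from the paper:

```python
# Two unital magmas: (int, +, 0) and (str, concatenation, "").
def op_M(a, b): return a + b
def op_N(s, t): return s + t
e_M, e_N = 0, ""

def op_P(x, y):
    # Componentwise operation on the product M x N (Eq. 18).
    return (op_M(x[0], y[0]), op_N(x[1], y[1]))

e_P = (e_M, e_N)  # the unit of the product

m, n = 3, "ab"
# (m, n) decomposes into (m, e_N) and (e_M, n), in either order:
assert op_P((m, e_N), (e_M, n)) == (m, n)
assert op_P((e_M, n), (m, e_N)) == (m, n)
# e_P is a two-sided unit:
assert op_P((m, n), e_P) == (m, n) == op_P(e_P, (m, n))
print("product magma laws hold")
```

Elements from different components commute even though neither component needs to be commutative on its own.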
Then, we can find that every element $(m,n)$ in the product $P$ can be decomposed in two ways:

$$
\begin{array}{l} (m, n) \\ = \left(m \circ_M e_M, e_N \circ_N n\right) = \left(m, e_N\right) \circ_P \left(e_M, n\right) \tag{19} \\ = \left(e_M \circ_M m, n \circ_N e_N\right) = \left(e_M, n\right) \circ_P \left(m, e_N\right), \end{array}
$$

which can be expressed in string diagrams:

*(String diagram (20): the parallel juxtaposition of $m$ and $n$ equals $m$ followed by $n$, and also $n$ followed by $m$.)*

We can identify $(m, e_N)$ as $m$ and $(e_M, n)$ as $n$ because of the unital magma isomorphisms:

$$
(M, \circ_M, e_M) \cong (M \times \{e_N\}, \circ_P|_{M \times \{e_N\}}, e_P), \tag{21}
$$

$$
(N, \circ_N, e_N) \cong (\{e_M\} \times N, \circ_P|_{\{e_M\} \times N}, e_P). \tag{22}
$$

From this perspective, as long as we can define a serial combination $\circ$ and its unit $e$ for each component, the product operation $\times$ allows us to combine elements from different components in parallel commutatively. We can deal with one component at a time, and the order of the components does not matter. However, note that the serial combination within a component may not be commutative, such as the 3D rotations $\mathrm{SO}(3)$ (Cohen & Welling, 2015; Higgins et al., 2018).

This property may inspire us to "discover" disentangled components from observational data using commutativity: we can find components such that elements from the same component are closed under composition, and elements from different components are commutative.

Such a decomposition may not be unique. For example, consider $\mathbb{R}^2$ with addition $+$ (as a unital magma, a monoid, or a group).
$A = \{(a,0)\mid a\in \mathbb{R}\}$, $B = \{(0,b)\mid b\in \mathbb{R}\}$, and $C = \{(c,c)\mid c\in \mathbb{R}\}$ are all subalgebras of $\mathbb{R}^2$, and both $A\times B$ and $A\times C$ are isomorphic to $\mathbb{R}^2$ via the addition.

Learning the (potentially product) algebraic structure from data and determining an appropriate decomposition based on commutativity is an interesting research direction.

Besides, it is worth noting that Cohen & Welling (2014) identified a connection between irreducible representations and disentanglement, which is not covered in this work. Furthermore, Cohen & Welling (2015) made an insightful observation that irreducibility is also linked to the statistical dependency structure of the representation. Using tools such as functor categories and Markov categories, we may obtain more fruitful results on the connection between the algebraic and statistical properties of disentanglement.

# A.5. Probability

Note that Meas has finite products: $(A, \Sigma_A) \times (B, \Sigma_B) := (A \times B, \Sigma_A \times \Sigma_B)$, where $\Sigma_A \times \Sigma_B$ is the coarsest $\sigma$-algebra such that the two projections are measurable.

A useful construction is the category of probability measures and measure-preserving functions, which can be defined based on the concept of the comma category:

$$
\mathbf{Prob} := (1 \hookrightarrow \mathbf{Stoch}) \downarrow (\mathbf{Meas} \to \mathbf{Stoch}). \tag{23}
$$

Concretely, $1 \hookrightarrow \mathbf{Stoch}$ is the inclusion functor, and the functor $\mathbf{Meas} \to \mathbf{Stoch}$ sends a measurable set $A$ to itself and a measurable function $f$ to its pushforward $f_{*}$.

Prob is a category whose objects are (isomorphic to) probability measures $(A, 1 \xrightarrow{p_A} PA)$, and morphisms $p_A \to p_B$ are measure-preserving functions $f: A \to B$ such that $p_B = f_* p_A$.
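For finite measures, the pushforward $f_* p_A$ and the measure-preservation condition $p_B = f_* p_A$ are easy to compute directly. The following sketch (our own toy example, not from the paper) represents a measure on a finite set as a dict of masses:

```python
# A probability measure on A = {1, 2, 3}.
p_A = {1: 0.2, 2: 0.3, 3: 0.5}

def f(a):
    # A function A -> B with B = {0, 1} (every function between
    # finite discrete spaces is measurable).
    return a % 2

def pushforward(p, f):
    # f_* p: sum the mass of every preimage point.
    q = {}
    for a, mass in p.items():
        b = f(a)
        q[b] = q.get(b, 0.0) + mass
    return q

p_B = pushforward(p_A, f)  # mass 0.3 on 0 and 0.7 on 1
print(p_B)
```

With $p_B := f_* p_A$, the function $f$ becomes (tautologically) a morphism $p_A \to p_B$ in Prob.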
This category will be important when characterizing the metrics based on entropy and mutual information (Baez et al., 2011).

Meas, Stoch, and Prob can be illustrated as follows:

![](images/bb17e0b7ad051af46a41bf2fe6674fc695bfee25926b513cb60e2b00b82f00b6.jpg)

All arrows are morphisms in Meas; red arrows are objects in Prob; yellow arrows are morphisms in Prob; green squiggly arrows represent morphisms in Stoch, which are the same as red or blue arrows.

The commutativity of red and yellow arrows indicates the composition of measure-preserving functions in Prob; the commutativity of blue and black arrows indicates the Kleisli composition of stochastic maps in Stoch.

As a side note, we can also use this construction to define the category of relations and relation-preserving functions (Adamek et al., 1990, Section 3.3):

$$
\mathbf{1} \hookrightarrow \mathbf{Rel} \downarrow \mathbf{Set} \to \mathbf{Rel}. \tag{25}
$$

# A.6. Markov Category

A Markov category (Fritz, 2020) is a symmetric monoidal category in which every object $A$ is equipped with a commutative comonoid structure given by a comultiplication $\mathrm{copy}_A: A \to A \otimes A$ and a counit $\mathrm{del}_A: A \to I$, depicted in string diagrams as

*(String diagram (26): $\mathrm{copy}_A$ is drawn as a wire splitting in two; $\mathrm{del}_A$ as a wire terminating in a dot.)*

and satisfying some compatibility conditions.

# A.7. Conditional Independence

Definition 4 (Conditional independence (Fritz, 2020, Definition 12.16)). A morphism $f: A \to X \otimes W \otimes Y$ displays the conditional independence $X \perp Y \mid W \parallel A$ if there are morphisms $g: A \to W$, $h: A \otimes W \to X$ and $k: W \otimes A \to Y$ such that

![](images/38f756cdaf59625e70e1a64b5c1ebd65ebb67b5f8fe15b0446815f3cc7aadd943.jpg)

Two special cases are when $A = I$ we have $X \perp Y \mid W$ and when $W = I$ we have $X \perp Y \parallel A$.
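In the concrete case of finite Stoch, the special case $X \perp Y \parallel A$ reduces to the familiar factorization $p(x, y \mid a) = p(x \mid a)\, p(y \mid a)$ for every $a$. Here is a sketch (our own toy kernel, not from the paper) that checks this condition numerically:

```python
# A finite Markov kernel f: A -> X (x) Y, as nested dicts of probabilities.
# Each slice f[a] is a joint distribution over pairs (x, y).
f = {
    "a0": {("x0", "y0"): 0.06, ("x0", "y1"): 0.14,
           ("x1", "y0"): 0.24, ("x1", "y1"): 0.56},
    "a1": {("x0", "y0"): 0.45, ("x0", "y1"): 0.45,
           ("x1", "y0"): 0.05, ("x1", "y1"): 0.05},
}

def independent_given_a(kernel, tol=1e-9):
    # X ⊥ Y ∥ A: every slice factorizes into its marginals,
    # which are obtained by "deleting" the other output.
    for joint in kernel.values():
        px, py = {}, {}
        for (x, y), p in joint.items():
            px[x] = px.get(x, 0.0) + p
            py[y] = py.get(y, 0.0) + p
        for (x, y), p in joint.items():
            if abs(p - px[x] * py[y]) > tol:
                return False
    return True

print(independent_given_a(f))  # True: each slice factorizes
```

Each slice above was constructed as an outer product of marginals, so the check passes; perturbing any single entry breaks the factorization.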
+ +Another way to define the modularity of a stochastic encoder is as follows, which relies on the existence of some other morphisms (cf. D. 1.2): + +Disentanglement 6.4. $\forall i\in [1..N].m_{i} = m_{i,i}\otimes \mathrm{del}_{Y_{\backslash i}}$ + +![](images/ab73ae4460c47f17c46305fae7ede01c06060c226251d975b4fc6c7001fad392.jpg) + +This condition was also studied in Cho & Jacobs (2019, Proposition 6.9). We can see that it is stronger than D. 6.3: + +Theorem 9. $D$ . 6.4 $\rightarrow$ D. 6.3. + +However, it is not yet clear if they are equivalent. + +# B. Example + +Let us start from the category Set. Consider the nonempty multiset monad $N$ in Set, which sends a set $A$ to $\mathbb{N}^A\setminus \varnothing$ . For example, the set $\{a,b\}$ is sent to + +$$ +\{\{(a, 1) \}, \{(a, 2) \}, \dots , \{(b, 1) \}, \dots , \{(a, 1), (b, 1) \}, \dots \} +$$ + +The Kleisli category $\mathbf{Set}_N$ of this monad consists of sets and multiset functions. A multiset function $f: A \rightsquigarrow B$ outputs how many ways to get a target $b \in B$ from a source $a \in A$ . The composition of multiset functions is defined by the multiplication and sum of natural numbers. This category is a Markov category. + +A Markov category $\mathbf{C}$ has a cartesian subcategory $\mathbf{C}_{\mathrm{det}}$ of deterministic morphisms. Given a small category $\mathbf{S}$ , the subcategory $[\mathbf{S},\mathbf{C}]_{\mathrm{det}}$ of the functor category $[\mathbf{S},\mathbf{C}]$ , which consists of functors of the form $\mathbf{S} \to \mathbf{C}_{\mathrm{det}} \hookrightarrow \mathbf{C}$ , is again a Markov category (Fritz, 2020, Section 7, notation slightly modified). The category $[\mathbf{S},\mathbf{C}]_{\mathrm{det}}$ contains deterministic diagrams of shape $\mathbf{S}$ and stochastic maps between them that preserve the shape. 
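The Kleisli composition in $\mathbf{Set}_N$ described above (multiply counts along each path, sum over the middle set) can be written out concretely. This is a sketch with toy data of our own, not an example from the paper:

```python
# A multiset function f: A ~> B is encoded as {a: {b: count}}, where
# count is the number of ways to get b from a.
def kleisli_compose(g, f):
    # (g . f)(a)(c) = sum_b f(a)(b) * g(b)(c):
    # multiply counts along paths, sum over the middle object.
    h = {}
    for a, ways_to_b in f.items():
        hc = {}
        for b, nb in ways_to_b.items():
            for c, nc in g.get(b, {}).items():
                hc[c] = hc.get(c, 0) + nb * nc
        h[a] = hc
    return h

f = {"a": {"b1": 2, "b2": 1}}          # 2 ways a -> b1, 1 way a -> b2
g = {"b1": {"c": 3}, "b2": {"c": 4}}   # 3 ways b1 -> c, 4 ways b2 -> c

print(kleisli_compose(g, f))  # {'a': {'c': 10}}  (2*3 + 1*4 paths a -> c)
```

The unit of the monad sends each element to the singleton multiset `{a: 1}`, which acts as the identity for this composition.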
The set of natural numbers can be considered a single-object category $(\ast, \mathbb{N}, +, 0)$ with the numbers as morphisms and the addition as the composition. The identity morphism $\mathrm{id}_{*}$ is the number 0.

Based on these, let us consider the category $[\mathbb{N},\mathbf{Set}_N]_{\mathrm{det}}$. This category contains sets equipped with endofunctions indexed by natural numbers as objects and multiset functions between these sets that preserve their endofunctions as morphisms. A natural transformation to a constant functor (which maps all morphisms to the identity morphism) in this category means that no matter how the input changes with time, the count is invariant. An example is shown below:

![](images/c5f0e19c1aa25f7e06829942d6b3d5c9627430dcef8a40c3e810c126c38f1aba.jpg)

If we want to characterize more complex behavior, we may simply change the source category $\mathbb{N}$ and define a proper category (possibly with a product structure) that encodes our requirements. The extension is left for future work.

# C. Proofs

Proposition 1.

$$
\begin{array}{ccccc} Y_1 & \xleftarrow{p_1} & Y_1 \times Y_2 & \xrightarrow{p_2} & Y_2 \\ {\scriptstyle m_{1,1}} \big\downarrow & & \big\downarrow {\scriptstyle m} & & \big\downarrow {\scriptstyle m_{2,2}} \\ Z_1 & \xleftarrow{p_1} & Z_1 \times Z_2 & \xrightarrow{p_2} & Z_2 \end{array} \tag{28}
$$

![](images/a4ff70a5c4154ae16674a22ee7cac8c5f3629039bdbf0616d75a3631034770a3.jpg)

Proposition 2.
$$
\begin{array}{ccc} Y_1 & \xleftarrow[\cong]{p_1} & Y_1 \times 1 \\ {\scriptstyle \mathrm{id}_{Y_1}} \big\downarrow & & \big\downarrow {\scriptstyle \mathrm{id}_{Y_1} \times y_2} \\ Y_1 & \xleftarrow{p_1} & Y_1 \times Y_2 \\ {\scriptstyle m_{1,1}} \big\downarrow & & \big\downarrow {\scriptstyle m} \\ Z_1 & \xleftarrow{p_1} & Z_1 \times Z_2 \end{array} \tag{29}
$$

![](images/e0f24fc9419776ba67038b90c65d790d17ca65b925398b12222aad6b68bc846f.jpg)

Theorem 3 can be proven using the following lemma:

Lemma 10. Let $f: A \times B \to C$ be a morphism from a product and $\widehat{f}: B \to C^A$ its exponential transpose. Then, there exists a morphism $f': A \to C$ such that $f = f' \circ p_1$ if and only if the exponential transpose $\widehat{f}$ is a constant morphism.

Proof. Diagram chase:

![](images/b0547a8eb99fe606472f1f88a7b2227f7b32c730382dc768b59057f8f4737b66.jpg)

We need to use the following commutative diagrams: (i) red: the universal property of the exponential object $C^A$ and the evaluation morphism $\epsilon_A$; (ii) green: the constant morphism $\widehat{f}$, which factors through the terminal object 1 and defines the morphism $\widehat{f}'$; (iii) blue: the product morphism $\mathrm{id}_A \times \widehat{f}'$; and (iv) yellow: the definition of $f'$.

It is easy to prove that $\widehat{f}: B \to C^A$ is a constant morphism if $f = f' \circ p_1$. Conversely, suppose $\widehat{f}: B \to C^A$ is a constant morphism, so it factors through the terminal object 1. We denote the resulting morphism by $\widehat{f}': 1 \to C^A$. We can define $f': A \to C$ as $\epsilon_A \circ (\mathrm{id}_A \times \widehat{f}')$. To prove $f = f' \circ p_1$, i.e., $f = \epsilon_A \circ (\mathrm{id}_A \times \widehat{f}') \circ p_1$, we only need to show $\mathrm{id}_A \times \widehat{f} = (\mathrm{id}_A \times \widehat{f}') \circ (\mathrm{id}_A \times e_B)$.
This triangle commutes because it is simply a product of the identity morphism $\mathrm{id}_A$ and the constant morphism $\widehat{f}$ . + +Alternatively, we can also characterize product morphisms using pullback. Concretely, let $Y \times_{Y_i} Y$ be the pullback of the projections $p_i: Y \to Y_i$ and $\pi_1, \pi_2: Y \times_{Y_i} Y \to Y$ be the pullback projections. In the category Set of sets, the pullback $Y \times_{Y_i} Y = \{(y, y') \in Y \times Y \mid y_i = y'_i\}$ is the set of pairs of factors whose $i$ -th components are the same. Then, $m$ is a product morphism if and only if $m_i \circ \pi_1 = m_i \circ \pi_2$ , i.e., $m_i(y_i, y_{\backslash i}) = m_i(y_i, y_{\backslash i}')$ . This can be proven using the following lemma: + +Lemma 11. Let $f: A \times B \to C$ be a morphism from a product and $(A \times B) \times_A (A \times B)$ be the pullback of the projections $p_1: A \times B \to A$ with two pullback projections $\pi_1, \pi_2: (A \times B) \times_A (A \times B) \to A \times B$ . Then, there exists a morphism $f': A \to C$ such that $f = f' \circ p_1$ if and only if $f \circ \pi_1 = f \circ \pi_2$ . + +Proof. Diagram chase: + +![](images/7711375a3b81b2036e6e86db53a93dd0c32686a31d783df7395b3c63b1c3f53e.jpg) + +Suppose that $f = f' \circ p_1$ . Because the pullback rectangle commutes, $p_1 \circ \pi_1 = p_1 \circ \pi_2$ , it is easy to show that $f \circ \pi_1 = f' \circ p_1 \circ \pi_1 = f' \circ p_1 \circ \pi_2 = f \circ \pi_2$ . + +Now suppose that $f \circ \pi_1 = f \circ \pi_2$ . We define $f': A \to C$ as $f \circ \langle \mathrm{id}_A, g \rangle$ for an arbitrary morphism $g: A \to B$ . To prove $f = f' \circ p_1$ , we can consider two morphisms $\operatorname{id}_{A \times B}$ and $\langle \mathrm{id}_A, g \rangle \circ p_1$ . 
Because they complete the commutative diagram of the pullback $(A \times B) \times_A (A \times B)$ , $p_1 \circ \operatorname{id}_{A \times B} = p_1 \circ \langle \mathrm{id}_A, g \rangle \circ p_1 = p_1$ , there exists a unique morphism $u: A \times B \to (A \times B) \times_A (A \times B)$ such that $\pi_1 \circ u = \operatorname{id}_{A \times B}$ and $\pi_2 \circ u = \langle \mathrm{id}_A, g \rangle \circ p_1$ . We can now chase the diagram to show that $f = f \circ \operatorname{id}_{A \times B} = f \circ \pi_1 \circ u = f \circ \pi_2 \circ u = f \circ \langle \mathrm{id}_A, g \rangle \circ p_1 = f' \circ p_1$ . + +To prove that this construction does not depend on specific choice of $g: A \to B$ , let us consider two morphisms $g, g': A \to B$ . Because $\langle \mathrm{id}_A, g \rangle$ and $\langle \mathrm{id}_A, g' \rangle$ complete + +the commutative diagram of the pullback, there exists a unique morphism $v: A \to (A \times B) \times_A (A \times B)$ such that $\pi_1 \circ v = \langle \mathrm{id}_A, g' \rangle$ and $\pi_2 \circ v = \langle \mathrm{id}_A, g \rangle$ . Then, $f \circ \langle \mathrm{id}_A, g \rangle = f \circ \pi_2 \circ v = f \circ \pi_1 \circ v = f \circ \langle \mathrm{id}_A, g' \rangle$ , which shows that $f' = f \circ \langle \mathrm{id}_A, g \rangle$ is independent of the choice of $g: A \to B$ . + +Based on this, we can obtain the following diagram: + +![](images/ca823c3c70b8a32c061f6378153446970ac32ca0354b2590c82536e4ec90f0ec.jpg) + +Both Lemmas 10 and 11 show that there are alternative ways to characterize "invariance", without a group theoretical formulation. + +Theorem 4. 
$$
\begin{array}{ccc}
Y_{1} & \xleftarrow{p_{1}} & Y_{1} \times Y_{2} \\
\mathrm{id}_{Y_{1}} \Big\downarrow & & \Big\downarrow \\
Y_{1} & \xleftarrow{p_{1}} & Y_{1} \times Y_{2} \\
m_{1,1} \Big\downarrow & & \Big\downarrow \\
Z_{1} & \xleftarrow{p_{1}} & Z_{1} \times Z_{2} \\
\mathrm{id}_{Z_{1}} \Big\downarrow & & \Big\downarrow h,\,\cong \\
Z_{1} & \xleftarrow{p_{1}} & Z_{1} \times Z_{2}
\end{array} \tag{33}
$$

![](images/63377bc2c00700d19b53fb24eca61905cd5e8093cf609c21b9e70441086e0b5b.jpg)

Proposition 5. Let $F, G: \mathbf{C} \to \mathbf{D}$ be product-preserving functors.

![](images/9e39d839bb7fc2ba1cab7d35a6e8c4d69308aeaa74ba755cd4b7b022a45eb94e.jpg)

Theorem 6. Let $F, G: \mathbf{C} \to \mathbf{D}$ be functors and $\alpha: F \Rightarrow G$ be a natural transformation.

$$
\begin{array}{ccc}
FA & \xrightarrow{\alpha_{A}} & GA \\
Fp,\,Fq \Big\downarrow & & \Big\downarrow Gp,\,Gq \\
FB & \xrightarrow{\alpha_{B}} & GB
\end{array} \tag{35}
$$

We have the following reasoning:

- $F$ is not faithful: $\exists p \neq q.\ Fp = Fq$
- $\alpha$ is natural: $Fp = Fq \to Gp \circ \alpha_{A} = Gq \circ \alpha_{A}$
- $\alpha$ is epic: $Gp \circ \alpha_{A} = Gq \circ \alpha_{A} \to Gp = Gq$

Then,

$F$ is not faithful $\wedge$ $\alpha$ is epic $\rightarrow$ $G$ is not faithful. (36)

Or equivalently,

$\alpha$ is epic $\rightarrow$ ($G$ is faithful $\rightarrow$ $F$ is faithful). (37)

Similarly,

- $G$ is not faithful: $\exists p \neq q.\ Gp = Gq$
- $\alpha$ is natural: $Gp = Gq \to \alpha_{B} \circ Fp = \alpha_{B} \circ Fq$
- $\alpha$ is monic: $\alpha_{B} \circ Fp = \alpha_{B} \circ Fq \to Fp = Fq$

Then,

$G$ is not faithful $\wedge$ $\alpha$ is monic $\rightarrow$ $F$ is not faithful. (38)

Or equivalently,

$\alpha$ is monic $\rightarrow$ ($F$ is faithful $\rightarrow$ $G$ is faithful).
(39)

![](images/786232673e663cbabd78683f800470a9a96e1a38a5254a488b001704ab21efa.jpg)

Theorem 8. When $N = 2$, D. 6.2 is the definition of D. 6.1 (Fritz, 2020, Lemma 12.11). When $N > 2$, we can apply this equation recursively.

![](images/ed1fa07dd631aed2d55183ba6b36e8d0c1b8087c12068cab2440df207b7cf33f.jpg)

![](images/8fbd4d4eab140dd57d577107fc7343b952e4c64a4adc484168f60e34ff332c76.jpg)

# A Closer Look at Few-shot Classification Again

Xu Luo\*1 Hao Wu\*1 Ji Zhang\*1 Lianli Gao\*1 Jing Xu\*2 Jingkuan Song

# Abstract

Few-shot classification consists of a training phase where a model is learned on a relatively large dataset and an adaptation phase where the learned model is adapted to previously-unseen tasks with limited labeled samples.
In this paper, we empirically prove that the training algorithm and the adaptation algorithm can be completely disentangled, which allows algorithm analysis and design to be done individually for each phase. Our meta-analysis for each phase reveals several interesting insights that may help better understand key aspects of few-shot classification and connections with other fields such as visual representation learning and transfer learning. We hope the insights and research challenges revealed in this paper can inspire future work in related directions. Code and pre-trained models (in PyTorch) are available at https://github.com/Frankluox/CloserLookAgainFewShot.

# 1. Introduction

During the last decade, deep learning approaches have made remarkable progress in large-scale image classification problems (Krizhevsky et al., 2012; He et al., 2016). Since there are infinitely many categories in the real world that cannot be learned at once, a natural desire following this success in image classification is to equip models with the ability to efficiently learn new visual concepts. This demand gives rise to few-shot classification (Fei-Fei et al., 2006; Vinyals et al., 2016)—the problem of learning a model capable of adapting to new classification tasks given only a few labeled samples.

*Equal contribution $^{1}$ University of Electronic Science and Technology of China $^{2}$ Harbin Institute of Technology Shenzhen. Correspondence to: Jingkuan Song.

Proceedings of the $40^{th}$ International Conference on Machine Learning, Honolulu, Hawaii, USA. PMLR 202, 2023. Copyright 2023 by the author(s).

This problem can be naturally broken into two phases: the training phase for learning an adaptable model and the adaptation phase for adapting the model to new tasks. To make quick adaptation possible, it is natural to think that the
design of the training algorithm should prepare for the algorithm used for adaptation. For this reason, pioneering works (Vinyals et al., 2016; Finn et al., 2017; Ravi & Larochelle, 2017) formalize the problem within a meta-learning framework, where the training algorithm directly aims at optimizing the adaptation algorithm during training in a learning-to-learn fashion. Attracted by meta-learning's elegant formalization and properties well suited to few-shot learning, many subsequent works designed different meta-learning mechanisms to solve few-shot classification problems.

It is then a surprise to find that a simple transfer learning baseline — learning a supervised model using the training set and adapting it using a simple adaptation algorithm (e.g., logistic regression) — performs better than all meta-learning methods (Chen et al., 2019; Tian et al., 2020; Rizve et al., 2021). Since simple supervised training is not designed specifically for few-shot classification, this observation reveals that the training algorithm can be designed without considering the choice of adaptation algorithm while still achieving satisfactory performance. In this work, we take a step further and ask the following question:

Are training and adaptation algorithms completely uncorrelated in few-shot classification?

Here, "completely uncorrelated" means that the performance ranking of any set of adaptation algorithms is not affected by the choice of training algorithm, and vice versa. If this is true, the problem of finding the best combination of training and adaptation algorithms reduces to optimizing the training and adaptation algorithms individually, which may largely ease the algorithm design process in the future. We give an affirmative answer to this question by conducting a systematic study on a variety of training and adaptation algorithms used in few-shot classification.

This "uncorrelated" property also offers us an opportunity to independently analyze the algorithms of one phase by fixing the algorithm of the other phase.
We conduct such analysis in Section 4 for training algorithms and Section 5 for adaptation algorithms. By varying important factors such as dataset scale and model architecture for the training phase, and shots, ways, and data distribution for the adaptation phase, we obtain several interesting observations that lead to a deeper understanding of few-shot classification and reveal some critical relations to the visual representation learning and transfer learning literature. Such meta-level understanding can be useful for future few-shot learning research. The analysis for each phase leads to the following key observations:

1. We observed a different neural scaling law in few-shot classification, in which test error falls off as a power law with the number of training classes instead of the number of training samples per class. This observation highlights the importance of the number of training classes in few-shot classification and may help future research further understand the crucial difference between few-shot classification and other vision tasks.
2. We found two evaluated datasets on which increasing the scale of the training dataset does not always lead to better few-shot performance. This suggests that it is never realistic to train a model that can solve all possible tasks well just by feeding it a very large amount of data. This also indicates the importance of properly filtering training knowledge for different few-shot classification tasks.
3. We found that standard ImageNet performance is not a good predictor of few-shot performance for supervised models (contrary to previous observations in other vision tasks), but it does predict well for self-supervised models. This observation may become the key to understanding both the difference between few-shot classification and other vision tasks, and the difference between supervised learning and self-supervised learning.
4.
We found that, contrary to a common belief that finetuning the whole network with few samples would lead to severe overfitting, vanilla finetuning performs the best among all adaptation algorithms even when data is extremely scarce, e.g., in a 5-way 1-shot task. In particular, partial-finetuning methods designed to overcome the overfitting problem of vanilla finetuning in the few-shot setting perform worse. The advantage of finetuning grows with the number of ways and shots and with the degree of task distribution shift. However, finetuning methods suffer from extremely high time complexity. We show that the difference in these factors is the reason why state-of-the-art methods on different few-shot classification benchmarks differ in their adaptation algorithms.

# 2. The Problem of Few-shot Classification

Few-shot classification aims to learn a model that can quickly adapt to a novel classification task given only a few observations. In the training phase, given a training dataset $\mathcal{D}^{train} = \{x_n,y_n\}_{n = 1}^{\left|\mathcal{D}^{train}\right|}$ with $N_{C}$ classes, where $x_{i}\in \mathbb{R}^{D}$ is the $i$-th image and $y_{i}\in [N_{C}]$ is its label, a model $f_{\theta}$ is learned via a training algorithm $\mathcal{A}^{train}$, i.e., $\mathcal{A}^{train}(\mathcal{D}^{train}) = f_{\theta}$. In the adaptation phase, a series of few-shot classification tasks $\mathcal{T} = \{\tau_i\}_{i = 1}^{N_T}$ are constructed from the test dataset $\mathcal{D}^{test}$, whose classes and domains may differ from those of $\mathcal{D}^{train}$. Each task $\tau$ consists of a support set $S = \{(x_i, y_i)\}_{i=1}^{N_S}$ used for adaptation and a query set $\mathcal{Q} = \{(x_i^*, y_i^*)\}_{i=1}^{N_Q}$ that is used for evaluation and shares the same label space with $S$. $\tau$ is called an $N$-way $K$-shot task if there are $N$ classes in the support set $S$ and each class contains exactly $K$ samples.
To solve each task $\tau$, the adaptation algorithm $\mathcal{A}^{adapt}$ takes the learned model $f_\theta$ and the support set $S$ as inputs and produces a new classifier $g(\cdot; f_\theta, S): \mathbb{R}^D \to [N]$. The constructed classifier is then evaluated on the query set $\mathcal{Q}$ to test its generalization ability. The evaluation metric is the average performance over all sampled tasks. We denote the resultant average accuracy and the radius of the $95\%$ confidence interval as functions of the training and adaptation algorithms: $\mathrm{Avg}(\mathcal{A}^{train}, \mathcal{A}^{adapt})$ and $\mathrm{CI}(\mathcal{A}^{train}, \mathcal{A}^{adapt})$, respectively.

Depending on the form of the training algorithm $\mathcal{A}^{train}$, the model $f_{\theta}$ can be different. For non-meta-learning methods, $f_{\theta}: \mathbb{R}^{D} \to \mathbb{R}^{d}$ is simply a feature extractor that takes an image $x \in \mathbb{R}^{D}$ as input and outputs a feature vector $z \in \mathbb{R}^{d}$. Thus any visual representation learning algorithm can be used as $\mathcal{A}^{train}$. For meta-learning methods, the training algorithm directly aims at optimizing the performance of the adaptation algorithm $\mathcal{A}^{adapt}$ in a learning-to-learn fashion. Specifically, meta-learning methods first parameterize the adaptation algorithm as $\mathcal{A}_{\theta}^{adapt}$ to make it learnable. Then the model $f_{\theta}$ used for training is set equal to $\mathcal{A}_{\theta}^{adapt}$, i.e., $\mathcal{A}^{train}(\mathcal{D}^{train}) = f_{\theta} = \mathcal{A}_{\theta}^{adapt}$. The training process consists of constructing pseudo few-shot classification tasks $\mathcal{T}^{train} = \{(S_{t}^{train}, \mathcal{Q}_{t}^{train})\}_{t=1}^{N_{\mathcal{T}}^{train}}$ from $\mathcal{D}^{train}$ that take the same form as the tasks encountered during adaptation.
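Constructing such pseudo $N$-way $K$-shot episodes from a labeled dataset can be sketched as follows. This is a minimal illustration rather than the paper's code; the function name and the plain `(image, label)` list format are our own assumptions:

```python
import random
from collections import defaultdict

def sample_episode(dataset, n_way=5, k_shot=5, n_query=15, rng=random):
    """Sample one N-way K-shot task (support + query) from a labeled dataset.

    `dataset` is a list of (image, label) pairs; images are opaque objects.
    """
    by_class = defaultdict(list)
    for x, y in dataset:
        by_class[y].append(x)
    # Keep only classes with enough samples for both support and query sets.
    eligible = [c for c, xs in by_class.items() if len(xs) >= k_shot + n_query]
    classes = rng.sample(eligible, n_way)
    support, query = [], []
    for new_label, c in enumerate(classes):  # relabel classes 0..N-1 per task
        xs = rng.sample(by_class[c], k_shot + n_query)
        support += [(x, new_label) for x in xs[:k_shot]]
        query += [(x, new_label) for x in xs[k_shot:]]
    return support, query
```

Classes are relabeled to $0,\dots,N-1$ within each task, mirroring the fact that every task defines its own label space.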
In each iteration $t$, just as in the adaptation phase, the model $f_{\theta}$ takes $S_{t}^{train}$ as input and outputs a classifier $g(\cdot; S_{t})$. Images in $\mathcal{Q}_{t}^{train}$ are then fed into $g(\cdot; S_{t})$, yielding a loss that is used to update $f_{\theta}$. After training, $f_{\theta}$ is directly used as the adaptation algorithm $\mathcal{A}_{\theta}^{adapt}$. Although different from non-meta-learning methods, most meta-learning algorithms still set the learnable parameters $\theta$ to be the parameters of a feature extractor, making it possible to change the algorithm used for adaptation.

# 3. Are Training and Adaptation Algorithms Uncorrelated?

Given a set of training algorithms $M^{train} = \{\mathcal{A}_i^{train}\}_{i=1}^{m_1}$ and a set of adaptation algorithms $M^{adapt} = \{\mathcal{A}_i^{adapt}\}_{i=1}^{m_2}$, we say that $M^{train}$ and $M^{adapt}$ are uncorrelated if changing algorithms from $M^{train}$ does not influence the performance ranking of algorithms from $M^{adapt}$, and vice versa. To give a precise description, we first define a partial order.

Definition 3.1. We say two training algorithms $\mathcal{A}_a^{train},\mathcal{A}_b^{train}$ have the partial order $\mathcal{A}_a^{train} \preceq \mathcal{A}_b^{train}$, if

Table 1. Few-shot classification performance of pairwise combinations of a variety of training and adaptation algorithms. All evaluation tasks are 5-way 5-shot tasks sampled from Meta-Dataset (excluding ImageNet). We sample 2000 tasks per dataset in Meta-Dataset and report the average accuracy over all datasets along with the $95\%$ confidence interval. The algorithms are listed according to their partial order (Definition 3.2), from top to bottom and from left to right. * denotes a training algorithm that uses transductive BN (Bronskill et al., 2020), which produces much higher, unfairly inflated performance when Finetune and TSA are used as adaptation algorithms.
†: TSA and eTT are both architecture-specific partial-finetuning algorithms; TSA can be used only with CNNs and eTT only with the original ViT.
*(The last eight columns correspond to the adaptation algorithms.)*

| Training algorithm | Training dataset | Architecture | MatchingNet | MetaOpt | NCC | LR | URL | CC | TSA/eTT† | Finetune |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| PN | miniImageNet | Conv-4 | 48.54±0.4 | 49.84±0.4 | 51.38±0.4 | 51.65±0.4 | 51.82±0.4 | 51.56±0.4 | 58.08±0.4 | 60.88±0.4 |
| MAML* | miniImageNet | Conv-4 | 53.71±0.4 | 53.69±0.4 | 55.01±0.4 | 55.03±0.4 | 55.66±0.4 | 55.63±0.4 | 62.80±0.4 | 64.87±0.4 |
| CE | miniImageNet | Conv-4 | 54.68±0.4 | 56.79±0.4 | 58.54±0.4 | 58.26±0.4 | 59.63±0.4 | 59.20±0.5 | 64.14±0.4 | 65.12±0.4 |
| MatchingNet | miniImageNet | ResNet-12 | 55.62±0.4 | 57.20±0.4 | 58.91±0.4 | 58.99±0.4 | 61.20±0.4 | 60.50±0.4 | 64.88±0.4 | 67.93±0.4 |
| MAML* | miniImageNet | ResNet-12 | 58.42±0.4 | 58.52±0.4 | 59.65±0.4 | 60.04±0.4 | 60.38±0.4 | 60.50±0.4 | 71.15±0.4 | 73.13±0.4 |
| PN | miniImageNet | ResNet-12 | 60.19±0.4 | 61.70±0.4 | 63.71±0.4 | 64.46±0.4 | 65.64±0.4 | 65.76±0.4 | 70.44±0.4 | 74.23±0.4 |
| MetaOpt | miniImageNet | ResNet-12 | 62.06±0.4 | 63.94±0.4 | 65.81±0.4 | 66.03±0.4 | 67.47±0.4 | 67.24±0.4 | 72.07±0.4 | 74.96±0.4 |
| DeepEMD | miniImageNet | ResNet-12 | 62.67±0.4 | 64.15±0.4 | 66.14±0.4 | 66.14±0.4 | 68.66±0.4 | 69.76±0.4 | 74.21±0.4 | 74.83±0.4 |
| CE | miniImageNet | ResNet-12 | 63.27±0.4 | 64.91±0.4 | 66.96±0.4 | 67.14±0.4 | 69.78±0.4 | 69.52±0.4 | 74.30±0.4 | 74.89±0.4 |
| Meta-Baseline | miniImageNet | ResNet-12 | 63.25±0.4 | 65.02±0.4 | 67.28±0.4 | 67.56±0.4 | 69.84±0.4 | 69.76±0.4 | 73.94±0.4 | 75.04±0.4 |
| COS | miniImageNet | ResNet-12 | 63.99±0.4 | 66.09±0.4 | 68.31±0.4 | 69.26±0.4 | 70.71±0.4 | 71.03±0.4 | 75.10±0.4 | 75.68±0.4 |
| PN | ImageNet | ResNet-50 | 63.68±0.4 | 65.79±0.4 | 68.40±0.4 | 68.87±0.4 | 69.69±0.4 | 70.81±0.4 | 74.15±0.4 | 78.42±0.4 |
| S2M2 | miniImageNet | WRN-28-10 | 64.41±0.4 | 66.59±0.4 | 68.67±0.4 | 69.16±0.4 | 70.88±0.4 | 71.38±0.4 | 74.94±0.4 | 76.89±0.4 |
| FEAT | miniImageNet | ResNet-12 | 65.42±0.4 | 67.15±0.4 | 69.06±0.4 | 69.21±0.4 | 71.24±0.4 | 72.07±0.4 | 75.99±0.4 | 76.83±0.4 |
| IER | miniImageNet | ResNet-12 | 65.37±0.4 | 67.31±0.4 | 69.30±0.4 | 70.01±0.4 | 72.48±0.4 | 72.85±0.4 | 76.70±0.4 | 77.54±0.4 |
| MoCo v2 | ImageNet | ResNet-50 | 65.47±0.5 | 68.63±0.4 | 71.05±0.4 | 71.49±0.4 | 74.46±0.4 | 74.57±0.4 | 79.70±0.4 | 79.98±0.4 |
| Exemplar v2 | ImageNet | ResNet-50 | 67.70±0.5 | 70.07±0.4 | 72.55±0.4 | 72.93±0.4 | 75.26±0.4 | 76.83±0.4 | 80.22±0.4 | 81.75±0.4 |
| DINO | ImageNet | ResNet-50 | 73.97±0.4 | 76.45±0.4 | 78.30±0.4 | 78.72±0.4 | 80.73±0.4 | 81.05±0.4 | 83.64±0.4 | 83.20±0.4 |
| CE | ImageNet | ResNet-50 | 74.75±0.4 | 76.94±0.4 | 78.96±0.4 | 79.57±0.4 | 80.89±0.4 | 81.51±0.4 | 84.07±0.4 | 84.92±0.4 |
| BiT-S | ImageNet | ResNet-50 | 75.44±0.4 | 77.86±0.4 | 79.84±0.4 | 79.97±0.4 | 81.79±0.4 | 81.91±0.4 | 84.84±0.3 | 86.40±0.3 |
| CE | ImageNet | Swin-B | 75.17±0.4 | 77.81±0.4 | 80.06±0.4 | 81.04±0.4 | 82.55±0.4 | 82.46±0.4 | - | 88.16±0.3 |
| DeiT | ImageNet | ViT-B | 75.82±0.4 | 78.34±0.4 | 80.62±0.4 | 81.68±0.4 | 82.80±0.3 | 83.13±0.4 | 84.22±0.3 | 87.62±0.3 |
| CE | ImageNet | ViT-B | 76.78±0.4 | 78.81±0.4 | 80.65±0.4 | 81.13±0.3 | 82.69±0.3 | 82.77±0.3 | 85.60±0.3 | 88.48±0.3 |
| DINO | ImageNet | ViT-B | 76.44±0.4 | 79.11±0.4 | 81.23±0.4 | 82.01±0.4 | 84.16±0.3 | 84.44±0.3 | 86.25±0.3 | 88.04±0.3 |
| CLIP | WebImageText | ViT-B | 78.06±0.4 | 81.20±0.4 | 83.04±0.3 | 83.22±0.3 | 84.11±0.3 | 84.20±0.3 | 87.66±0.3 | 90.26±0.3 |
for all $i \in [m_2]$,

$$
\begin{array}{l}
\mathrm{Avg}\left(\mathcal{A}_{a}^{train}, \mathcal{A}_{i}^{adapt}\right) - \mathrm{CI}\left(\mathcal{A}_{a}^{train}, \mathcal{A}_{i}^{adapt}\right) \\
\quad < \mathrm{Avg}\left(\mathcal{A}_{b}^{train}, \mathcal{A}_{i}^{adapt}\right) + \mathrm{CI}\left(\mathcal{A}_{b}^{train}, \mathcal{A}_{i}^{adapt}\right).
\end{array} \tag{1}
$$

This inequality holds when the values inside the confidence interval of $\mathcal{A}_b^{train}$ are all larger than, or at least overlap with, those of $\mathcal{A}_a^{train}$ when evaluated with every adaptation algorithm in $M^{adapt}$. This implies that, with considerable probability, the performance of $\mathcal{A}_b^{train}$ is no worse than that of $\mathcal{A}_a^{train}$ when combined with any possible adaptation algorithm $\mathcal{A}_i^{adapt}$; thus, with high probability, the ranking of the two training algorithms is not influenced by the adaptation algorithm. We use $\preceq$ instead of $\prec$ to show that the defined partial order is not strict, so it is valid for $\mathcal{A}_a^{train} \preceq \mathcal{A}_b^{train}$ and $\mathcal{A}_b^{train} \preceq \mathcal{A}_a^{train}$ to hold simultaneously, meaning that the two algorithms are comparable. The partial order inside $M^{adapt}$ is defined similarly by exchanging the roles of training and adaptation algorithms. We are now ready to define what it means for two sets of algorithms to be uncorrelated.

Definition 3.2. $M^{train}$ and $M^{adapt}$ are uncorrelated if they are both ordered sets with respect to the partial order relation defined in Definition 3.1.
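As a sketch, the check behind Definitions 3.1-3.2 can be written directly from Eq. (1). Here `avg` and `ci` are hypothetical arrays of $\mathrm{Avg}$ and $\mathrm{CI}$ values, with one row per training algorithm and one column per adaptation algorithm:

```python
import numpy as np

def precedes(avg, ci, a, b):
    """Definition 3.1: row `a` precedes row `b` if, for every adaptation
    algorithm i, Avg(a, i) - CI(a, i) < Avg(b, i) + CI(b, i)."""
    return bool(np.all(avg[a] - ci[a] < avg[b] + ci[b]))

def is_ordered(avg, ci):
    """One side of Definition 3.2: every pair of training algorithms is
    comparable under the partial order, i.e. the rows form an ordered set."""
    n = avg.shape[0]
    return all(precedes(avg, ci, a, b) or precedes(avg, ci, b, a)
               for a in range(n) for b in range(n))
```

The adaptation-algorithm side of Definition 3.2 is the same check applied to the transposed arrays, `avg.T` and `ci.T`.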
Now, to see whether training and adaptation algorithms in few-shot classification are uncorrelated, we choose a wide range of training and adaptation algorithms from previous few-shot classification methods, with various training datasets and network architectures, to form $M^{train}$ and $M^{adapt}$. We then conduct experiments on each pair of algorithms, one from $M^{train}$ and another from $M^{adapt}$, to check whether the two sets are ordered sets.

Algorithms evaluated. The selected set of training algorithms $M^{train}$ encompasses both meta-learning and non-meta-learning methods. For meta-learning methods, we evaluate MAML (Finn et al., 2017), ProtoNet (Snell et al., 2017), MatchingNet (Vinyals et al., 2016), MetaOpt (Lee et al., 2019), FEAT (Ye et al., 2020), DeepEMD (Zhang et al., 2020) and Meta-Baseline (Chen et al., 2021b). For non-meta-learning methods, we evaluate supervised algorithms including the Cross-Entropy baseline (Chen et al., 2019), COS (Luo et al., 2021), S2M2 (Mangla et al., 2020), IER (Rizve et al., 2021), BiT (Kolesnikov et al., 2020), Exemplar v2 (Zhao et al., 2021) and DeiT (Touvron et al., 2021); unsupervised algorithms including MoCo-v2 (He et al., 2020) and DINO (Caron et al., 2021); and a multimodal pre-training algorithm, CLIP (Radford et al., 2021). $M^{adapt}$ encompasses adaptation algorithms from meta-learning methods, including MatchingNet, MetaOpt, the Nearest Centroid Classifier (PN), and Finetune (MAML); those from non-meta-learning methods, including Logistic Regression (Tian et al., 2020), URL (Li et al., 2021), and the Cosine Classifier (Chen et al., 2019); and the test-time-only methods TSA (Li et al., 2022b) and eTT (Xu et al., 2022a).

Datasets. For the test dataset, we choose Meta-Dataset (Triantafillou et al., 2020), a dataset of datasets that covers 10 diverse vision datasets from different domains. We remove ImageNet from Meta-Dataset to avoid label leakage from training.
For training, we choose three datasets of different scales: the train split of miniImageNet (Vinyals et al., 2016), which contains 38400 images from 64 classes; the train split of ImageNet (Deng et al., 2009), which contains more than 1 million images from 1000 classes; and a large-scale multimodal dataset, WebImageText (Radford et al., 2021), which contains 400 million (image, text) pairs. For completeness, we also show traditional miniImageNet-only experiments in Tables 4-5 in the Appendix.

Results. Table 1 shows the 5-way 5-shot performance of all pairwise combinations of algorithms from $M^{train}$ and $M^{adapt}$. As seen, both training algorithms and adaptation algorithms form ordered sets according to Definition 3.2: when we fix any adaptation algorithm (a column in the table), the performance is monotonically increasing (or at least the confidence intervals overlap) as we move from top to bottom; similarly, adaptation algorithms form an ordered set from left to right. 1-shot results are similar and are given in Table 3 in the Appendix. Since we have covered a broad range of representative few-shot classification algorithms, we can say that, with high probability, training and adaptation algorithms are uncorrelated in few-shot classification.

Remark. According to Definition 3.2, since $M^{train}$ and $M^{adapt}$ are uncorrelated, changing algorithms in either $M^{train}$ or $M^{adapt}$ along the sequences in the ordered set always leads to performance improvement. Thus a simple greedy search over either set of algorithms always reaches the global optimum. A direct consequence is that if the algorithms of the two phases are optimal on their own, their combination is optimal too. For example, from Table 1 we can see that, for 5-way 5-shot tasks on Meta-Dataset, CLIP and Finetune are the optimal training and adaptation algorithms respectively, and their combination is also the optimal combination.
This algorithm-disentangled property would greatly simplify the algorithm design process in few-shot classification. In the next two sections, we will, for the first time, individually analyze each of the two phases of algorithms while fixing the algorithms of the other phase.

# 4. Training Analysis

Throughout this section, we fix the adaptation algorithm to the Nearest-Centroid Classifier and analyze some aspects of interest in the training process of few-shot classification. According to Section 3, with high probability the observations would not change if we changed the adaptation algorithm.

# 4.1. On the Scale of Training Dataset

We are first interested in understanding how the scale of the training dataset influences few-shot classification performance. In few-shot classification, since classes in training and adaptation do not need to overlap, in addition to increasing the number of samples per class, we can also increase the training dataset size by increasing the number of training classes. This is different from standard vision classification tasks, where studying the effect of increasing the number of samples per class is of more interest.

We conduct both types of scaling experiments on the training set of ImageNet, a standard vision dataset that is commonly used as a pre-training dataset for downstream tasks. We choose three representative training algorithms that cover the main types of algorithms: (1) Cross-Entropy (CE) training, the standard supervised training in image classification tasks; (2) ProtoNet (PN), a widely-used meta-learning algorithm; (3) MoCo-v2, a strong unsupervised visual representation learning algorithm. For each dataset scale, we randomly select samples or classes 5 times, train a model using the specified training algorithm, and report the average performance and the standard deviation over the 5 training trials.
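The Nearest-Centroid Classifier fixed here as the adaptation algorithm amounts to only a few lines. The following is a minimal sketch operating on pre-extracted features; the function name is ours:

```python
import numpy as np

def ncc_predict(support_feats, support_labels, query_feats):
    """Nearest-Centroid Classifier: average the support features of each
    class into a centroid, then assign every query to its closest centroid."""
    classes = np.unique(support_labels)
    centroids = np.stack([support_feats[support_labels == c].mean(axis=0)
                          for c in classes])
    # Pairwise Euclidean distances, shape (n_query, n_classes).
    dists = np.linalg.norm(query_feats[:, None, :] - centroids[None, :, :],
                           axis=-1)
    return classes[dists.argmin(axis=1)]
```

No parameters are learned at adaptation time, which is what makes this classifier a convenient fixed reference point for comparing training algorithms.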
The adaptation datasets we choose include 9 datasets from Meta-Dataset and the standard validation set of ImageNet. We plot the results of varying the number of samples per class in Figure 1 and the results of varying the number of classes in Figure 2. Both axes are plotted in log scale. We also report results evaluated on 9 additional datasets in the BSCD-FSL benchmark and DomainNet in Figures 8-9 in the Appendix. We make the following observations.

Neural scaling laws for training. Comparing Figures 1 and 2, we can see that for supervised models (CE and PN), increasing the number of classes is much more effective than increasing the number of samples per class (we give clearer comparisons in Figures 10-12 in the Appendix). The effect of increasing the number of samples per class plateaus quickly, while increasing the number of classes leads to very stable performance improvement throughout all scales. We notice that most performance curves of PN and CE in Figure 2 look like a straight line. In Figures 13-15 in the appendix

![](images/95838f3744f578355e5492ec31680b1a4e3f4c6bbd9e708b76ce928a72fa6444.jpg)

![](images/dc99fb847fae4af635779b77ac2472d057fd60188bae5a216c210ea696158751.jpg)

![](images/e164e38df1501bda5c08c9870cb79e4327d6405851fb6771f4156a21e7bb6f2a.jpg)

![](images/90a52ef11fe322f7849d80e047ba86ed4cc9a8fff38f18a85a3e18e27cd4b679.jpg)

![](images/d277cf35b9594fe088ee632e8b2045570706bc5cceb5c839ad2dc74e7a79329c.jpg)

![](images/4bff2594e370b4e364fe2ba401a1387d997300be503fdeacaf6152a5b71b9086.jpg)
Figure 1. The effect of sample size per training class on few-shot classification performance. We use all 1000 classes of the training set of ImageNet for training. Both axes are logit-scaled. ImageNet-val means conducting few-shot classification on the original validation set of ImageNet. The average performance is obtained by averaging performance on 9 datasets excluding ImageNet-val. Best viewed in color.
![](images/53e118f2d312c80cfc64881a06fa01e74c702cff0b6b98deb75b3822c83ceece.jpg)

![](images/7b4c91328c43a8e8772eb47df0cbcfb8e6652650e652b9b3c45b41da04c9c.jpg)

![](images/bb56815b2be76c2295a141f6c65a0a5515f2eadc7793207ff1e34ebf81919403.jpg)

![](images/c46a2841de1c1e1f84ab57b63c7dfea671a572684f488ef4f312b2d9a64be614.jpg)

![](images/5aae46d0bc2f6a0999bbec9494406b135cb0501531f664cffce4f111386cf099.jpg)

![](images/9e436ae86a7b65f639dec7b7fc7691fa87380d41b84c090a635ff8beee1bfd08.jpg)

![](images/148dde8cb0dd1fd0a412c8d4c776159da6265ab7086dd0657ac5c04fcb1b759a.jpg)

![](images/0ab9375ef8cc1dc25d4661e59950ee1a4a09b57cd83e4ccfd7671d323dae7c30.jpg)

![](images/a467f35fff7a6c03c06d97909759a9001dbb952a989963ff7556a16a999a2e7b.jpg)

![](images/ecdfd5a6c4993f73416460f7cd38c54a5399cd109a86c1b1fc99590840da33d5.jpg)

![](images/a5ff50deac6aecdf650096d611788931b6cf4e970ec561b5eeedee2a25788014.jpg)

![](images/34c376367a11a77365b9313b244f9cab8aa823e6b77a234411165ea442654795.jpg)

![](images/84eebc1d43ef1b52621dae296f8fef88dc0c9251d75b05e0ad1b2f275f581e57.jpg)

![](images/f0b576b6e3109c9235739d64a2ca1ce5be99427d99a3222125968799e5d47e34.jpg)

![](images/583027e36f07a68e6884980a431b6f1f072930708830eebc2806f7ecd326ea40.jpg)

![](images/b1be1d818ba3b17ed5f900c2d7b91b72303911ea65172d1e60fa2a1a1acdc3a4.jpg)
Figure 2. The effect of the number of training classes on few-shot classification performance. For each randomly selected class in ImageNet, we use all of its samples from the training set for training. Both axes are logit-scaled. Best viewed in color.

we plot the linear fit, which verifies our observations. In fact, the Pearson coefficient between the log of the number of training classes and the log of the average test error is $-0.999$ for CE and $-0.995$ for PN, showing strong evidence of linearity.
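The log-log linear fit and Pearson coefficient just described can be reproduced with a short sketch; the numbers in the usage below are synthetic, not the paper's measurements:

```python
import numpy as np

def power_law_fit(n_classes, test_error):
    """Fit test_error ~ a * n_classes**k by least squares in log-log space,
    returning the exponent k, the prefactor a, and the Pearson coefficient
    of the logged quantities (close to -1 when the power law holds)."""
    x, y = np.log(n_classes), np.log(test_error)
    k, log_a = np.polyfit(x, y, deg=1)   # slope of the log-log line is k
    r = np.corrcoef(x, y)[0, 1]
    return k, np.exp(log_a), r
```

On exact power-law data, e.g. `power_law_fit([16, 64, 256, 1024], 0.9 * np.array([16, 64, 256, 1024.]) ** -0.3)`, the recovered exponent is $-0.3$ and the Pearson coefficient is $-1$.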
This linearity indicates the existence of a form of neural scaling laws in few-shot classification: test error falls off as a power law with the number of training classes. This is different from neural scaling laws observed in other machine learning tasks (Hestness et al., 2017; Zhai et al., 2022; Kaplan et al., 2020), in which test error falls off as a power law with the number of training samples per class. Such a difference reveals the intrinsic difference between few-shot classification and other tasks: while seeing more samples in a training class does help in identifying new samples in the same class, it may not help that much in identifying previously-unseen classes in a new task. On the other hand, seeing more classes may help the model learn more potentially useful knowledge that might help distinguish new classes.

Bigger is not necessarily better. On most evaluated datasets, test error decreases with more training samples/classes. However, on Omniglot and ISIC (shown in Figures 8-9), the error first goes down and then goes up, especially for supervised models. In contrast, previous work (Snell et al., 2017) has shown that a simple PN model, both trained and evaluated on Omniglot (class-separated), can easily obtain a near-zero error. This indicates that as the number of training samples/classes increases, there exists a progressively larger mismatch between the knowledge learned from ImageNet and the knowledge needed for distinguishing new classes in these two datasets. Thus training a large model on a big dataset that can solve every possible task well is not a realistic hope, unless the training dataset already contains all possible tasks. How to choose a part of the training dataset to train a model on, or how to select positive/useful knowledge from a learned model given only a small amount of data from the specified adaptation scenario, is an important research direction in few-shot classification.

CE training scales better.
As seen from both figures, PN and MoCo perform comparably to CE on small-scale training data, but as more training data comes in, the gap gradually widens. Considering that all algorithms are fed the same amount of data during training, we can infer that CE training indeed scales better than PN and MoCo. The trend is more pronounced on fine-grained datasets, including Aircraft, Birds, Fungi, and VGG Flower. While this phenomenon needs further investigation, we speculate it arises because CE simultaneously differentiates all classes during training, which requires distinguishing all possible fine-grained classes. In contrast, meta-learning algorithms like PN typically need to distinguish only a limited

![](images/d3a3a58b4791aef16c24c7b8c2ea90a9e4b8f81ceb578696421232c1587214b7.jpg)
Figure 3. For supervised models, ImageNet performance is not a good predictor of few-shot classification performance. Each point in a plot refers to a supervised CE model with a specific backbone architecture. Both axes are logit-scaled.

![](images/27e0d4f360b15bcadd4e09bec652717a621f4aaa90d8ec495cee16e1ddc788e9.jpg)
Figure 4. For self-supervised models, ImageNet performance is a good predictor of few-shot classification performance. Each point in a plot refers to a self-supervised model with a specific training algorithm/architecture. Both axes are logit-scaled. The regression line and a $95\%$ confidence interval are plotted in blue. "r" refers to the correlation coefficient between the two axes of data.

number of classes during each iteration, and self-supervised models like MoCo do not use labels, thus focusing more on global information in images (Zhao et al., 2021) and performing poorly on fine-grained datasets. We leave it to future work to verify whether this conjecture holds generally.

# 4.2.
ImageNet Performance vs Few-shot Performance

We then fix the scale of the training dataset and investigate how changes in training algorithms and network architectures influence few-shot performance. We pay particular attention to CE-trained and self-supervised models due to their superior performance in Table 1. Previous studies have revealed that the standard ImageNet performance of CE models trained on ImageNet is a strong predictor (with a linear relationship) of their performance on a range of vision tasks, including transfer learning (Kornblith et al., 2019), open-set recognition (Vaze et al., 2022), and domain generalization (Taori et al., 2020). We here ask whether this observation also holds for few-shot classification. If it does, we could improve few-shot classification on benchmarks that use ImageNet as the training dataset, such as Meta-Dataset, simply by waiting for state-of-the-art ImageNet models. To this end, we test 36 pre-trained supervised CE models with different network architectures, including VGG (Simonyan & Zisserman, 2015), ResNet (He et al., 2016), MobileNet (Howard et al., 2017), RegNet (Radosavovic et al., 2020), DenseNet (Huang et al., 2017), ViT (Dosovitskiy et al., 2021), Swin Transformer (Liu et al., 2021b) and ConvNext (Liu et al., 2022). We also test 32 self-supervised ImageNet pre-trained models with different training algorithms and architectures. The algorithms include MoCo-v2 (Chen et al., 2020), MoCo-v3 (Chen et al., 2021a), InstDisc (Wu et al., 2018), BYOL (Grill et al., 2020), SwAV (Caron et al., 2020), OBoW (Gidaris et al., 2021), SimSiam (Chen & He, 2021), Barlow Twins (Zbontar et al., 2021), DINO (Caron et al., 2021), MAE (He et al., 2022), iBOT (Zhou et al., 2022) and EsViT (Li et al., 2022a). We use KNN (Caron et al., 2021) to compute top-1 accuracy for these self-supervised models. We plot results for supervised models in Figure 3 and for self-supervised models in Figure 4.
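As a concrete illustration of this kind of k-NN evaluation on frozen features, the sketch below scores embeddings with a cosine-similarity k-nearest-neighbor classifier. It is a simplified, unweighted variant (the protocol of Caron et al. (2021) additionally weights neighbors by similarity), and all names are ours.

```python
import numpy as np

def knn_top1(train_feats, train_labels, test_feats, test_labels, k=20):
    """Top-1 accuracy of a cosine-similarity k-NN classifier on frozen features.

    train_feats/test_feats: (n, d) embeddings; labels: (n,) integer arrays."""
    tr = train_feats / np.linalg.norm(train_feats, axis=1, keepdims=True)
    te = test_feats / np.linalg.norm(test_feats, axis=1, keepdims=True)
    sims = te @ tr.T                                   # cosine similarities
    nn = np.argsort(-sims, axis=1)[:, :k]              # indices of k nearest
    preds = [np.bincount(train_labels[idx]).argmax() for idx in nn]
    return float(np.mean(np.asarray(preds) == test_labels))
```

Because the backbone stays frozen, this metric isolates representation quality from classifier training, which is what makes it a convenient proxy for self-supervised models.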
![](images/d9c39438484b052ce526adeae874c3b0a477c8fdd88b8c85bc7d2909efd32ba4.jpg)
Figure 5. Way and shot experiments of adaptation algorithms on ImageNet and Quick Draw. For the shot experiment, we fix the number of ways to 5 and show test error; for the way experiment, we fix the number of shots to 5 and show test accuracy. Both axes are logit-scaled. Best viewed in color.

Supervised ImageNet models overfit to ImageNet performance. For supervised models, Figure 3 shows that on most datasets sufficiently different from ImageNet, such as Aircraft, Birds, Textures, Fungi, and VGG Flower, the test error of few-shot classification first decreases and then increases as ImageNet performance improves. The critical point is at about $23\%$ top-1 error on ImageNet, which was the best ImageNet performance in 2017 (e.g., DenseNet (Huang et al., 2017)). This indicates that recent years of improvement in ImageNet classification overfit to ImageNet itself when the downstream task is few-shot classification. We also observe that on datasets like Quick Draw, Traffic Signs, and Omniglot, there is no clear relationship between ImageNet performance and few-shot performance. Since supervised ImageNet performance is usually a strong predictor of performance on other challenging vision tasks, few-shot classification stands out as a special task that requires a different and much stronger generalization ability. Identifying the reasons behind this difference may lead to a deeper understanding of both few-shot classification and visual representation learning.

ImageNet performance is a good predictor of few-shot performance for self-supervised models.
Different from supervised models, for self-supervised models we observe a clear positive correlation between ImageNet performance and few-shot classification performance. The best self-supervised model obtains only $77\%$ top-1 accuracy on ImageNet, yet more than $83\%$ average few-shot performance, outperforming all evaluated supervised models. Self-supervised algorithms thus indeed generalize better, and the few-shot learning community should pay more attention to progress in self-supervised learning.

# 5. Adaptation Analysis

In this section, we fix the training algorithm to the CE model trained on miniImageNet and analyze adaptation algorithms.

# 5.1. Way and Shot Analysis

Ways and shots are important variables during the adaptation phase of few-shot classification. For the first time, we analyze how the performance of different adaptation algorithms varies under different choices of ways and shots, with the training algorithm unchanged. For this experiment, we choose ImageNet and Quick Draw as the evaluated datasets, because they have enough classes and images per class to sample from and are representative of in-domain and out-of-domain datasets, respectively. For ImageNet, we remove all classes that appear in miniImageNet.

Neural scaling laws for adaptation. We notice that for Logistic Regression, Finetune, and MetaOPT, the performance curves approximate straight lines when varying the number of shots. This indicates that, with respect to the scale of the adaptation dataset, the classification error obeys the traditional neural scaling laws (different from what we found for the scale of the training dataset in Section 4.1). While this seems reasonable for Finetune, we found it surprising for Logistic Regression and MetaOpt, which are linear algorithms (for adaptation) built upon frozen features and are thus expected to reach performance saturation quickly.
This reveals that even for small-scale models trained on miniImageNet, the learned features are still quite linearly separable for new tasks. However, the growth rates of the algorithms differ, indicating that they differ in their capability to scale.

Backbone adaptation is preferred for high-way, high-shot, or cross-domain tasks. As seen from Figure 5, while Finetune and the partial-finetune algorithm TSA do not significantly outperform other algorithms on 1-shot and 5-shot tasks on ImageNet, their advantage grows as shots or ways increase, or when the task is conducted on Quick Draw. Thus we can infer that backbone adaptation is preferred when the data scale is large enough to avoid overfitting, or when the domain shift is so large that the learned feature space deforms on the new domain.

Query-support matching algorithms scale poorly. Query-support matching algorithms like TSA, MatchingNet, NCC, and URL obtain query predictions by comparing the similarities of query features with support features $^2$ , different

Table 2. The average support set size and the degree of task distribution shift of tasks from each dataset on three benchmarks. The metric measuring the degree of task distribution shift is defined by the deviation of feature importance; see Table 3 of Luo et al. (2022) for details.
| Benchmark | Dataset | Mean support set size | Task distribution shift |
| --- | --- | --- | --- |
| miniImageNet | miniImageNet | 5 or 25 | 0.056 |
| BSCD-FSL | ChestX | 25, 100, or 250 | 0.186 |
| BSCD-FSL | ISIC | 25, 100, or 250 | 0.205 |
| BSCD-FSL | EuroSAT | 25, 100, or 250 | 0.153 |
| BSCD-FSL | CropDisease | 25, 100, or 250 | 0.101 |
| Meta-Dataset | ILSVRC | 374.5 | 0.054 |
| Meta-Dataset | Omniglot | 88.5 | 0.116 |
| Meta-Dataset | Aircraft | 337.6 | 0.097 |
| Meta-Dataset | Birds | 316.0 | 0.117 |
| Meta-Dataset | Textures | 287.3 | 0.100 |
| Meta-Dataset | Quick Draw | 425.2 | 0.106 |
| Meta-Dataset | Fungi | 361.9 | 0.080 |
| Meta-Dataset | VGG Flower | 292.5 | 0.096 |
| Meta-Dataset | Traffic Sign | 421.2 | 0.150 |
| Meta-Dataset | MSCOCO | 416.1 | 0.083 |
from other algorithms, which learn a classifier from the support set directly. As observed in Figure 5, all these algorithms perform well at 1 or 5 shots, but scale worse than a power law as the number of shots increases, except for TSA on Quick Draw, where backbone adaptation is much preferred. Considering that URL (a flexible, learnable linear head) and TSA (a partial-finetune algorithm) have enough capacity for adaptation, their failure to scale well, especially on ImageNet, indicates that the objectives of query-support matching algorithms face fundamental optimization difficulties during adaptation as the data scale increases.

# 5.2. Analysis of Finetune

As seen from Table 1 and Figure 5, the vanilla Finetune algorithm always performs best, even when evaluated on in-domain tasks with extremely scarce data. In particular, we have shown that recent partial-finetune algorithms such as TSA and eTT, which are designed to overcome this problem, both underperform vanilla Finetune. This is quite surprising, since the initial Meta-Dataset benchmark (Triantafillou et al., 2020) showed that vanilla Finetune suffers severe overfitting when data is extremely scarce.

The reasons lie in two aspects. First, in the original Meta-Dataset paper, training and adaptation algorithms are bound together, so different adaptation algorithms use different backbones, making comparisons unfair. This problem is later amplified in the TSA and eTT papers, which use strong backbones for their own adaptation algorithms while copying the original Finetune results from the benchmark. Second, previous works typically search for a single learning rate for Finetune. We found it important to search separately for the learning rates of the backbone and the linear head. This simple change leads to a considerable performance improvement, as shown in Figure 6. We found that the optimal learning rate of the backbone is typically much smaller than that of the linear head.
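The separate-learning-rate trick amounts to one SGD update with two rates. The sketch below uses hypothetical parameter names (`backbone.*`, `head.*`) and made-up default rates; in PyTorch the same split is expressed via optimizer parameter groups, e.g. `torch.optim.SGD([{"params": backbone_params, "lr": 1e-4}, {"params": head_params, "lr": 1e-2}])`.

```python
import numpy as np

def finetune_step(params, grads, lr_backbone=1e-4, lr_head=1e-2):
    """One SGD update with separate learning rates for backbone and head.

    `params`/`grads` map hypothetical parameter names to arrays; any name
    prefixed with 'backbone.' gets the (much smaller) backbone rate, the
    rest get the linear-head rate."""
    updated = {}
    for name, p in params.items():
        lr = lr_backbone if name.startswith("backbone.") else lr_head
        updated[name] = p - lr * grads[name]
    return updated
```

Searching the two rates on a small grid, rather than a single shared rate, is the only change this requires in a standard finetuning loop.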
We also ask what the critical factor is that makes Finetune effective. In Figure 7, we show how the relative improvement of Finetune over PN changes as we increase the total number of samples in the support set (way $\times$ shot). The relative improvement is quite close for all choices of ways, as long as the support set size does not change. Thus support set size is crucial for Finetune to be effective, which aligns with our intuition that the backbone can be adjusted

![](images/0842f9da8b2a979f9a155f6154fe9fb2808ed54495633b7e4389b27315979e51.jpg)
Figure 6. Comparison of using a shared versus separate learning rates for the backbone and linear head during the finetune process.

![](images/9910a15b1cb424e3be3392dbd67bd3f15c2b78c1bf6a709a799d40ec57ccdd87.jpg)

![](images/8e521c70061031326bdb5ec4ee957527abb79ff241e77c1f489772951b62b105.jpg)
Figure 7. The advantage of Finetune increases as a function of the total number of samples in the support set.

more properly when seeing more data.

After analyzing the effectiveness of Finetune, we can now answer a question: why do the state-of-the-art algorithms on traditional benchmarks like CIFAR, miniImageNet, and tieredImageNet not adapt the learned backbone during adaptation, while on benchmarks like BSCD-FSL and Meta-Dataset model adaptation has become popular? As seen from Table 2, miniImageNet (similarly for CIFAR and tieredImageNet) has a small support set size of 5 or 25 and a small distribution shift from training to test datasets, while BSCD-FSL and Meta-Dataset have $10\mathrm{x}$ larger support set sizes and encompass datasets with extremely large distribution shifts. Thus, according to our analysis, backbone adaptation algorithms such as Finetune have no advantage on benchmarks like miniImageNet, especially when the learning rates are not separated; on BSCD-FSL and Meta-Dataset, the backbone needs adaptation towards new domains, and abundant support samples make this possible.
To avoid biased assessment, we recommend that the community, besides reporting standard benchmark results, also report performance under specific ways and shots on datasets with different degrees of distribution shift.

# 6. Related Work

As an active research field, few-shot learning is considered a critical step towards building efficient and brain-like machines (Lake et al., 2017). Meta-learning (Thrun & Pratt, 1998; Schmidhuber, 1987; Naik & Mammone, 1992) was thought to be an ideal framework for approaching this goal. Under this framework, methods can be roughly split into three branches: optimization-based, black-box, and metric-based methods. Optimization-based methods, mainly originating from MAML (Finn et al., 2017), learn how to optimize a neural network given a few training samples. Variants in this direction meta-learn different parts of the optimization, including the model initialization point (Finn et al., 2017; Rusu et al., 2019; Rajeswaran et al., 2019; Zintgraf et al., 2019; Jamal & Qi, 2019; Lee et al., 2019), the optimization process (Ravi & Larochelle, 2017; Xu et al., 2020; Munkhdalai & Yu, 2017; Li et al., 2017), or both (Baik et al., 2020; Park & Oliva, 2019). Black-box methods (Santoro et al., 2016; Garnelo et al., 2018; Mishra et al., 2018; Requeima et al., 2019) directly model the learning process as a neural network without explicit inductive bias. Metric-based methods (Vinyals et al., 2016; Snell et al., 2017; Sung et al., 2018; Yoon et al., 2019; Zhang et al., 2020) meta-learn a feature extractor that produces a well-shaped feature space equipped with a pre-defined metric. In the context of few-shot image classification, most state-of-the-art meta-learning methods are metric-based or optimization-based.
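As a concrete illustration of the metric-based family, the sketch below implements prototype-based classification in the style of Prototypical Networks (Snell et al., 2017): each class is represented by the mean of its support features, and a query is assigned to the nearest prototype under Euclidean distance. Feature extraction is abstracted away; inputs are assumed to be precomputed embeddings, and the function name is ours.

```python
import numpy as np

def prototype_predict(support_feats, support_labels, query_feats):
    """Classify query embeddings by nearest class prototype (Euclidean metric).

    support_feats: (n_support, d) embeddings; support_labels: (n_support,) ints;
    query_feats: (n_query, d) embeddings; returns (n_query,) predicted labels."""
    classes = np.unique(support_labels)
    protos = np.stack([support_feats[support_labels == c].mean(axis=0)
                       for c in classes])               # (n_way, d) class means
    d2 = ((query_feats[:, None, :] - protos[None, :, :]) ** 2).sum(-1)
    return classes[np.argmin(d2, axis=1)]               # nearest prototype wins
```

During meta-training, the feature extractor is optimized so that such prototype decisions are accurate on sampled episodes; at adaptation time the rule needs no gradient steps at all.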
Recently, a number of non-meta-learning methods that use supervised (Chen et al., 2019; Tian et al., 2020; Dhillon et al., 2020; Triantafillou et al., 2021; Li et al., 2021) or unsupervised representation learning (Rizve et al., 2021; Doersch et al., 2020; Das et al., 2022; Hu et al., 2022; Xu et al., 2022a) to train a feature extractor have emerged to tackle few-shot image classification. In addition, a number of meta-learning methods (Chen et al., 2021b; Zhang et al., 2020; Hu et al., 2022; Ye et al., 2020) learn a model initialized from a pre-trained backbone (our experiments also consider pretrain+meta-learning training algorithms such as Meta-Baseline, DeepEMD, and FEAT, so our conclusions hold generally). Since these methods do not strictly follow the meta-learning framework, the training algorithm need not have any relationship with the adaptation algorithm, and they have been found to be simpler and more efficient than meta-learning methods while achieving better performance. Following this line, our paper further reveals that the training and adaptation phases in few-shot image classification are completely disentangled.

One relevant work (Sbai et al., 2020) also gives a detailed and comprehensive analysis of few-shot learning, especially of the training process. Our study complements this work in several ways: (1) the neural scaling laws that we found have not been discovered before, which demonstrates the importance of the number of classes in few-shot learning. Although Sbai et al. (2020) also discuss the significance of the number of classes from different perspectives, they draw no clear conclusions, and we thus complement their studies; (2) we observed that larger datasets may lead to degraded performance on specific downstream datasets, both when increasing the number of classes and the number of samples per class. Such findings were not present in Sbai et al.
(2020), and hence our study opens new avenues for future research by inspecting specific datasets; (3) there is no clear evidence in Sbai et al. (2020) that simple supervised training scales better than other types of training algorithms; (4) moreover, our paper evaluates 18 datasets, going beyond ImageNet and CUB, the only ones studied in Sbai et al. (2020). Thus, our study provides a broader perspective and complements their analysis.

# 7. Discussion

One lesson learned from our analysis is that training by merely scaling models and datasets is not a one-size-fits-all solution. Either the design of the training objective should consider what the adaptation dataset is (instead of the adaptation algorithm), or the adaptation algorithm should select the relevant training knowledge. The former approach limits the trained model to a specific target domain, while the latter cannot easily be realized when only a few labeled samples are provided in the target task, which makes knowledge selection difficult or even impossible due to biased distribution estimation (Luo et al., 2022; Xu et al., 2022b). More effort should be put into aligning the knowledge acquired during training with the knowledge needed during adaptation. Although we have shown that vanilla Finetune performs very well, we believe that such a brute-force, non-selective model adaptation algorithm is not the final solution; it also has other drawbacks, such as an extremely high adaptation cost, as shown in Appendix D. Viewed from another perspective, our work points to the plausibility of using few-shot classification as a tool to better understand key aspects of general visual representation learning.

# Acknowledgements

Special thanks to Qi Yong for providing indispensable spiritual support for this work. We also thank all reviewers for constructive comments that helped us improve the paper.
This work is supported by National Key Research and Development Program of China (No. 2018AAA0102200), and the National Natural Science Foundation of China (Grant No. 62122018, No. U22A2097, No. 62020106008, No. 61872064). + +# References + +Abnar, S., Dehghani, M., Neyshabur, B., and Sedghi, H. Exploring the limits of large scale pre-training. In ICLR, 2022. +Baik, S., Choi, M., Choi, J., Kim, H., and Lee, K. M. Meta-learning with adaptive hyperparameters. In NeurIPS, 2020. +Bateni, P., Goyal, R., Masrani, V., Wood, F., and Sigal, L. Improved few-shot visual classification. In CVPR, 2020. +Bronskill, J., Gordon, J., Requeima, J., Nowozin, S., and Turner, R. Tasknorm: Rethinking batch normalization for meta-learning. In ICML, 2020. +Caron, M., Misra, I., Mairal, J., Goyal, P., Bojanowski, P., and Joulin, A. Unsupervised learning of visual features by contrasting cluster assignments. In NeurIPS, 2020. +Caron, M., Touvron, H., Misra, I., Jégou, H., Mairal, J., Bojanowski, P., and Joulin, A. Emerging properties in self-supervised vision transformers. In ICCV, 2021. +Chen, W., Liu, Y., Kira, Z., Wang, Y. F., and Huang, J. A closer look at few-shot classification. In ICLR, 2019. +Chen, X. and He, K. Exploring simple siamese representation learning. In CVPR, 2021. +Chen, X., Fan, H., Girshick, R., and He, K. Improved baselines with momentum contrastive learning. arXiv preprint arXiv:2003.04297, 2020. +Chen, X., Xie, S., and He, K. An empirical study of training self-supervised vision transformers. In ICCV, 2021a. +Chen, Y., Liu, Z., Xu, H., Darrell, T., and Wang, X. Meta-baseline: Exploring simple meta-learning for few-shot learning. In ICCV, 2021b. +Das, D., Yun, S., and Porikli, F. Confess: A framework for single source cross-domain few-shot learning. In ICLR, 2022. +Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., and Fei-Fei, L. Imagenet: A large-scale hierarchical image database. In CVPR, 2009. +Dhillon, G. S., Chaudhari, P., Ravichandran, A., and Soatto, S. 
A baseline for few-shot image classification. In ICLR, 2020.
Doersch, C., Gupta, A., and Zisserman, A. Crosstransformers: spatially-aware few-shot transfer. In NeurIPS, 2020.

Dosovitskiy, A., Beyer, L., Kolesnikov, A., Weissenborn, D., Zhai, X., Unterthiner, T., Dehghani, M., Minderer, M., Heigold, G., Gelly, S., Uszkoreit, J., and Houlsby, N. An image is worth 16x16 words: Transformers for image recognition at scale. In ICLR, 2021.
Dumoulin, V., Houlsby, N., Evci, U., Zhai, X., Goroshin, R., Gelly, S., and Larochelle, H. A unified few-shot classification benchmark to compare transfer and meta learning approaches. In Thirty-fifth Conference on Neural Information Processing Systems Datasets and Benchmarks Track (Round 1), 2021.
Entezari, R., Wortsman, M., Saukh, O., Shariatnia, M. M., Sedghi, H., and Schmidt, L. The role of pre-training data in transfer learning. arXiv preprint arXiv:2302.13602, 2023.
Fei-Fei, L., Fergus, R., and Perona, P. One-shot learning of object categories. IEEE TPAMI, 2006.
Finn, C., Abbeel, P., and Levine, S. Model-agnostic meta-learning for fast adaptation of deep networks. In ICML, 2017.
Garnelo, M., Rosenbaum, D., Maddison, C., Ramalho, T., Saxton, D., Shanahan, M., Teh, Y. W., Rezende, D., and Eslami, S. A. Conditional neural processes. In ICML, 2018.
Gidaris, S., Bursuc, A., Puy, G., Komodakis, N., Cord, M., and Perez, P. Obow: Online bag-of-visual-words generation for self-supervised learning. In CVPR, 2021.
Grill, J.-B., Strub, F., Altché, F., Tallec, C., Richemond, P., Buchatskaya, E., Doersch, C., Avila Pires, B., Guo, Z., Gheshlaghi Azar, M., et al. Bootstrap your own latent-a new approach to self-supervised learning. In NeurIPS, 2020.
Guo, Y., Codella, N. C., Karlinsky, L., Codella, J. V., Smith, J. R., Saenko, K., Rosing, T., and Feris, R. A broader study of cross-domain few-shot learning. In ECCV, 2020.
He, K., Zhang, X., Ren, S., and Sun, J. Deep residual learning for image recognition. In CVPR, 2016.
+He, K., Fan, H., Wu, Y., Xie, S., and Girshick, R. Momentum contrast for unsupervised visual representation learning. In CVPR, 2020. +He, K., Chen, X., Xie, S., Li, Y., Dollar, P., and Girshick, R. Masked autoencoders are scalable vision learners. In CVPR, 2022. +He, L., Chen, Y., Dong, Y., Wang, Y., Lin, Z., et al. Efficient equivariant network. NeurIPS, 2021. + +He, L., Chen, Y., Shen, Z., Yang, Y., and Lin, Z. Neural ePDOs: Spatially adaptive equivariant partial differential operator based networks. In ICLR, 2023. +Hestness, J., Narang, S., Ardalani, N., Diamos, G., Jun, H., Kianinejad, H., Patwary, M., Ali, M., Yang, Y., and Zhou, Y. Deep learning scaling is predictable, empirically. arXiv preprint arXiv:1712.00409, 2017. +Howard, A. G., Zhu, M., Chen, B., Kalenichenko, D., Wang, W., Weyand, T., Andreetto, M., and Adam, H. Mobilenets: Efficient convolutional neural networks for mobile vision applications. arXiv preprint arXiv:1704.04861, 2017. +Hu, S. X., Li, D., Stuhmer, J., Kim, M., and Hospedales, T. M. Pushing the limits of simple pipelines for few-shot learning: External data and fine-tuning make a difference. In CVPR, 2022. +Huang, G., Liu, Z., Van Der Maaten, L., and Weinberger, K. Q. Densely connected convolutional networks. In CVPR, 2017. +Jamal, M. A. and Qi, G.-J. Task agnostic meta-learning for few-shot learning. In CVPR, 2019. +Kaplan, J., McCandlish, S., Henighan, T., Brown, T. B., Chess, B., Child, R., Gray, S., Radford, A., Wu, J., and Amodei, D. Scaling laws for neural language models. arXiv preprint arXiv:2001.08361, 2020. +Kolesnikov, A., Beyer, L., Zhai, X., Puigcerver, J., Yung, J., Gelly, S., and Houlsby, N. Big transfer (bit): General visual representation learning. In ECCV. Springer, 2020. +Kornblith, S., Shlens, J., and Le, Q. V. Do better imagenet models transfer better? In CVPR, 2019. +Krizhevsky, A., Sutskever, I., and Hinton, G. E. Imagenet classification with deep convolutional neural networks. In NeurIPS, 2012. +Lake, B. 
M., Ullman, T. D., Tenenbaum, J. B., and Gershman, S. J. Building machines that learn and think like people. Behavioral and brain sciences, 2017. +Lee, K., Maji, S., Ravichandran, A., and Soatto, S. Metalearning with differentiable convex optimization. In CVPR, 2019. +Li, C., Yang, J., Zhang, P., Gao, M., Xiao, B., Dai, X., Yuan, L., and Gao, J. Efficient self-supervised vision transformers for representation learning. In ICLR, 2022a. +Li, W., Liu, X., and Bilen, H. Universal representation learning from multiple domains for few-shot classification. In ICCV, 2021. + +Li, W.-H., Liu, X., and Bilen, H. Cross-domain few-shot learning with task-specific adapters. In CVPR, 2022b. +Li, Z., Zhou, F., Chen, F., and Li, H. Meta-sgd: Learning to learn quickly for few-shot learning. arXiv preprint arXiv:1707.09835, 2017. +Liu, Y., Lee, J., Zhu, L., Chen, L., Shi, H., and Yang, Y. A multi-mode modulator for multi-domain few-shot classification. In ICCV, 2021a. +Liu, Z., Lin, Y., Cao, Y., Hu, H., Wei, Y., Zhang, Z., Lin, S., and Guo, B. Swin transformer: Hierarchical vision transformer using shifted windows. In ICCV, 2021b. +Liu, Z., Mao, H., Wu, C.-Y., Feichtenhofer, C., Darrell, T., and Xie, S. A convnet for the 2020s. In CVPR, 2022. +Luo, X., Wei, L., Wen, L., Yang, J., Xie, L., Xu, Z., and Tian, Q. Rectifying the shortcut learning of background for few-shot learning. In NeurIPS, 2021. +Luo, X., Xu, J., and Xu, Z. Channel importance matters in few-shot image classification. In ICML, 2022. +Mangla, P., Kumari, N., Sinha, A., Singh, M., Krishnamurthy, B., and Balasubramanian, V. N. Charting the right manifold: Manifold mixup for few-shot learning. In WACV, 2020. +Mishra, N., Rohaninejad, M., Chen, X., and Abbeel, P. A simple neural attentive meta-learner. In ICLR, 2018. +Munkhdalai, T. and Yu, H. Meta networks. In ICML, 2017. +Naik, D. K. and Mammone, R. J. Meta-neural networks that learn by learning. In IJCNN, 1992. +Oreshkin, B., Rodríguez López, P., and Lacoste, A. 
Tadam: Task dependent adaptive metric for improved few-shot learning. In NeurIPS, 2018.
Park, E. and Oliva, J. B. Meta-curvature. In NeurIPS, 2019.
Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al. Pytorch: An imperative style, high-performance deep learning library. In NeurIPS, 2019.
Patacchiola, M., Bronskill, J., Shysheya, A., Hofmann, K., Nowozin, S., and Turner, R. E. Contextual squeeze-and-excitation for efficient few-shot image classification. In NeurIPS, 2022.
Radford, A., Kim, J. W., Hallacy, C., Ramesh, A., Goh, G., Agarwal, S., Sastry, G., Askell, A., Mishkin, P., Clark, J., et al. Learning transferable visual models from natural language supervision. In ICML, 2021.

Radosavovic, I., Kosaraju, R. P., Girshick, R., He, K., and Dollár, P. Designing network design spaces. In CVPR, 2020.
Rajeswaran, A., Finn, C., Kakade, S. M., and Levine, S. Meta-learning with implicit gradients. In NeurIPS, 2019.
Ravi, S. and Larochelle, H. Optimization as a model for few-shot learning. In ICLR, 2017.
Ren, M., Triantafillou, E., Ravi, S., Snell, J., Swersky, K., Tenenbaum, J. B., Larochelle, H., and Zemel, R. S. Meta-learning for semi-supervised few-shot classification. In ICLR, 2018.
Requeima, J., Gordon, J., Bronskill, J., Nowozin, S., and Turner, R. E. Fast and flexible multi-task classification using conditional neural adaptive processes. In NeurIPS, 2019.
Rizve, M. N., Khan, S. H., Khan, F. S., and Shah, M. Exploring complementary strengths of invariant and equivariant representations for few-shot learning. In CVPR, 2021.
Rusu, A. A., Rao, D., Sygnowski, J., Vinyals, O., Pascanu, R., Osindero, S., and Hadsell, R. Meta-learning with latent embedding optimization. In ICLR, 2019.
Santoro, A., Bartunov, S., Botvinick, M., Wierstra, D., and Lillicrap, T. Meta-learning with memory-augmented neural networks. In ICML, 2016.
Sbai, O., Couprie, C., and Aubry, M.
Impact of base dataset design on few-shot image classification. In ECCV, 2020.
Schmidhuber, J. Evolutionary principles in self-referential learning, or on learning how to learn: the meta-meta... hook. PhD thesis, Technische Universität München, 1987.
Shysheya, A., Bronskill, J. F., Patacchiola, M., Nowozin, S., and Turner, R. E. Fit: Parameter efficient few-shot transfer learning for personalized and federated image classification. In ICLR, 2023.
Simonyan, K. and Zisserman, A. Very deep convolutional networks for large-scale image recognition. In ICLR, 2015.
Snell, J., Swersky, K., and Zemel, R. Prototypical networks for few-shot learning. In NeurIPS, 2017.
Sun, C., Shrivastava, A., Singh, S., and Gupta, A. Revisiting unreasonable effectiveness of data in deep learning era. In ICCV, 2017.
Sung, F., Yang, Y., Zhang, L., Xiang, T., Torr, P. H., and Hospedales, T. M. Learning to compare: Relation network for few-shot learning. In CVPR, 2018.

Taori, R., Dave, A., Shankar, V., Carlini, N., Recht, B., and Schmidt, L. Measuring robustness to natural distribution shifts in image classification. In NeurIPS, 2020.
Thrun, S. and Pratt, L. Learning to learn: Introduction and overview. In Learning to learn. Springer, 1998.
Tian, Y., Wang, Y., Krishnan, D., Tenenbaum, J. B., and Isola, P. Rethinking few-shot image classification: A good embedding is all you need? In ECCV, 2020.
Touvron, H., Cord, M., Douze, M., Massa, F., Sablayrolles, A., and Jégou, H. Training data-efficient image transformers & distillation through attention. In ICML, 2021.
Triantafillou, E., Zhu, T., Dumoulin, V., Lamblin, P., Evci, U., Xu, K., Goroshin, R., Gelada, C., Swersky, K., Manzagol, P., and Larochelle, H. Meta-dataset: A dataset of datasets for learning to learn from few examples. In ICLR, 2020.
Triantafillou, E., Larochelle, H., Zemel, R. S., and Dumoulin, V. Learning a universal template for few-shot dataset generalization. In ICML, 2021.
+Vaze, S., Han, K., Vedaldi, A., and Zisserman, A. Open-set recognition: A good closed-set classifier is all you need. In ICLR, 2022. +Vinyals, O., Blundell, C., Lillicrap, T., Wierstra, D., et al. Matching networks for one shot learning. In NeurIPS, 2016. +Wu, Z., Xiong, Y., Yu, S. X., and Lin, D. Unsupervised feature learning via non-parametric instance discrimination. In CVPR, 2018. +Xu, C., Yang, S., Wang, Y., Wang, Z., Fu, Y., and Xue, X. Exploring efficient few-shot adaptation for vision transformers. Transactions on Machine Learning Research, 2022a. +Xu, J., Ton, J.-F., Kim, H., Kosiorek, A., and Teh, Y. W. Metafun: Meta-learning with iterative functional updates. In ICML, 2020. +Xu, J., Luo, X., Pan, X., Pei, W., Li, Y., and Xu, Z. Alleviating the sample selection bias in few-shot learning by removing projection to the centroid. In NeurIPS, 2022b. +Ye, H.-J., Hu, H., Zhan, D.-C., and Sha, F. Few-shot learning via embedding adaptation with set-to-set functions. In CVPR, 2020. +Yoon, S. W., Seo, J., and Moon, J. Tapnet: Neural network augmented with task-adaptive projection for few-shot learning. In ICML, 2019. + +Zbontar, J., Jing, L., Misra, I., LeCun, Y., and Deny, S. Barlow twins: Self-supervised learning via redundancy reduction. In ICML, 2021. +Zhai, X., Puigcerver, J., Kolesnikov, A., Ruyssen, P., Riquelme, C., Lucic, M., Djolonga, J., Pinto, A. S., Neumann, M., Dosovitskiy, A., et al. A large-scale study of representation learning with the visual task adaptation benchmark. arXiv preprint arXiv:1910.04867, 2019. +Zhai, X., Kolesnikov, A., Houlsby, N., and Beyer, L. Scaling vision transformers. In CVPR, 2022. +Zhang, C., Cai, Y., Lin, G., and Shen, C. Deepemd: Few-shot image classification with differentiable earth mover's distance and structured classifiers. In CVPR, 2020. +Zhang, J., Gao, L., Luo, X., Shen, H., and Song, J. Deta: Denoised task adaptation for few-shot learning. arXiv preprint arXiv:2303.06315, 2023. +Zhao, N., Wu, Z., Lau, R. W. 
H., and Lin, S. What makes instance discrimination good for transfer learning? In ICLR, 2021.

Zhou, J., Wei, C., Wang, H., Shen, W., Xie, C., Yuille, A., and Kong, T. ibot: Image bert pre-training with online tokenizer. In ICLR, 2022.

Zintgraf, L., Shiarli, K., Kurin, V., Hofmann, K., and Whiteson, S. Fast context adaptation via meta-learning. In ICML, 2019.

# A. Additional Related Work

Few-shot classification benchmarks. Earlier benchmarks in few-shot image classification focus on in-domain classification under the standard 5-way 1-shot and 5-shot settings, including miniImageNet (Vinyals et al., 2016), FC100 (Oreshkin et al., 2018) and tieredImageNet (Ren et al., 2018). The BSCD-FSL benchmark (Guo et al., 2020) targets a more realistic cross-domain setting and considers evaluation at higher shot counts such as 20 or 50. Meta-Dataset (Triantafillou et al., 2020) also targets cross-domain settings, but goes further and considers imbalanced classes and varying numbers of ways and shots. MD+VTAB (Dumoulin et al., 2021) further combines Meta-Dataset with VTAB (Zhai et al., 2019) from transfer learning, aiming to connect few-shot classification with general visual representation learning. Although all benchmarks evaluate a model's ability to quickly adapt to new few-shot classification tasks, state-of-the-art methods differ considerably across benchmarks. In this paper, through a fine-grained test-time analysis, we identify the reason behind this phenomenon.

Backbone adaptation in few-shot classification. MAML (Finn et al., 2017) is the first method that uses Finetune as the adaptation algorithm. However, all hyperparameters of Finetune are fixed before training and the backbone is weak, so MAML does not perform well. Later, Tadam (Oreshkin et al., 2018) introduces the first adaptation algorithm that partially adapts the backbone via a black-box meta-learning method.
The Baseline algorithm (Chen et al., 2019) is the first to combine non-meta-learning training with Finetune, and achieves surprisingly good results. Another baseline method (Dhillon et al., 2020) uses simple supervised training and Finetune, initializing the linear layer from feature prototypes. CNAPs (Requeima et al., 2019) is a partial-adaptation meta-learning algorithm that learns on the multiple datasets of Meta-Dataset and achieves SOTA results. After CNAPs came out, several works emerged that adapt the backbone on Meta-Dataset, either by finetuning or by partial backbone adaptation in the adaptation phase (Triantafillou et al., 2021; Li et al., 2022b; Xu et al., 2022a; Patacchiola et al., 2022; Liu et al., 2021a; Bateni et al., 2020; Shysheya et al., 2023; Zhang et al., 2023). Our paper reveals that the fall and subsequent rise in popularity of this line of research is related to biases in the evaluation protocols of benchmarks.

Connections of pre-trained models with downstream task performance. Kornblith et al. (2019) showed that ImageNet performance has a linear relationship with downstream transfer performance on classification tasks. Similar linear relationships were later discovered in domain generalization (Taori et al., 2020) and open-set recognition (Vaze et al., 2022). Abnar et al. (2022) questioned this result with large-scale experiments, showing that as upstream accuracy is pushed very high, downstream performance can saturate. Our experiments on Omniglot and ISIC further corroborate this observation, even when the training data is at a small scale. Recently, Entezari et al. (2023) found that the choice of pre-training data source is essential for few-shot classification, but that its role diminishes as more data becomes available for fine-tuning, which complements our study.

# B.
Details of Experiments

We reimplement some of the training algorithms in Table 1, including all PN models, all MAML models, CE models with Conv-4 and ResNet-12, MetaOpt, Meta-Baseline, COS, and IER. For all other training algorithms, we use existing checkpoints from official repositories or the PyTorch library (Paszke et al., 2019). All reimplemented models are trained for 60 epochs using SGD with momentum and cosine learning rate decay without restart. The initial learning rates are all set to 0.1. The training batch size is 4 for meta-learning models and 256 for non-meta-learning models. The input image size is $84 \times 84$ for Conv-4 and ResNet-12 models and $224 \times 224$ for other models. We use random crop and horizontal flip as data augmentation during training. Since some models like PN are trained on normalized features, for a fair comparison we normalize the features of all models in the adaptation phase.

For the experiments in Section 4.1, to make a fair comparison, we train CE and MoCo for 150 epochs and train PN for a number of iterations that equalizes the number of samples seen. SGD with momentum and cosine learning rate decay without restart is used. The backbone is ResNet-18. Learning rates are all set to 0.1. The training batch size is 4 for PN and 256 for CE and MoCo. The input image size is $84 \times 84$. During training, we use random crop and horizontal flip as data augmentation for CE and PN; for MoCo, we use the same set of data augmentations as in the MoCo-v2 paper (Chen et al., 2020). We repeat training 5 times with different samplings of data or classes for each experiment in Section 4.1. All pre-trained supervised models in Section 4.2 are from the PyTorch library, and all self-supervised models are from official repositories. To avoid memory issues, we use only 500,000 image features from the training set of ImageNet for the KNN computation of Top-1 ImageNet accuracy for self-supervised models in Section 4.2.
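The cosine learning-rate decay without restart used above is the standard single-cycle schedule $lr_t = \frac{1}{2} lr_0 (1 + \cos(\pi t / T))$. A minimal pure-Python sketch of this schedule with the 60-epoch, 0.1-initial-rate setting quoted above (illustrative only — our runs use the framework's built-in scheduler, not this code):

```python
import math

def cosine_lr(step: int, total_steps: int, base_lr: float = 0.1) -> float:
    """Cosine learning-rate decay without restart: one half-cosine from base_lr to 0."""
    assert 0 <= step <= total_steps
    return 0.5 * base_lr * (1.0 + math.cos(math.pi * step / total_steps))

# Per-epoch learning rates over a 60-epoch run with initial rate 0.1.
schedule = [cosine_lr(epoch, 60) for epoch in range(61)]
```

The schedule starts at the initial rate, passes through exactly half of it at the midpoint, and decays monotonically to zero with no warm restart.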
All training algorithms that are evaluated in this paper, including meta-learning algorithms, set the learnable parameters as

Table 3. 5-way 1-shot performance of pairwise combinations of a variety of training and adaptation algorithms on Meta-Dataset. We exclude MatchingNet from the adaptation algorithms because MatchingNet equals NCC when the shot is one.
(Columns MetaOpt through Finetune are adaptation algorithms.)

| Training algorithm | Training dataset | Architecture | MetaOpt | NCC | LR | URL | CC | TSA/eTT | Finetune |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| PN | miniImageNet | Conv-4 | 38.50±0.5 | 38.69±0.5 | 38.23±0.4 | 38.81±0.4 | 38.64±0.5 | 41.27±0.5 | 42.60±0.5 |
| MAML | miniImageNet | Conv-4 | 42.92±0.5 | 43.00±0.5 | 42.65±0.5 | 42.51±0.5 | 42.97±0.5 | 44.55±0.5 | 46.13±0.5 |
| CE | miniImageNet | Conv-4 | 44.49±0.5 | 44.88±0.5 | 44.88±0.5 | 44.48±0.5 | 44.82±0.5 | 46.20±0.5 | 46.92±0.5 |
| MatchingNet | miniImageNet | ResNet-12 | 45.00±0.5 | 45.23±0.5 | 45.24±0.5 | 44.89±0.5 | 45.40±0.5 | 46.18±0.5 | 48.53±0.5 |
| MAML | miniImageNet | ResNet-12 | 46.09±0.5 | 46.09±0.5 | 45.81±0.5 | 45.88±0.5 | 46.07±0.5 | 51.95±0.5 | 53.71±0.5 |
| PN | miniImageNet | ResNet-12 | 47.32±0.5 | 47.53±0.5 | 47.33±0.5 | 47.53±0.5 | 47.65±0.5 | 49.36±0.5 | 53.06±0.5 |
| MetaOpt | miniImageNet | ResNet-12 | 49.16±0.5 | 49.52±0.5 | 49.53±0.5 | 49.42±0.5 | 49.73±0.5 | 52.01±0.5 | 53.90±0.5 |
| CE | miniImageNet | ResNet-12 | 51.09±0.5 | 51.42±0.5 | 51.60±0.5 | 50.94±0.5 | 51.71±0.5 | 53.81±0.5 | 54.68±0.5 |
| Meta-Baseline | miniImageNet | ResNet-12 | 51.24±0.5 | 51.56±0.5 | 51.67±0.5 | 51.23±0.5 | 51.77±0.5 | 53.87±0.5 | 54.54±0.5 |
| COS | miniImageNet | ResNet-12 | 51.23±0.5 | 51.53±0.5 | 51.31±0.5 | 51.87±0.5 | 51.72±0.5 | 54.18±0.5 | 54.98±0.5 |
| PN | ImageNet | ResNet-50 | 52.50±0.5 | 52.84±0.5 | 52.71±0.5 | 52.90±0.5 | 52.93±0.5 | 54.34±0.5 | 57.40±0.5 |
| IER | miniImageNet | ResNet-12 | 53.31±0.5 | 53.63±0.5 | 53.82±0.5 | 53.24±0.5 | 53.98±0.5 | 56.32±0.5 | 56.98±0.5 |
| MoCo v2 | ImageNet | ResNet-50 | 54.89±0.5 | 55.38±0.5 | 55.64±0.5 | 55.77±0.5 | 55.70±0.5 | 58.13±0.5 | 59.99±0.5 |
| DINO | ImageNet | ResNet-50 | 60.81±0.5 | 61.37±0.5 | 61.61±0.5 | 61.96±0.5 | 61.81±0.5 | 62.69±0.5 | 63.61±0.5 |
| CE | ImageNet | ResNet-50 | 62.34±0.5 | 62.88±0.5 | 62.90±0.5 | 63.55±0.5 | 63.18±0.5 | 65.04±0.5 | 65.87±0.5 |
| BiT-S | ImageNet | ResNet-50 | 62.41±0.5 | 62.95±0.5 | 63.15±0.5 | 63.40±0.5 | 63.40±0.5 | 65.02±0.5 | 67.05±0.5 |
| CE | ImageNet | Swin-B | 64.03±0.5 | 64.46±0.5 | 64.38±0.5 | 65.22±0.5 | 65.01±0.5 | - | 69.12±0.5 |
| DeiT | ImageNet | ViT-B | 64.20±0.5 | 64.62±0.5 | 64.43±0.5 | 65.31±0.5 | 65.11±0.5 | 66.25±0.5 | 69.12±0.5 |
| DINO | ImageNet | ViT-B | 64.86±0.5 | 65.36±0.5 | 65.31±0.5 | 66.05±0.5 | 65.91±0.5 | 67.26±0.5 | 67.89±0.5 |
| CE | ImageNet | ViT-B | 67.19±0.5 | 67.61±0.5 | 67.56±0.5 | 68.00±0.5 | 67.85±0.5 | 69.78±0.5 | 72.14±0.4 |
| CLIP | WebImageText | ViT-B | 67.95±0.5 | 68.68±0.5 | 69.10±0.5 | 69.85±0.5 | 68.85±0.5 | 70.42±0.5 | 74.96±0.5 |
the parameters of a feature extractor, and no adaptation algorithm has additional parameters that must be obtained from training. Thus adapting different training algorithms is as easy as applying different adaptation algorithms to different feature extractors. There exist other meta-learning algorithms (Oreshkin et al., 2018; Requeima et al., 2019; Patacchiola et al., 2022; Ye et al., 2020; Doersch et al., 2020) that meta-learn additional parameters besides a feature extractor, so their training/adaptation algorithms cannot be combined directly with other adaptation/training algorithms; these algorithms are therefore not included in our experiments. One solution is, for each such algorithm, to learn the same additional parameters while freezing the backbone of every other trained model, after which all algorithms could be compared. We expect that the rankings of both training and adaptation algorithms would remain unchanged, and we leave verifying this conjecture to future work.

Throughout the main paper, for all adaptation algorithms that have hyperparameters, we grid-search hyperparameters on the validation datasets of miniImageNet and Meta-Dataset. For Traffic Signs, which does not have a validation set, we use hyperparameters averaged over the optimal hyperparameters found on all other datasets. For the adaptation analysis experiments in Section 5, we partition ImageNet and Quick Draw to obtain a 100-class validation set; the rest is used as the test set.

# C. Additional Tables, Figures, and Analysis

# C.1. Additional Tables for Section 3.2

Table 3 shows 5-way 1-shot results similar to Table 1. Table 4 and Table 5 show similar results on miniImageNet. All results lead to the same conclusion: training and adaptation algorithms are uncorrelated. One thing to notice in Table 3 is that the CE model with a ViT-B backbone trained on ImageNet performs particularly well in the 1-shot setting.
It outperforms DINO in the 1-shot setting while underperforming it in the 5-shot setting. Also, the MAML model performs much better on Meta-Dataset than the same model does on miniImageNet (possibly due to the use of transductive BN, which gives additional, unfair flexibility on new domains). These phenomena show that although training and adaptation algorithms are uncorrelated, the ranking of training algorithms can be influenced by the choice of evaluated tasks. Further understanding of how these factors influence the performance of trained models is needed in the future.

# C.2. Additional Figures and Analysis for Section 4.1

Figure 8 and Figure 9 show the data-scaling experiments evaluated on the other 9 datasets from the BSCD-FSL benchmark and DomainNet. The general trend is similar to the trend on Meta-Dataset. ISIC shows a phenomenon similar to Omniglot's, in that

Table 4. 5-way 5-shot performance of pairwise combinations of a variety of training and adaptation algorithms conducted on the miniImageNet benchmark.
(Columns MatchingNet through Finetune are adaptation algorithms.)

| Training algorithm | Architecture | MatchingNet | MetaOpt | NCC | LR | URL | CC | TSA/eTT | Finetune |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| MAML | Conv-4 | 59.80±0.3 | 57.99±0.4 | 58.86±0.2 | 60.93±0.3 | 60.81±0.4 | 61.83±0.3 | 62.40±0.3 | 62.03±0.5 |
| PN | Conv-4 | 63.71±0.5 | 64.12±0.5 | 63.67±0.5 | 65.78±0.5 | 65.78±0.4 | 65.82±0.5 | 65.69±0.4 | 66.35±0.5 |
| CE | Conv-4 | 64.09±0.4 | 66.41±0.4 | 67.93±0.3 | 68.92±0.5 | 68.63±0.4 | 69.08±0.5 | 69.22±0.4 | 69.51±0.6 |
| MatchingNet | ResNet-12 | 69.48±0.3 | 69.71±0.3 | 69.75±0.6 | 70.92±0.4 | 70.86±0.4 | 71.00±0.4 | 71.15±0.2 | 72.31±0.4 |
| MAML | ResNet-12 | 70.27±0.3 | 68.37±0.6 | 70.09±0.4 | 71.94±0.4 | 71.33±0.3 | 72.10±0.5 | 75.70±0.5 | 76.18±0.3 |
| PN | ResNet-12 | 73.64±0.4 | 74.03±0.4 | 74.99±0.5 | 75.46±0.4 | 75.72±0.4 | 75.65±0.4 | 76.99±0.3 | 79.62±0.2 |
| MetaOpt | ResNet-12 | 75.21±0.4 | 76.51±0.5 | 77.69±0.4 | 78.09±0.5 | 78.36±0.4 | 78.43±0.4 | 80.55±0.2 | 81.44±0.2 |
| CE | ResNet-12 | 76.66±0.4 | 77.66±0.4 | 79.97±0.4 | 80.01±0.5 | 80.11±0.5 | 80.34±0.5 | 80.65±0.1 | 80.84±0.2 |
| Meta-Baseline | ResNet-12 | 77.06±0.4 | 77.59±0.4 | 79.85±0.2 | 80.54±0.5 | 80.52±0.4 | 80.77±0.4 | 80.97±0.3 | 81.42±0.2 |
| COS | ResNet-12 | 79.70±0.3 | 80.07±0.4 | 81.01±0.3 | 81.28±0.4 | 81.54±0.4 | 81.52±0.5 | 81.97±0.2 | 83.26±0.2 |
| IER | ResNet-12 | 80.37±0.3 | 81.33±0.3 | 82.80±0.3 | 83.71±0.3 | 83.83±0.3 | 84.04±0.3 | 83.53±0.3 | 84.02±0.2 |
Table 5. 5-way 1-shot performance of pairwise combinations of a variety of training and adaptation algorithms conducted on the miniImageNet benchmark.
(Columns MetaOpt through Finetune are adaptation algorithms.)

| Training algorithm | Architecture | MetaOpt | NCC | LR | URL | CC | TSA/eTT | Finetune |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| MAML | Conv-4 | 45.97±0.4 | 46.24±0.5 | 47.62±0.5 | 46.81±0.6 | 47.40±0.5 | 47.55±0.4 | 47.40±0.3 |
| PN | Conv-4 | 49.79±0.4 | 50.95±0.4 | 50.89±0.4 | 51.01±0.5 | 50.95±0.4 | 50.97±0.3 | 50.65±0.4 |
| CE | Conv-4 | 51.28±0.5 | 51.68±0.7 | 51.07±0.6 | 52.18±0.6 | 51.86±0.7 | 52.88±0.3 | 51.87±0.4 |
| MatchingNet | ResNet-12 | 54.52±0.5 | 54.96±0.5 | 54.85±0.5 | 54.84±0.6 | 54.89±0.5 | 55.27±0.4 | 55.52±0.4 |
| MAML | ResNet-12 | 56.43±0.4 | 55.80±0.7 | 57.14±0.7 | 56.06±0.8 | 57.03±0.7 | 57.86±0.4 | 58.49±0.2 |
| PN | ResNet-12 | 59.91±0.4 | 60.25±0.7 | 60.26±0.7 | 60.01±0.6 | 60.26±0.7 | 60.37±0.5 | 60.67±0.2 |
| MetaOpt | ResNet-12 | 60.40±0.3 | 60.82±0.5 | 60.40±0.5 | 61.79±0.5 | 60.91±0.5 | 61.89±0.4 | 62.58±0.4 |
| CE | ResNet-12 | 62.53±0.6 | 62.88±0.6 | 62.55±0.6 | 63.15±0.6 | 62.94±0.6 | 63.46±0.4 | 63.33±0.4 |
| Meta-Baseline | ResNet-12 | 63.99±0.2 | 64.92±0.7 | 64.84±0.7 | 64.55±0.7 | 64.91±0.7 | 64.92±0.3 | 64.97±0.2 |
| COS | ResNet-12 | 64.06±0.3 | 64.73±0.9 | 64.71±0.9 | 64.60±0.8 | 64.70±0.9 | 64.92±0.4 | 65.01±0.4 |
| IER | ResNet-12 | 65.05±0.1 | 66.45±0.6 | 66.17±0.6 | 66.68±0.6 | 66.48±0.6 | 66.25±0.3 | 65.86±0.4 |
few-shot performance may not improve when we use larger training datasets. One difference is that for MoCo, few-shot performance does always improve on ISIC, while it does not always improve on Omniglot. Also, MoCo performs well on ChestX while falling behind CE and PN on all other datasets. These results show that the knowledge learned by MoCo is somewhat different from that of PN and CE, and that this knowledge is useful for classification tasks on ISIC and ChestX. Previous work (Zhao et al., 2021) has shown that contrastive learning models like MoCo tend to learn more low-level visual features that are easier to transfer. We thus conjecture that low-level knowledge is more important for tasks on some datasets such as ChestX and ISIC. This indicates that the design of the training objective should take the test dataset into account, so a one-size-fits-all solution may not exist. We also notice that all datasets in DomainNet except Quick Draw exhibit similar scaling patterns. In DomainNet, each dataset has the same set of classes but differs in domain. Thus we can infer that the test domain is not the key factor that influences the required training knowledge; the choice of classes is. Luo et al. (2022) define a new notion of task distribution shift that measures the difference between tasks, taking classes into consideration. It is future work to see whether task distribution shift is the key factor that influences the required training knowledge for each test dataset.

Figures 10-12 depict comparisons of the two data scaling approaches for CE, PN, and MoCo. We can see that for CE and PN, increasing the number of classes is far more effective than increasing the number of samples per class. However, for MoCo, the two data scaling approaches yield similar performance at every data ratio used for training.
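The class-scaling trend can be quantified the way Figures 13-15 do: regress logit-transformed accuracy on the log number of training classes and report the correlation coefficient. A minimal sketch with hypothetical accuracies (the numbers below are made up for illustration, not our measured data):

```python
import math

def logit(p: float) -> float:
    """Log-odds transform; accuracies must lie strictly in (0, 1)."""
    return math.log(p / (1.0 - p))

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / math.sqrt(vx * vy)

# Hypothetical few-shot accuracies at increasing numbers of training classes.
num_classes = [10, 20, 50, 100, 300, 500, 1000]
accuracies = [0.42, 0.48, 0.55, 0.61, 0.69, 0.72, 0.77]

xs = [math.log(c) for c in num_classes]
ys = [logit(a) for a in accuracies]
r = pearson_r(xs, ys)  # near 1 when the data follow the logit-linear scaling law
```

A correlation coefficient close to 1 on these transformed axes is what the fits in Figures 13-15 report for most training algorithm/dataset pairs.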
Thus we can infer that self-supervised algorithms, which do not use labels for supervision, are indeed not influenced by the number of classes. Because self-supervised algorithms do not rely on labels, they treat each sample equally, especially contrastive learning methods. Thus, for self-supervised algorithms, the total number of training samples is the only variable of interest. While this makes self-supervised algorithms particularly suitable for learning on datasets with scarce classes, it also hinders them from scaling well to datasets with a large number of classes, e.g., ImageNet-21K or JFT-300M (Sun et al., 2017).

Figures 13-15 plot linear fits of few-shot performance vs. the number of training classes on logit-transformed axes. We can see that the linear relationship is evident in most circumstances (most correlation coefficients are larger than 0.9). Thus we

![](images/fff94810b0650d93b60472df09e11a9e6baeab3ed4e0fd5662919b2f8a52ac2b.jpg)

![](images/3e130b1cfbac61b4fa4056eb1fc3694e0fc118ba03633ad47ce6205086a71d11.jpg)

![](images/dbf496ee8c5e264659193cbaf815a3178becaab16a7d15732998af468c2a4aa9.jpg)

![](images/67589e379aa5780060dc1de79ae1941e5841982dc344293441fe85d10af58ad1.jpg)

![](images/74b91884a263ab9661af3a8581efd8f9d7c2bab981f8647c4c23278cb719f455.jpg)
Figure 8. Results on the other 9 datasets from the BSCD-FSL benchmark and DomainNet, showing the effect of sample size per training class on few-shot classification performance. The plot follows Figure 1.

![](images/a84529f6bd9c43037043ae73d437b7f8b63d2b42d8a16a0db25f3b5b614d20b1.jpg)

![](images/f83e767ce7a890240f9077b35265395dab9ed3d64e14e1797852ad355770550d.jpg)

![](images/848a7bd987ea61076989262edc6df3190761ca879c3989db2fd2cd389ae6225c.jpg)

![](images/7e3997259d8bbaa874ecf99bc6e59b5b83dfa4e08cd2a3aa096bdb97cb7b9d76.jpg)

have verified the discovery of neural scaling laws with respect to the number of training classes.

# C.3.
Detailed Results of Figures in Section 4.2

Table 6 and Table 7 show the detailed performance of the supervised and self-supervised models in Section 4.2.

# C.4. Additional Analysis for Section 5

In Figure 5, the few-shot performance of different algorithms is much closer on ImageNet than on Quick Draw. While LR, Finetune, and MetaOPT all follow the power law, their rates are different. All query-support matching algorithms perform similarly to NCC on ImageNet, showing their difficulty in utilizing their capacity to generalize to in-domain tasks. We notice that the Cosine Classifier (CC), a metric-based method, performs much better than other metric-based methods when the number of shots is large on ImageNet. This verifies that it is query-support matching that makes algorithms scale poorly, not the use of a metric space. We also notice that the behaviors of different algorithms are quite different. While Logistic Regression (LR) performs relatively well as the number of shots increases, its performance quickly drops as the number of ways increases. The ranking of other algorithms such as CC and MetaOPT changes across situations. It is future work to figure out what influences the performance of these algorithms.

# D. Finetune has High Adaptation Cost

For adaptation algorithms like NCC, MatchingNet, Logistic Regression, and MetaOPT, all samples of a task need only go through a single forward pass, so adaptation can be very quick; usually, one task can be completed within one second. For adaptation algorithms like CC and URL, a linear layer needs to be learned during adaptation, so these methods require several forward and backward passes to update the linear layer; one task can be completed in several seconds. For adaptation algorithms like Finetune and partial-finetune algorithms such as TSA and eTT, gradients must be back-propagated through the whole network, and the optimal number of epochs is usually much higher.
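To make the cheap end of this spectrum concrete: after the single forward pass that embeds the support set, NCC's entire "adaptation" is one averaging step, and prediction is a nearest-centroid lookup. A minimal sketch on pre-extracted, L2-normalized features (pure Python, illustrative only — the toy features and names are our own, not from any released implementation):

```python
import math
from collections import defaultdict

def l2_normalize(v):
    """Scale a vector to unit Euclidean norm."""
    n = math.sqrt(sum(x * x for x in v))
    return [x / n for x in v]

def ncc_adapt(support_feats, support_labels):
    """NCC 'adaptation': average the support features of each class into a centroid."""
    buckets = defaultdict(list)
    for f, y in zip(support_feats, support_labels):
        buckets[y].append(f)
    return {y: l2_normalize([sum(col) / len(fs) for col in zip(*fs)])
            for y, fs in buckets.items()}

def ncc_predict(centroids, query_feat):
    """Classify a query by cosine similarity to the nearest class centroid."""
    q = l2_normalize(query_feat)
    return max(centroids, key=lambda y: sum(a * b for a, b in zip(centroids[y], q)))

# Toy 2-way 2-shot task in a 2-D feature space.
support = [[1.0, 0.1], [0.9, 0.0], [0.0, 1.0], [0.1, 0.9]]
labels = ["cat", "cat", "dog", "dog"]
centroids = ncc_adapt(support, labels)
```

No gradient steps are involved, which is why such methods finish a task within a second. Finetune-style methods, by contrast, must back-propagate through the entire backbone for many epochs.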
So for these algorithms, one task can take from several minutes to several hours to complete, depending on the size of the support + +![](images/6a03ffc35f62cea7a048a25e8c086fa6f72c7f0614210ad68ecab357716b6a66.jpg) + +![](images/1404a7ebd339b30da89e252c3548407541b19929dfd6f841bdc671f06d5bd6e5.jpg) + +![](images/8e431ca653512603043bc89dbb40bbd8940d87b8f37ccf5e46293835d53f1aad.jpg) + +![](images/01b89f3a2ebbfa47cfe0b3b90cdc4cb7f83a49b0299001f6df4021d09fbbeca7.jpg) + +![](images/7bb7662ff1014f9daa7e95e422b15f75f101fcb68aa6b01d45de396514b98e4e.jpg) + +![](images/c6d707bf019e9f008cd5e20c7fbaf9ff5fbfff0570c20b1f4f3fbc0dbc18f7c3.jpg) + +![](images/a03c0974ee6d1f03c415bca6a4257444249407445af846f7e2b164b7ab47ed4a.jpg) +Figure 9. Results of other 9 datasets from BSCD-FSL benchmark and DomainNet about the effect of the number of training classes on few-shot classification performance. The plot follows Figure 2. + +![](images/4752c91df82ef96b2affba179da63bf5c0a6d776932601e080c89615cc1f0a2b.jpg) +0 10 20 50 100 300 500 1000 Number of classes used for training + +![](images/9e6bd970da79c10cec7535a3b9ec1007ffb45917ec28672f514edf454a56ed2f.jpg) + +![](images/0b85acdae427490a0909b9a63e5ceb2241355d6b82709fce7af7d01804816723.jpg) + +![](images/3da3c89df400ca6488da519d2c5a1a65346d5524c8634db7f72370864de569c6.jpg) + +![](images/c8b803cb7ce8883d0074066e5bafdd2f59172390d43cf68eb03c84586d8edcc8.jpg) + +![](images/7b95e82ae69fa72996f26947b742cab36652916236c6a91cced608c26f705335.jpg) + +![](images/7c2d0f462ef4a6dffa67997644b1ce8c033bf381db96c78f45800d021d2bf412.jpg) + +![](images/6c7c9e31f842b3b1b5a7e3c37eeb51da3afb78f42377f8b5e694b36780f13d22.jpg) + +![](images/61b469263892ab33df2db0a033f4cf1f944abcddad1bcd36719ffc4507a575e6.jpg) +Figure 10. Comparisons of the two data scaling approaches for CE: scaling with sample size per training class and scaling with the number of training classes. 
+ +![](images/f35d19a82088b4a1149eaf9e8a03bfbf38f379ebfd96bb131b89215b134ccb42.jpg) + +![](images/b46ee4a6d5188659d79a274a85a478bcc1f9af7cb1df1e5192b8867e81fe207e.jpg) + +![](images/4ce48d7e467ed8d3f1be2fcd2c1b6c404b12e8c2630c8dda76c3470a6824667c.jpg) + +![](images/7ff769fe673abc72310c3009164e618367db70277cf70b04e1a3333afdd9e0fd.jpg) + +![](images/3a2acffd1d286ce89298427bf13ed83d8f1b38942bc05409a9bc7d9ee4798bcd.jpg) + +![](images/d4202dbe0ec09ebca28f7584594071ef3b058cd7aee39f37c86c41734acf8f57.jpg) + +![](images/9f000caf4a7e1108dbc727a68728ee69a0c316f42b5aa4c3963015fb9a4c5585.jpg) + +![](images/c7101ef1c169900e38db20afe3bf35eab9bef2d823b3e528abfba03b183ef6b0.jpg) + +![](images/d79659c64dd4ec53cbbc1eddcb701440381488a975aa5a9f82d7322a8a8bccf5.jpg) + +![](images/2aa2894984862c191f5325a0729661b0302a091a8a252c04fbf401b1507cb260.jpg) + +![](images/f47910b85ac04a1df57c68166a37ab690de97df3cfca780b80b6f194744f1100.jpg) + +![](images/ebe954b27919de06d37a308113af36be15ba988901cc2f947524272293e59831.jpg) +Figure 11. Comparisons of the two data scaling approaches for PN. + +![](images/0cf00cdd04d863c8c48f713d3668a23aeaed0a7423a57de49870ced96980ab9a.jpg) + +![](images/201eb419a03f7e6c426acd78adeed9ccf9a22b32265edf6805d27e40f8c006e5.jpg) + +![](images/9e681fd2573e2fd4b8b276bc1843e230b1f749f9e9ef762d3ae87b03de2a140d.jpg) + +![](images/6400e3ca086005c9da7f9eddd2de37724b58c6c988ee2ffd3d63a505e7710d4d.jpg) + +set. In practical scenarios, few-shot learning usually requires real-time response, so such a long time waiting for one task is intolerable. We wish some methods would alleviate this problem. For example, in the future, we may use the equivalent network (He et al., 2021; 2023) to reduce the search space. 
+ +![](images/c8f2473d335cd912c3891bf9b0aae6e61209db0df2fc62f64ee698baf08dc2b6.jpg) + +![](images/d2e1a70d08c8c1524a1e0d66f85107481e95ae49c55e8b09b6278f5430f07e18.jpg) + +![](images/8eb8cb6a1d86d29ae47c65a90c7baacd56f86da578669fec6989b9e398bff710.jpg) + +![](images/006ce15eee84276d479af3f4ecfe4e8cb426344e0d353461844439c6b69d220d.jpg) + +![](images/a410b7fbb787b1d092a88146b83edfc95908214811e013a44b513fc521f4f2e2.jpg) + +![](images/84760560b0fca7ec6e1a97d28b6df49f25fece5614a068b8243741649128b6c3.jpg) + +![](images/949be498416ef04a7532edec93c87df72d2c4f9fde8485e3c9c3f3dcf66cdb07.jpg) +Figure 12. Comparisons of the two data scaling approaches for MoCo. + +![](images/a4099bd51013dc4410ac084435f4aee2d984db7b7d41f8ac4351fb9a8d001474.jpg) + +![](images/93a2b2dae32236dfa9fa7bef1ef57979fb96b366bb6edc128e3539346322016d.jpg) + +![](images/d89af1c79e890de5a36672e84bf161e8ae2b634a2c2b5ebcdd3b12f59cd16607.jpg) + +![](images/a6c0b778574d4a8e69eda7d94c35019ca50e7b7c52a8059c94b8d2d1d6fb41b1.jpg) + +![](images/31944f6c2f91a5e188cb4f952da1a0ad71d688a678d3ab562ed264d02e0f8c6e.jpg) + +![](images/57e5219d890c0eb822a0edee10d4c3fc90573a1d9e3be1e4878ea7c5ddbca299.jpg) + +![](images/e86852118040291f9a85e774f6b7fb0fa2ee6f8a68ea7b1fc5199010a9895898.jpg) + +![](images/1046e087ac530a70e4fb223e8d34ea0cc76f7c6cd8ea852ddb165ac8066f43f1.jpg) + +![](images/a14e80c65fa96332c23490d4f0e90d3c1df41bf8544d1aba995e0d71d19303bc.jpg) + +![](images/355d4a99c1171d2627f8c4607b9744496cac54681a3a1357387a03ef2cbc1c61.jpg) +Figure 13. Linear fit of few-shot performance of CE vs the number of training classes on logit-transformed axes. "r" refers to the correlation coefficient between two axes of data. 
+ +![](images/e1cce8d0315469c58824275f0dfaab12584b0af5d7f02079b1edefa1049d8f0d.jpg) + +![](images/277454cc088b47f7f5ae606609b5285e70e46e5aee22c0bc25d27e88e32f4053.jpg) + +![](images/4dd9a3acb3dda36649c8601270af2af840e81fc8229d17641047533e0024bb01.jpg) + +![](images/a7ad291feba77976757e5be69e02b8058c4e1279f8f8ecd6b9c192a5b6f7e351.jpg) + +![](images/d438117a703d95c2c40cd21bd49f611772225cc199dd7619e9b0a6cd5a769ac8.jpg) + +![](images/e730085a5941f9dfadec177d8c876222fb1148fa7b307d83eac7249d8570e1eb.jpg) + +![](images/a412204546b5e997290882d5e7ee95246e39bc304a578e79d8f7750de32fa709.jpg) + +![](images/ff0db25e5d8764a0fadd951ebea1686367abf61489d3cd27755774caeaa1b327.jpg) + +![](images/9e2eac23b5cab271013bf27fc7465430ee9da3dab752286a84a09bbd325a0fa7.jpg) + +![](images/87f16fc7c5730a6a6523ddd51c6b0d021c48bb049456aff30280cb69379680ce.jpg) + +![](images/c766a734b0a6f0a7ac0f99082d6f2a7a960c8281ace5141ebd33e2894c01e760.jpg) +Figure 14. Linear fit of few-shot performance of PN vs the number of training classes on logit-transformed axes. "r" refers to the correlation coefficient between two axes of data. 
+ +![](images/d1f9f17d1fbf78c75a711de862b78797c58167404d4ce25a10225a1c68357f40.jpg) +Number of classes used for training (PN) + +![](images/8a95d9113315d61a7812df5a9e67e4c61570894a58a9ba59d9cc11348425be52.jpg) + +![](images/d79a1ce61c18a06e05cd73f01e8086e513f0d6300e7e6b985d12ce9adf5a17c8.jpg) + +![](images/64779f80ed5e88ed16eafd1923ecf27a0086ac9634fdd42fc4627b3456b4feba.jpg) + +![](images/f4a74e26ba7be7bebcbb4187c76ab2571056290a8af32d22848754377c25e0f6.jpg) + +![](images/d0a900ec469eaa1506c89f43ca0a22acf9c4347031191d8ffda516dcaccb17d6.jpg) + +![](images/6f57ecd7405a71f1532c4b03f9f39791d5ef00dc743f24d01e699bdc2a275a31.jpg) + +![](images/17e4b92a9fc65361f1aecd1778a198d3eaeed78ae89966e5496affcc836a657a.jpg) + +![](images/8c62a20e5348b875e6fdc012096113755c069ffbb484aab9a05ee11100b95732.jpg) + +![](images/143db65b888426098dd5deab6c4e8b4d420d52b253fd16f9fd4a3aa649df6ef8.jpg) + +![](images/089ad86a23f057d23ea5aca03c41a49c6380fed40262ce5ff3e173f12b134f94.jpg) +Figure 15. Linear fit of few-shot performance of MoCo vs the number of training classes on logit-transformed axes. "r" refers to the correlation coefficient between two axes of data. + +![](images/3b541d1ca4c70e9d0d92a707c1c238f52b7c386fc2da64000a11c82da32ca890.jpg) + +![](images/025d4590a2d2122be0617fb2178e33a961dfdefc947b500d4c70ffa8b7479b08.jpg) + +![](images/2b03833d57a665344c24183d29b2de87561cabdd590d87f9cbfc629bf072fe86.jpg) + +![](images/f7a6baecb5c74e3ff8dac5e0ded99b250d056177bf03175ab372885a07e073c6.jpg) + +![](images/f8580ca96bcad5af1c4a3c2d8a20b1cabc730363bd2e941abd05d980439f61c7.jpg) + +Table 6. Detailed results of supervised CE models in Figure 3. Bold/underline is the best/second best in each column. + +
| Architecture | ImageNet Top-1 | Avg few-shot | ImageNet-val | Omniglot | Aircraft | Birds | Textures | Quick Draw | Fungi | VGG Flower | Traffic Signs | MSCOCO |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| ResNet-18 | 68.55 | 79.29 | 96.76 | 92.73 | 59.19 | 90.95 | 79.81 | 70.16 | 73.97 | 94.31 | 78.22 | 74.24 |
| ResNet-34 | 72.50 | 79.18 | 97.66 | 92.76 | 58.65 | 91.71 | 81.57 | 68.57 | 73.54 | 93.80 | 76.27 | 75.77 |
| ResNet-50 | 75.27 | 79.33 | 98.15 | 92.93 | 59.51 | 92.02 | 82.26 | 67.67 | 72.68 | 94.33 | 75.17 | 77.41 |
| ResNet-101 | 76.74 | 79.89 | 98.46 | 92.98 | 60.10 | 92.90 | 81.97 | 69.10 | 74.09 | 94.54 | 75.50 | 77.84 |
| ResNet-152 | 77.73 | 79.02 | 98.62 | 91.33 | 57.20 | 93.36 | 82.36 | 68.12 | 73.85 | 94.26 | 72.37 | 78.37 |
| Swin-T | 80.74 | 80.86 | 99.14 | 94.17 | 58.26 | 93.40 | 82.70 | 73.70 | 74.77 | 95.23 | 76.30 | 79.20 |
| Swin-S | 82.59 | 79.41 | 99.33 | 93.17 | 56.94 | 91.89 | 81.07 | 74.14 | 72.01 | 93.25 | 72.68 | 79.58 |
| Swin-B | 83.00 | 79.27 | 99.33 | 94.87 | 55.26 | 91.25 | 80.63 | 74.54 | 70.71 | 93.99 | 72.32 | 79.82 |
| ViT-B | 80.74 | 80.36 | 98.92 | 94.98 | 58.16 | 92.23 | 80.48 | 73.02 | 71.71 | 93.45 | 81.33 | 77.83 |
| ViT-L | 79.50 | 80.34 | 98.80 | 93.85 | 59.26 | 93.04 | 81.32 | 74.53 | 72.07 | 94.80 | 78.21 | 76.02 |
| DenseNet-121 | 73.60 | 80.78 | 97.52 | 94.88 | 61.62 | 92.89 | 81.62 | 71.95 | 74.30 | 94.73 | 79.58 | 75.49 |
| DenseNet-161 | 76.44 | 81.42 | 98.05 | 93.92 | 65.87 | 93.00 | 82.21 | 70.71 | 74.42 | 95.40 | 80.09 | 77.12 |
| DenseNet-169 | 75.07 | 80.65 | 97.78 | 93.60 | 61.71 | 92.43 | 81.77 | 69.55 | 74.28 | 94.98 | 81.21 | 76.29 |
| DenseNet-201 | 75.86 | 81.40 | 97.97 | 94.91 | 61.97 | 93.32 | 82.24 | 73.31 | 73.08 | 95.33 | 81.33 | 77.09 |
| RegNetY-1.6GF | 76.01 | 81.53 | 97.88 | 94.19 | 62.72 | 93.85 | 82.84 | 72.00 | 77.08 | 95.97 | 77.82 | 77.31 |
| RegNetY-3.2GF | 77.63 | 81.49 | 98.22 | 93.84 | 63.25 | 94.07 | 82.70 | 72.26 | 77.66 | 95.84 | 75.89 | 77.93 |
| RegNetY-16GF | 79.39 | 81.21 | 98.57 | 94.82 | 62.16 | 94.02 | 82.46 | 72.34 | 75.79 | 95.68 | 75.03 | 78.62 |
| RegNetY-32GF | 79.79 | 80.37 | 98.69 | 94.24 | 59.72 | 93.57 | 82.23 | 72.41 | 74.37 | 95.80 | 72.06 | 78.94 |
| RegNetX-400MF | 71.45 | 79.10 | 97.16 | 93.20 | 57.76 | 91.57 | 80.91 | 70.06 | 73.46 | 94.25 | 75.50 | 75.14 |
| RegNetX-800MF | 73.86 | 80.24 | 97.65 | 93.62 | 59.13 | 92.36 | 82.33 | 69.69 | 75.78 | 95.07 | 77.49 | 76.70 |
| MobileNetV2 | 70.54 | 80.90 | 96.86 | 94.26 | 61.03 | 91.87 | 80.61 | 73.30 | 76.13 | 95.56 | 80.64 | 74.70 |
| MobileNetV3-L | 72.91 | 80.48 | 94.71 | 94.91 | 56.63 | 91.45 | 80.68 | 76.11 | 74.65 | 96.49 | 81.22 | 72.21 |
| MobileNetV3-S | 66.10 | 78.06 | 91.78 | 93.45 | 53.79 | 88.05 | 77.03 | 74.64 | 72.50 | 94.16 | 80.21 | 68.72 |
| VGG-11 | 67.97 | 75.99 | 93.13 | 93.08 | 54.21 | 85.19 | 78.89 | 65.61 | 70.57 | 93.59 | 72.81 | 69.95 |
| VGG-11-BN | 69.54 | 77.90 | 93.99 | 94.14 | 58.48 | 87.45 | 81.01 | 64.46 | 73.28 | 94.83 | 76.76 | 70.66 |
| VGG-13 | 68.93 | 76.78 | 93.96 | 93.98 | 54.94 | 87.16 | 79.71 | 66.61 | 70.64 | 93.29 | 73.91 | 70.78 |
| VGG-13-BN | 70.64 | 78.01 | 94.64 | 92.84 | 58.83 | 88.87 | 81.56 | 64.81 | 74.26 | 94.83 | 74.43 | 71.64 |
| VGG-16 | 70.86 | 77.24 | 95.63 | 92.66 | 55.63 | 89.91 | 79.88 | 62.25 | 72.16 | 93.62 | 76.00 | 73.02 |
| VGG-16-BN | 72.68 | 78.56 | 96.33 | 91.65 | 60.85 | 91.32 | 81.84 | 62.08 | 74.45 | 93.82 | 76.70 | 74.33 |
| VGG-19 | 71.41 | 77.76 | 96.25 | 94.96 | 57.42 | 90.78 | 79.52 | 64.16 | 71.08 | 91.43 | 76.43 | 74.03 |
| VGG-19-BN | 73.26 | 79.58 | 96.77 | 92.18 | 64.29 | 91.80 | 81.57 | 65.23 | 73.43 | 93.15 | 79.82 | 74.80 |
| ConvNeXt-T | 81.69 | 78.22 | 97.91 | 94.76 | 54.78 | 91.22 | 78.45 | 72.74 | 65.88 | 93.62 | 77.44 | 75.12 |
| ConvNeXt-S | 82.84 | 77.41 | 98.42 | 95.54 | 53.58 | 88.60 | 78.18 | 72.53 | 67.26 | 92.63 | 76.10 | 72.29 |
| ConvNeXt-B | 83.35 | 77.37 | 98.66 | 95.65 | 54.58 | 89.09 | 76.79 | 71.86 | 66.99 | 92.16 | 74.45 | 74.73 |
| ConvNeXt-L | 83.69 | 76.62 | 98.99 | 94.31 | 53.50 | 88.72 | 76.92 | 69.44 | 66.04 | 92.22 | 72.32 | 76.10 |
Table 7. Detailed results of self-supervised models in Figure 4. Bold/underline is the best/second best in each column.
| Algorithm | Architecture | ImageNet Top-1 | Avg few-shot | ImageNet-val | Omniglot | Aircraft | Birds | Textures | Quick Draw | Fungi | VGG Flower | Traffic Signs | MSCOCO |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| BYOL | ResNet-50 | 62.20 | 77.91 | 92.72 | 92.96 | 52.99 | 80.78 | 83.81 | 73.34 | 70.77 | 96.25 | 81.04 | 69.30 |
| SwAV | ResNet-50 | 62.10 | 74.53 | 93.37 | 92.66 | 45.37 | 71.12 | 85.20 | 65.71 | 69.84 | 95.18 | 73.72 | 71.98 |
| SwAV | ResNet-50-x2 | 62.59 | 74.93 | 92.57 | 94.71 | 45.41 | 68.11 | 85.17 | 68.34 | 70.16 | 95.70 | 75.40 | 71.37 |
| SwAV | ResNet-50-x4 | 63.65 | 74.60 | 92.40 | 93.89 | 44.99 | 66.26 | 85.71 | 67.71 | 70.16 | 95.53 | 76.38 | 70.80 |
| SwAV | ResNet-50-x5 | 61.37 | 75.99 | 93.38 | 92.71 | 46.41 | 69.65 | 86.77 | 67.27 | 72.16 | 96.60 | 79.72 | 72.62 |
| DINO | ViT-S/8 | 76.94 | 83.33 | 98.16 | 96.61 | 61.20 | 95.33 | 85.93 | 73.48 | 80.06 | 98.10 | 80.50 | 78.71 |
| DINO | ViT-S/16 | 72.48 | 81.52 | 97.28 | 95.01 | 56.88 | 94.80 | 85.63 | 73.00 | 79.51 | 97.88 | 74.81 | 76.19 |
| DINO | ViT-B/16 | 74.15 | 81.39 | 97.91 | 95.77 | 50.72 | 92.39 | 86.15 | 73.58 | 79.77 | 98.28 | 78.25 | 77.63 |
| DINO | ViT-B/8 | 75.74 | 82.85 | 98.34 | 96.83 | 64.67 | 89.71 | 87.02 | 72.39 | 78.83 | 98.21 | 79.03 | 78.96 |
| DINO | ResNet-50 | 64.09 | 77.37 | 93.98 | 93.72 | 51.60 | 77.48 | 84.78 | 65.07 | 75.51 | 96.98 | 78.84 | 72.37 |
| MoCo-v1 | ResNet-50 | 41.27 | 67.67 | 87.98 | 88.05 | 41.44 | 61.77 | 77.96 | 61.06 | 61.69 | 89.39 | 62.64 | 65.01 |
| MoCo-v2-200epoch | ResNet-50 | 51.72 | 70.33 | 93.10 | 90.79 | 36.12 | 65.43 | 82.28 | 67.49 | 60.52 | 91.00 | 68.52 | 70.81 |
| MoCo-v2 | ResNet-50 | 59.19 | 71.24 | 94.70 | 89.73 | 34.38 | 70.32 | 84.03 | 66.13 | 61.74 | 91.92 | 70.78 | 72.10 |
| MoCo-v3 | ResNet-50 | 66.61 | 79.95 | 94.91 | 94.61 | 55.45 | 87.31 | 84.75 | 72.27 | 72.75 | 96.68 | 83.44 | 72.32 |
| MoCo-v3 | ViT-S | 65.46 | 76.75 | 94.22 | 93.41 | 45.94 | 84.66 | 83.77 | 73.21 | 69.56 | 94.99 | 72.81 | 72.39 |
| MoCo-v3 | ViT-B | 69.32 | 78.40 | 95.80 | 94.66 | 47.08 | 85.29 | 84.74 | 75.19 | 72.53 | 96.04 | 76.33 | 73.70 |
| SimSiam | ResNet-50 | 53.57 | 73.88 | 92.07 | 92.87 | 44.38 | 68.01 | 81.84 | 70.05 | 66.67 | 94.67 | 77.05 | 69.37 |
| Barlow Twins | ResNet-50 | 63.26 | 77.04 | 93.83 | 92.23 | 49.89 | 79.07 | 84.73 | 68.31 | 71.23 | 96.35 | 81.04 | 70.53 |
| MAE | ViT-B | 20.66 | 46.77 | 39.94 | 93.45 | 26.89 | 35.54 | 33.04 | 72.07 | 30.66 | 52.6 | 41.64 | 35.01 |
| MAE | ViT-L | 42.63 | 60.38 | 72.40 | 95.61 | 40.42 | 49.91 | 61.76 | 75.85 | 46.74 | 77.07 | 43.40 | 52.70 |
| MAE | ViT-H | 38.50 | 61.43 | 72.32 | 95.36 | 40.96 | 50.97 | 63.64 | 75.11 | 48.91 | 80.02 | 44.64 | 53.27 |
| IBOT | Swin-T/7 | 73.61 | 81.26 | 97.74 | 97.05 | 52.37 | 88.36 | 85.40 | 77.16 | 77.05 | 97.46 | 79.16 | 77.37 |
| IBOT | Swin-T/14 | 74.50 | 81.79 | 97.97 | 96.65 | 51.67 | 93.21 | 85.62 | 76.86 | 79.64 | 97.75 | 76.90 | 77.83 |
| IBOT | ViT-S | 73.12 | 81.25 | 97.54 | 95.67 | 53.97 | 93.91 | 85.32 | 73.77 | 78.23 | 97.66 | 75.82 | 76.86 |
| IBOT | ViT-B | 75.28 | 80.16 | 98.04 | 95.8 | 47.01 | 91.57 | 85.21 | 73.78 | 76.57 | 97.81 | 75.63 | 78.02 |
| IBOT | ViT-L | 76.37 | 78.59 | 98.27 | 96.18 | 45.60 | 84.78 | 84.02 | 76.27 | 72.93 | 97.18 | 70.92 | 79.46 |
| EsViT | ResNet-50 | 69.91 | 75.14 | 97.21 | 88.21 | 42.87 | 80.45 | 84.85 | 62.87 | 70.33 | 95.04 | 75.90 | 75.70 |
| EsViT | Swin-T | 74.32 | 81.31 | 97.84 | 96.25 | 50.78 | 94.44 | 85.52 | 74.80 | 78.57 | 97.83 | 75.64 | 77.72 |
| EsViT | Swin-S | 76.19 | 79.43 | 98.55 | 94.93 | 46.50 | 86.50 | 85.52 | 72.77 | 76.41 | 97.15 | 75.71 | 79.33 |
| EsViT | Swin-B | 77.33 | 77.77 | 98.77 | 95.59 | 37.74 | 83.57 | 83.76 | 71.88 | 73.98 | 96.62 | 76.64 | 80.19 |
| oBoW | ResNet-50 | 59.09 | 70.93 | 93.79 | 92.85 | 37.98 | 68.85 | 78.86 | 67.91 | 62.93 | 89.45 | 67.09 | 72.49 |
| InstDisc | ResNet-50 | 38.13 | 66.85 | 84.70 | 87.18 | 43.25 | 60.72 | 74.23 | 63.84 | 61.34 | 89.54 | 59.42 | 62.14 |
\ No newline at end of file diff --git a/acloserlookatfewshotclassificationagain/images.zip b/acloserlookatfewshotclassificationagain/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..816e94b93ae128c6a24a5086b77a67db92b9d6bb --- /dev/null +++ b/acloserlookatfewshotclassificationagain/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:6df45a7dcfee3144f71360bcd11cfd5b85f8cd09a355b2de990648f28ef90df0 +size 1963392 diff --git a/acloserlookatfewshotclassificationagain/layout.json b/acloserlookatfewshotclassificationagain/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..d9c6bdbd8a987b5171229563a06dedd2d09fd2fa --- /dev/null +++ b/acloserlookatfewshotclassificationagain/layout.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:ce703cf17f3a2b4c533b568eab89135354ad0490265f4f90d4f3b510f4440ebb +size 763955 diff --git a/acloserlookatselfsupervisedlightweightvisiontransformers/7e5a55d5-3963-44b6-830b-97d8f7478c51_content_list.json b/acloserlookatselfsupervisedlightweightvisiontransformers/7e5a55d5-3963-44b6-830b-97d8f7478c51_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..6c88b0d856dd8285ecbbe3f77178e467b3cfb180 --- /dev/null +++ b/acloserlookatselfsupervisedlightweightvisiontransformers/7e5a55d5-3963-44b6-830b-97d8f7478c51_content_list.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:7642ee0c9fbe4c214b9caf897f6128b3e9ebd7a29609dac2f892a4e2b884385c +size 127459 diff --git a/acloserlookatselfsupervisedlightweightvisiontransformers/7e5a55d5-3963-44b6-830b-97d8f7478c51_model.json b/acloserlookatselfsupervisedlightweightvisiontransformers/7e5a55d5-3963-44b6-830b-97d8f7478c51_model.json new file mode 100644 index 0000000000000000000000000000000000000000..0edc27831d7a41adaadc43ea844a29a67effe7bc --- /dev/null +++ 
b/acloserlookatselfsupervisedlightweightvisiontransformers/7e5a55d5-3963-44b6-830b-97d8f7478c51_model.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:10850bfcb27adb5c695c29c3f47e1ee4ab52c43b4fc66a6bd707d702657f7e02 +size 151140 diff --git a/acloserlookatselfsupervisedlightweightvisiontransformers/7e5a55d5-3963-44b6-830b-97d8f7478c51_origin.pdf b/acloserlookatselfsupervisedlightweightvisiontransformers/7e5a55d5-3963-44b6-830b-97d8f7478c51_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..03c91e025dc3c2c1c1e2affbdfa3eb657b8b1500 --- /dev/null +++ b/acloserlookatselfsupervisedlightweightvisiontransformers/7e5a55d5-3963-44b6-830b-97d8f7478c51_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:baeadcb53017c1a8c98b71cf1da59f1c3a6e1400e8f228fcd872db6498d2ee0a +size 3634900 diff --git a/acloserlookatselfsupervisedlightweightvisiontransformers/full.md b/acloserlookatselfsupervisedlightweightvisiontransformers/full.md new file mode 100644 index 0000000000000000000000000000000000000000..3465e801fd9776c80ef89cac9b5e27b43d048465 --- /dev/null +++ b/acloserlookatselfsupervisedlightweightvisiontransformers/full.md @@ -0,0 +1,495 @@ +# A Closer Look at Self-Supervised Lightweight Vision Transformers + +Shaoru Wang $^{12}$ Jin Gao $^{12}$ Zeming Li $^{3}$ Xiaoqin Zhang $^{4}$ Weiming Hu $^{125}$ + +# Abstract + +Self-supervised learning on large-scale Vision Transformers (ViTs) as pre-training methods has achieved promising downstream performance. Yet, how much these pre-training paradigms promote lightweight ViTs' performance is considerably less studied. In this work, we develop and benchmark several self-supervised pre-training methods on image classification tasks and some downstream dense prediction tasks. We surprisingly find that if proper pre-training is adopted, even vanilla lightweight ViTs show comparable performance to previous SOTA networks with delicate architecture design. 
This finding challenges the recently popular belief that vanilla ViTs are not suitable for vision tasks in lightweight regimes. We also point out some defects of such pre-training, e.g., failing to benefit from large-scale pre-training data and showing inferior performance on data-insufficient downstream tasks. Furthermore, we clearly show the effect of such pre-training by analyzing the properties of the layer representations and attention maps of the related models. Finally, based on the above analyses, a distillation strategy during pre-training is developed, which leads to further downstream performance improvement for MAE-based pre-training. Code is available at https://github.com/wangsr126/mae-lite.

# 1. Introduction

Self-supervised learning (SSL) has shown great progress in representation learning without heavy reliance on expensive labeled data. SSL focuses on various pretext tasks for pre-training. Among them, several works (He et al., 2020; Chen et al., 2020; Grill et al., 2020; Caron et al., 2020; Chen et al., 2021a; Caron et al., 2021) based on contrastive learning (CL) have achieved comparable or even better accuracy than supervised pre-training when transferring the learned representations to downstream tasks.

$^{1}$ State Key Laboratory of Multimodal Artificial Intelligence Systems, Institute of Automation, Chinese Academy of Sciences $^{2}$ School of Artificial Intelligence, University of Chinese Academy of Sciences $^{3}$ Megvii Research $^{4}$ Key Laboratory of Intelligent Informatics for Safety & Emergency of Zhejiang Province, Wenzhou University $^{5}$ School of Information Science and Technology, ShanghaiTech University. Correspondence to: Jin Gao .

Proceedings of the $40^{th}$ International Conference on Machine Learning, Honolulu, Hawaii, USA. PMLR 202, 2023. Copyright 2023 by the author(s).
Recently, another trend focuses on masked image modeling (MIM) (Bao et al., 2021; He et al., 2021; Zhou et al., 2022), which perfectly fits Vision Transformers (ViTs) (Dosovitskiy et al., 2020) for vision tasks, and achieves improved generalization performance. Most of these works, however, involve large networks with little attention paid to smaller ones. Some works (Fang et al., 2020; Abbasi Koohpayegani et al., 2020; Choi et al., 2021) focus on CL on small convolutional networks (ConvNets) and improve the performance by distillation. However, the pre-training of lightweight ViTs is considerably less studied.

Efficient neural networks are essential for modern on-device computer vision. Recent studies on achieving top-performing lightweight models mainly focus on designing network architectures (Sandler et al., 2018; Howard et al., 2019; Graham et al., 2021; Ali et al., 2021; Heo et al., 2021; Touvron et al., 2021b; Mehta & Rastegari, 2022; Chen et al., 2021b; Pan et al., 2022), while little attention is paid to how to optimize the training strategies for these models. We believe the latter is also of vital importance, and the utilization of pre-training is one of the most promising approaches along this way, since it has achieved great progress on large models. To this end, we develop and benchmark recently popular self-supervised pre-training methods, e.g., CL-based MoCo-v3 (Chen et al., 2021a) and MIM-based MAE (He et al., 2021), along with fully-supervised pre-training for lightweight ViTs as the baselines on ImageNet and other classification tasks, as well as some dense prediction tasks, e.g., object detection and segmentation. We surprisingly find that if proper pre-training is adopted, even vanilla lightweight ViTs show comparable performance to previous SOTA networks with delicate design, e.g., we achieve $79.0\%$ top-1 accuracy on ImageNet with vanilla ViT-Tiny (5.7M).
The finding is intriguing since it indicates that proper pre-training could bridge the performance gap between naive network architectures and delicately designed ones to a great extent, while naive architectures usually have faster inference speed by getting rid of some complicated operators. We also point out some defects of such pre-training, e.g., failing to benefit from large-scale pre-training data and showing inferior performance on data-insufficient downstream tasks.

These findings motivate us to dive deep into the working mechanism of these pre-training methods for lightweight ViTs. More specifically, we introduce a variety of model analysis methods to study the pattern of layer behaviors during pre-training and fine-tuning, and investigate what really matters for downstream performance. First, we find that lower layers of the pre-trained models matter more than higher ones if sufficient downstream data is provided, while higher layers matter in data-insufficient downstream tasks. Second, we observe that the pre-training with MAE makes the attention of the downstream models more local and concentrated, i.e., introduces locality inductive bias, which may be the key to the performance gain. Based on the above analyses, we also develop a distillation strategy for MAE-based pre-training, which significantly improves the pre-training of lightweight ViTs. Better downstream performance is achieved especially on data-insufficient classification tasks and detection tasks.

# 2. Preliminaries and Experimental Setup

ViTs. We use ViT-Tiny (Touvron et al., 2021a), which contains 5.7M parameters, in our study to examine the effect of the pre-training on downstream performance. We adopt the vanilla architecture, consisting of a patch embedding layer and 12 Transformer blocks with an embedding dimension of 192, except that the number of heads is increased to 12 as we find it can improve the model's expressive power.
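As a quick sanity check on the 5.7M figure, the parameter count of this configuration can be reproduced from the layer shapes alone. The sketch below is illustrative only; note that raising the head count from 3 to 12 leaves the count unchanged, since the per-head dimension shrinks accordingly.

```python
# Parameter count of a vanilla ViT-Tiny: 12 blocks, embedding dim 192,
# 16x16 patches on a 224x224 input, 1000-class head (ImageNet-1k).
d, depth, patch, img, classes = 192, 12, 16, 224, 1000
tokens = (img // patch) ** 2 + 1              # 196 patch tokens + class token

def block_params(d):
    qkv   = 3 * d * d + 3 * d                 # fused QKV projection
    proj  = d * d + d                         # attention output projection
    mlp   = d * 4 * d + 4 * d + 4 * d * d + d # 2-layer MLP with ratio 4
    norms = 2 * 2 * d                         # two LayerNorms (scale + bias)
    return qkv + proj + mlp + norms           # independent of the head count

total = (
    3 * patch * patch * d + d                 # patch embedding (weight + bias)
    + tokens * d + d                          # positional embedding + class token
    + depth * block_params(d)
    + 2 * d                                   # final LayerNorm
    + d * classes + classes                   # classification head
)
print(f"{total / 1e6:.2f}M parameters")       # ~5.7M, matching the paper
```

The multi-head split only reshapes the projection outputs, which is why the 3-head and 12-head variants of ViT-Tiny share the same 5.7M parameter budget.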
ViT-Tiny is chosen for study because it is an ideal experimental object, on which almost all existing pre-training methods can be directly applied. And it has a rather naive architecture: non-hierarchical, and with low human inductive bias in design. Thus the influence of the model architecture design on our analyses can be eliminated to a great extent. + +Evaluation Metrics. We adopt fine-tuning as the default evaluation protocol considering that it is highly correlated with utility (Newell & Deng, 2020), in which all the layers are tuned by initializing them with the pre-trained models. By default, we do the evaluation on ImageNet (Deng et al., 2009) by fine-tuning on the training set and evaluating on the validation set. Several other downstream classification datasets (e.g., Flowers (Nilsback & Zisserman, 2008), Aircraft (Maji et al., 2013), CIFAR100 (Krizhevsky et al., 2009), etc.) and object detection and segmentation tasks on COCO (Lin et al., 2014) are also exploited for comparison. For a more thorough study, analyses based on linear probing evaluation are presented in Appendix B.2. + +Compared Methods. Baseline: We supervisedly train a ViT-Tiny from scratch for 300 epochs on the training set of ImageNet-1k (dubbed IN1K). It achieves $74.5\%$ top-1 accuracy on the validation set of ImageNet-1k, surpassing that in the original architecture ( $72.2\%$ (Touvron et al., 2021a)) through modifying the number of heads to 12 from 3, and further reaches $75.8\%$ by adopting our improved training recipe (see Appendix A.1), which finally serves as our strong baseline to examine the pre-training. We denote this model from supervised training as DeiT-Tiny. + +MAE: MAE (He et al., 2021) is selected as a representative for MIM-based pre-training methods, which has a simple framework with low training cost. We largely follow the design of MAE except that the encoder is altered to ViT-Tiny. 
Several basic factors and components are adjusted to fit the smaller encoder (see Appendix A.2). By default, we do pre-training on IN1K for 400 epochs, and denote the pre-trained model as MAE-Tiny. + +MoCov3: We also implement a contrastive SSL pre-training counterpart, MoCo-v3 (Chen et al., 2021a), which is selected for its simplicity. We also do 400-epoch pre-training and denote the pre-trained model as MoCov3-Tiny. Details are provided in Appendix A.3. + +Some other methods, e.g., MIM-based SimMIM (Xie et al., 2022) and CL-based DINO (Caron et al., 2021) are also involved, but are moved to Appendix B.3 due to space limitation. + +# 3. How Well Does Pre-Training Work on Lightweight ViTs? + +In this section, we first benchmark the aforementioned pretrained models on ImageNet, and then further evaluate their transferability to other datasets and tasks. + +# 3.1. Benchmarks on ImageNet Classification Tasks + +Which pre-training method performs best? We first develop and benchmark the pre-training methods on ImageNet, involving the baseline that does not adopt any pre-training, supervised pre-training on the training set of ImageNet-21k (a bigger and more diverse dataset, as roughly ten times the size of IN1K, dubbed IN21K) and the aforementioned self-supervised pre-training with MoCo-v3 and MAE. As reported in Tab. 1, most of these supervised and self-supervised pre-training methods improve the downstream performance, whilst MAE outperforms others and consumes moderate training cost. The results indicate that the vanilla ViTs have great potential, which can be unleashed via proper pre-training. It encourages us to further explore how the enhanced ViTs perform compared to recent SOTA ConvNets and ViT derivatives. + +Table 1. Comparisons on pre-training methods. We report top-1 accuracy on the validation set of ImageNet-1k. IN1K and IN21K indicate the training set of ImageNet-1k and ImageNet-21k. 
The pre-training time is measured on an $8 \times \mathrm{V}100$ GPU machine. 'ori.' represents the supervised training recipe from Touvron et al. (2021a) and 'impr.' represents our improved recipe (see Appendix A.1).
| Methods | Pre-training Data | Epochs | Time (hour) | Fine-tuning Recipe | Fine-tuning Top-1 Acc. (%) |
| --- | --- | --- | --- | --- | --- |
| - | - | - | - | ori. | 74.5 |
| - | - | - | - | impr. | 75.8 |
| Supervised (Steiner et al., 2021) | IN21K w/ labels | 30 | 20 | impr. | 76.9 |
| Supervised (Steiner et al., 2021) | IN21K w/ labels | 300 | 200 | impr. | 77.8 |
| MoCo-v3 (Chen et al., 2021a) | IN1K w/o labels | 400 | 52 | impr. | 76.8† |
| MAE (He et al., 2021) | IN1K w/o labels | 400 | 23 | impr. | 78.0 |
+ +$\dagger$ Global average pooling is used instead of the default configuration based on the class token during the fine-tuning. See Appendix A.1 for details. + +How do the enhanced ViTs with pre-training rank among SOTA lightweight networks? To answer the question, we further compare the enhanced ViT-Tiny with MAE pre-training to previous lightweight ConvNets and ViT derivatives. We report top-1 accuracy along with the model parameter count and the throughput in Tab. 3. We denote the fine-tuned model based on MAE-Tiny as MAETiny-FT. Specifically, we extend the fine-tuning epochs to 1000 following Touvron et al. (2021a) and adopt relative position embedding. Under this strong fine-tuning recipe, the pre-training still contributes a 1.2 performance gain, ultimately reaching $79.0\%$ top-1 accuracy. It sets a new record for lightweight vanilla ViTs, even without distillation during the supervised training phase on IN1K. It can also be seen that the pre-training can accelerate the downstream convergence, which helps to surpass that trained from scratch for 1000 epochs $(77.8\%)$ with only 300-epoch fine-tuning $(78.5\%)$ . + +We conclude that the enhanced ViT-Tiny is on par with or even outperforms most previous ConvNets and ViT derivatives with comparable parameters or throughput. This demonstrates that we can also achieve SOTA performance based on a naive network architecture by adopting proper pre-training, rather than designing complex ones. Significantly, naive architecture usually has faster inference speed and is friendly to deployment. + +We also notice that there are some works applying supervised pre-training (Ridnik et al., 2021), CL-based self-supervised pre-training (Fang et al., 2020) and MIM-based self-supervised pre-training (Woo et al., 2023) on lightweight ConvNets. However, we find that ViT-Tiny benefits more from the pre-training (e.g., +1.2 vs. +0.5 for ConvNeXt V2-F). 
We attribute this to the plain architecture of ViT-Tiny: with less hand-crafted design, it may possess more model capacity.

Can the pre-training benefit from more data? One may be curious about whether it is possible to achieve better downstream performance by involving more pre-training data, as it does on large models. Unfortunately, the answer

Table 2. Effect of pre-training data. Top-1 accuracy is reported.
| Datasets | MoCo-v3 | MAE |
| --- | --- | --- |
| IN1K | 76.8 | 78.0 |
| 1% IN1K | 76.2 (-0.6) | 77.9 (-0.1) |
| 10% IN1K | 76.5 (-0.3) | 78.0 (+0.0) |
| IN1K-LT | 76.1 (-0.7) | 77.9 (-0.1) |
| IN21K | 76.9 (+0.1) | 78.0 (+0.0) |
+ +is no for the examined pre-training methods. We consider IN21K, a much larger dataset. The number of pre-training iterations is kept constant for a fair comparison. However, few improvements are observed for both MoCo-v3 and MAE as shown in Tab. 2. We further consider two subsets of IN1K containing $1\%$ and $10\%$ of the total examples (1% IN1K and $10\%$ IN1K) balanced in terms of classes (Assran et al., 2021) and one subset with long-tailed class distribution (Liu et al., 2019) (IN1K-LT). Surprisingly, marginal performance declines are observed for MAE when pre-training on these subsets, showing more robustness than MoCo-v3 in terms of the pre-training data scale and class distribution. + +# 3.2. Benchmarks on Transfer Performance + +We further examine the transferability of these models pretrained on IN1K, involving their transfer performance on some other classification tasks and dense prediction tasks. In addition to the self-supervised MAE-Tiny and MoCov3-Tiny, DeiT-Tiny is also involved, as a fully-supervised counterpart which is trained on IN1K for 300 epochs. + +Can the pre-trained models transfer well on data-insufficient tasks? We introduce several classification tasks (Nilsback & Zisserman, 2008; Parkhi et al., 2012; Maji et al., 2013; Krause et al., 2013; Krizhevsky et al., 2009; Van Horn et al., 2018) to investigate their transferability. We conduct the transfer evaluation by fine-tuning these pre-trained models on these datasets (see Appendix A.4 for more details). As shown in Tab. 4, using various pre-training methods shows better performance than using random initialization, but the relative superiority and inferiority comparisons between these pre-training methods exhibit distinct characteristics from those on ImageNet. We find that down- + +Table 3. Comparisons with previous SOTA networks on ImageNet-1k. We report top-1 accuracy along with throughput and parameter count. 
The throughput is borrowed from timm (Wightman, 2019), measured on a single RTX 3090 GPU with a batch size fixed to 1024 and mixed precision. $\dagger$ indicates that distillation is adopted during the supervised training (or fine-tuning). \* indicates the original architecture of ViT-Tiny (the number of attention heads is 3).
| Methods | pre-train data | fine-tuning epochs | #param. | throughput (image/s) | Accuracy Top-1 (%) |
| --- | --- | --- | --- | --- | --- |
| **ConvNets** |  |  |  |  |  |
| ResNet-18 (He et al., 2016) | - | 100 | 11.7M | 8951 | 69.7 |
| ResNet-50 (He et al., 2016; Wightman et al., 2021) | - | 600 | 25.6M | 2696 | 80.4 |
| EfficientNet-B0 (Tan & Le, 2019) | - | 450 | 5.3M | 5369 | 77.7 |
| EfficientNet-B0 (Fang et al., 2020) | IN1K w/o labels | 450 | 5.3M | 5369 | 77.2 (-0.5) |
| EfficientNet-B1 (Tan & Le, 2019) | - | 450 | 7.8M | 2953 | 78.8 |
| MobileNet-v2 (Sandler et al., 2018) | - | 480 | 3.5M | 7909 | 72.0 |
| MobileNet-v3 (Howard et al., 2019) | - | 600 | 5.5M | 9113 | 75.2 |
| MobileNet-v3† (Ridnik et al., 2021) | IN21K | 600 | 5.5M | 9113 | 78.0 |
| ConvNeXt V1-F (Liu et al., 2022) | - | 600 | 5.2M | - | 77.5 |
| ConvNeXt V2-F (Woo et al., 2023) | - | 600 | 5.2M | 1816 | 78.0 |
| ConvNeXt V2-F (Woo et al., 2023) | IN1K w/o labels | 600 | 5.2M | 1816 | 78.5 (+0.5) |
| **Vision Transformers Derivative** |  |  |  |  |  |
| LeViT-128 (Graham et al., 2021) | - | 1000 | 9.2M | 13276 | 78.6 |
| LeViT-192 (Graham et al., 2021) | - | 1000 | 11.0M | 11389 | 80.0 |
| XCiT-T12/16† (Ali et al., 2021) | - | 400 | 6.7M | 3157 | 78.6 |
| PiT-Ti† (Heo et al., 2021) | - | 1000 | 5.1M | 4547 | 76.4 |
| CaiT-XXS-24† (Touvron et al., 2021b) | - | 400 | 12.0M | 1351 | 78.4 |
| Swin-1G (Liu et al., 2021; Chen et al., 2021b) | - | 450 | 7.3M | - | 77.3 |
| Mobile-Former-294M (Chen et al., 2021b) | - | 450 | 11.4M | - | 77.9 |
| MobileViT-S (Mehta & Rastegari, 2022) | - | 300 | 5.6M | 1900 | 78.3 |
| EdgeViT-XS (Pan et al., 2022) | - | 300 | 6.7M | - | 77.5 |
| **Vanilla Vision Transformers** |  |  |  |  |  |
| DeiT-Tiny\* (Touvron et al., 2021a) | - | 300 | 5.7M | 4844 | 72.2 |
| DeiT-Tiny\*† (Touvron et al., 2021a) | - | 1000 | 5.7M | 4764 | 76.6 |
| DeiT-Tiny | - | 300 | 5.7M | 4020 | 76.2 |
| MAE-Tiny-FT | IN1K w/o labels | 300 | 5.7M | 4020 | 78.5 (+2.3) |
| DeiT-Tiny | - | 1000 | 5.7M | 4020 | 77.8 |
| MAE-Tiny-FT | IN1K w/o labels | 1000 | 5.7M | 4020 | 79.0 (+1.2) |
+ +Table 4. Transfer evaluation on classification tasks and dense-prediction tasks. Self-supervised pre-training approaches generally show inferior performance to the fully-supervised counterpart. Top-1 accuracy is reported for classification tasks and AP is reported for object detection (det.) and instance segmentation (seg.) tasks. The description of each dataset is represented as (train-size/test-size/#classes). + +
| Init. \ Datasets | Flowers (2k/6k/102) | Pets (4k/4k/37) | Aircraft (7k/3k/100) | Cars (8k/8k/196) | CIFAR100 (50k/10k/100) | iNat18 (438k/24k/8142) | COCO (det.) (118k/50k/80) | COCO (seg.) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Random | 30.2 | 26.1 | 9.4 | 6.8 | 42.7 | 58.7 | 32.7 | 28.9 |
| supervised DeiT-Tiny | 96.4 | 93.1 | 73.5 | 85.6 | 85.8 | 63.6 | 40.4 | 35.5 |
| self-supervised MoCov3-Tiny | 94.8 | 87.8 | 73.7 | 83.9 | 83.9 | 54.5 | 39.7 | 35.1 |
| self-supervised MAE-Tiny | 85.8 | 76.5 | 64.6 | 78.8 | 78.9 | 60.6 | 39.9 | 35.4 |
stream data scale matters. The self-supervised pre-training approaches achieve downstream performance far behind the fully-supervised counterpart, while the performance gap is narrowed more or less as the data scale of the downstream task increases. Moreover, MAE even shows inferior results to MoCo-v3. We conjecture that this is due to their different layer behaviors during pre-training and fine-tuning, which will be discussed in detail in the following section.

Can the pre-trained models transfer well on dense prediction tasks? For a more thorough study, we further conduct evaluations on downstream object detection and segmentation tasks on COCO (Lin et al., 2014), based on Li et al. (2021) (see Appendix A.5 for details) with different pre-trained models as initialization of the backbone. The results are shown in Tab. 4. The self-supervised pre-training also lags behind the fully-supervised counterpart.

# 4. Revealing the Secrets of the Pre-Training

In this section, we introduce some model analysis methods to study the pattern of layer behaviors during pre-training and fine-tuning, and investigate what matters for downstream performance.

# 4.1. Layer Representation Analyses

We first adopt the Centered Kernel Alignment (CKA) method (Cortes et al., 2012; Nguyen et al., 2020) to analyze the layer representation similarity across and within networks. Specifically, CKA computes the normalized similarity in terms of the Hilbert-Schmidt Independence Criterion (HSIC (Song et al., 2012)) between two feature maps or representations, which is invariant to the orthogonal transformation of representations and isotropic scaling (detailed in Appendix A.6).

Lower layers matter more than higher ones if sufficient downstream data is provided. We visualize the layer representation similarity between several pre-trained models and DeiT-Tiny as heatmaps in Fig. 1.
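For reference, the linear-kernel form of this similarity score can be computed in a few lines. The sketch below is illustrative, with random features standing in for two layers' representations; the second call checks the stated invariances to orthogonal transformation and isotropic scaling.

```python
import numpy as np

def linear_cka(X, Y):
    """Linear-kernel CKA between representations X (n, d1) and Y (n, d2):
    normalized HSIC, invariant to orthogonal transformations and
    isotropic scaling of either representation."""
    X = X - X.mean(axis=0)                       # center each feature
    Y = Y - Y.mean(axis=0)
    hsic = np.linalg.norm(Y.T @ X, "fro") ** 2   # HSIC(X, Y) up to scale
    norm_x = np.linalg.norm(X.T @ X, "fro")
    norm_y = np.linalg.norm(Y.T @ Y, "fro")
    return hsic / (norm_x * norm_y)

rng = np.random.default_rng(0)
X = rng.standard_normal((64, 192))               # e.g. 64 samples, dim 192
Q, _ = np.linalg.qr(rng.standard_normal((192, 192)))  # random rotation
print(linear_cka(X, X))                          # ~1: identical representations
print(linear_cka(X, 3.0 * X @ Q))                # ~1: rotated and rescaled copy
```

Heatmaps like those in Fig. 1 are then obtained by evaluating this score for every pair of layers of the two models under comparison.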
We choose DeiT-Tiny, a classification model fully-supervisedly trained on IN1K, as the reference because we consider a higher similarity of the examined model's layers to those of DeiT-Tiny to indicate more relevance to recognition. Although the similarity does not directly indicate whether the downstream performance is good, it indeed reflects the pattern of layer representation to a certain extent. The similarity within DeiT-Tiny is also presented (the left column).

First, we observe a relatively high similarity between MAE-Tiny and DeiT-Tiny for lower layers, but low similarity for higher layers. In Appendix B.1, we observe similar phenomena with several additional supervisedly trained ViTs as the reference models. It indicates that fewer semantics are extracted by MAE-Tiny at a more abstract level in higher layers. In contrast, MoCov3-Tiny aligns with DeiT-Tiny well across almost all layers. However, the fine-tuning evaluation in Tab. 1 shows that adopting MAE-Tiny as initialization improves the performance more significantly than MoCov3-Tiny. Thus, we hypothesize that lower layers matter much more than higher ones for the pre-trained models. To verify the hypothesis, we design another experiment by only reserving several leading blocks of the pre-trained models and randomly initializing the others, and then fine-tuning them on IN1K (for the sake of simplicity, we only fine-tune these models for 100 epochs). Fig. 2 shows that reserving only a certain number of leading blocks achieves a significant performance gain over randomly initializing all the blocks (i.e., totally training from scratch) for both MAE-Tiny and MoCov3-Tiny. However, further reserving higher layers leads to only marginal gains for MAE-Tiny and MoCov3-Tiny, which supports our hypothesis.

Higher layers matter in data-insufficient downstream tasks.
Previous works (Touvron et al., 2021a; Raghu et al., 2021) demonstrate the importance of a relatively large dataset scale for fully-supervised high-performance ViTs with large model sizes. We observe a similar phenomenon on lightweight ViTs even when self-supervised pre-training is adopted, as discussed in Sec. 3.2. It motivates us to study the key factor of downstream performance on data-insufficient tasks.

We conduct similar experiments as those in Fig. 2 on small-scale downstream datasets. The results are shown in Fig. 3. We observe consistent performance improvement as the number of reserved pre-trained models' blocks increases. And the smaller the dataset scale, the more the performance benefits from the higher layers. It demonstrates that higher layers are still valuable and matter in data-insufficient downstream tasks. Furthermore, we observe comparable transfer performance for MAE-Tiny and MoCov3-Tiny when only a certain number of lower layers are reserved, while MoCov3-Tiny surpasses MAE-Tiny when higher layers are further reserved. It indicates that the higher layers of MoCov3-Tiny work better than those of MAE-Tiny on data-insufficient downstream tasks, which is also consistent with our CKA-based analyses shown in Fig. 1: MoCov3-Tiny learns more semantics at an abstract level relevant to recognition in higher layers (high similarity to the reference recognition models in higher layers) than MAE-Tiny.

# 4.2. Attention Map Analyses

The attention maps reveal the behaviors for aggregating information in the attention mechanism of ViTs, and are computed from the compatibility of queries and keys by the dot-product operation. We introduce two metrics for further analyses of the pre-trained models, i.e., attention distance and attention entropy.
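Both metrics, formalized below, are simple reductions over a head's post-softmax attention map. The following sketch is illustrative (a 14x14 token grid without the class token, and synthetic attention logits): a uniform map yields high distance and entropy, while a distance-penalized map yields low values of both.

```python
import numpy as np

def softmax(a, axis):
    e = np.exp(a - a.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

side = 14                                    # 14x14 token grid (class token omitted)
ys, xs = np.divmod(np.arange(side * side), side)
coords = np.stack([ys, xs], axis=1).astype(float)
# G[i, j]: Euclidean distance between the spatial locations of tokens i and j
G = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)

def attn_distance_entropy(A):
    """Per-token attention distance and entropy for one head's raw
    attention logits A of shape (l, l); token j attends over axis i."""
    P = softmax(A, axis=0)                   # column j: token j's distribution
    dist = (P * G).sum(axis=0)               # D_j = sum_i P_ij * G_ij
    ent = -(P * np.log(P + 1e-12)).sum(axis=0)
    return dist, ent

d_uni, e_uni = attn_distance_entropy(np.zeros((side**2, side**2)))  # uniform attention
d_loc, e_loc = attn_distance_entropy(-5.0 * G)                      # sharply local attention
print(d_uni.mean(), e_uni.mean())            # global, broad:  high distance/entropy
print(d_loc.mean(), e_loc.mean())            # local, focused: low distance/entropy
```

In the paper's plots (Fig. 4), these per-token values are averaged over all tokens and visualized per head and per layer.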
The attention distance for the $j$ -th token of $h$ -th head is calculated as: + +$$ +\boldsymbol {D} _ {h, j} = \sum_ {i} \operatorname {s o f t m a x} \left(\boldsymbol {A} _ {h}\right) _ {i, j} \boldsymbol {G} _ {i, j}, \tag {1} +$$ + +where $\mathbf{A}_h \in \mathbb{R}^{l \times l}$ is the attention map for the $h$ -th attention head, and $\mathbf{G}_{i,j}$ is the Euclidean distance between the spatial locations of the $i$ -th and $j$ -th tokens. $l$ is the number of + +![](images/2950b17be003ba6ad5c566596a5b2a9217d533adbc4e203d096086abbcf72c80.jpg) +Figure 1. Layer representation similarity within and across models as heatmaps, with x and y axes indexing the layers (the 0 index indicates the patch embedding layer), and higher values indicate higher similarity. + +![](images/0fdc7a4ac9c0fddab0cc8ef6690edc5cc72964ddcd5c4a8cab6513a5c29c1124.jpg) + +![](images/27dba6c0a698d625c93a31827ae990e1d9b7e240ce7129ba619778e4562abf89.jpg) +Figure 2. Lower layers of pre-trained models contribute to most gains on downstream ImageNet dataset. + +![](images/a8613789e630fefa65b586d2dd53ca2fc7bdf3d7522e4e3c0ad815454acac8a8.jpg) + +![](images/65510b498963f4597c353152666980d895fe5c4ea2620973dd0c28f37b682733.jpg) +Figure 3. The contributions on performance gain from higher layers of pre-trained models increase as the downstream dataset scale shrinks, which indicates that higher layers matter in data-insufficient downstream tasks. + +![](images/800972bc023cc520195ded29c828f2f7d1b618bad6640ed8bdbd07341ba21a92.jpg) + +![](images/39b7b5142e37ca726ace601fadb1b531798575e1dbdf0662e63de8479d496b91.jpg) + +tokens. And the attention entropy is calculated as: + +$$ +\boldsymbol {E} _ {h, j} = - \sum_ {i} \operatorname {s o f t m a x} \left(\boldsymbol {A} _ {h}\right) _ {i, j} \log \left(\operatorname {s o f t m a x} \left(\boldsymbol {A} _ {h}\right) _ {i, j}\right), \tag {2} +$$ + +Specifically, the attention distance reveals how much local vs. 
global information is aggregated, and a lower distance indicates that each token focuses more on neighbor tokens. The attention entropy reveals the concentration of the attention distribution, and lower entropy indicates that each token attends to fewer tokens. We analyze the distributions of the average attention distance and entropy across all the tokens in different attention heads, as shown in Fig. 4. + +The pre-training with MAE makes the attention of the downstream models more local and concentrated. First, we compare MAE-Tiny-FT with DeiT-Tiny. The former adopts MAE-Tiny as initialization and then is fine-tuned on IN1K, and the latter is supervisedly trained from scratch (Random Init.) on IN1K. As shown in Fig. 4, we observe very similar attention behaviors between them, except that the attention of MAE-Tiny-FT (the purple box-whisker) is more local (with lower attention distance) and concentrated (with lower attention entropy) in middle layers compared with DeiT-Tiny (the red box-whisker). We attribute it to the introduction of the MAE-Tiny as pre-training (the orange box-whisker), which has lower attention distance and entropy, and may bring locality inductive bias compared with random initialization (the blue box-whisker). It is noteworthy that the locality inductive bias does not mean that + +tokens in all attention heads attend to solely a few nearby tokens. The attention distance and entropy for different heads are still distributed in a wide range (except for several last layers), which indicates that the heads have diverse specializations, making the models aggregate both local and global tokens with both concentrated and broad focuses. + +Then, we focus on the comparison between MAE-Tiny and MoCov3-Tiny, trying to give some explanations for their diverse downstream performances observed in Sec. 3. As shown in Fig. 
4, we observe that MoCov3-Tiny (the green box-whisker) generally has more global and broad attention than MAE-Tiny (the orange box-whisker). Even several leading blocks have a narrower range of attention distance and entropy than MAE-Tiny. We think this characteristic of MoCov3-Tiny makes the downstream fine-tuning with it as initialization take "shortcuts", i.e., directly paying attention to global features and overlooking local patterns, which may be unfavorable for fine-grained recognition. It leads to inferior downstream performance on ImageNet, but fair on Flowers, CIFAR100, etc., for which the "shortcuts" may be barely adequate. As for MAE-Tiny, its distinct behaviors in higher layers with rather low attention distance and entropy may make it hard to transfer to data-insufficient downstream tasks, thus resulting in inferior performance on these tasks. + +# 5. Distillation Improves Pre-Trained Models + +In the previous section, we have conjectured that it is hard for MAE to learn good representation relevant to recognition + +![](images/13bdb0208c9ea74e3475d866a9095e3b93c41796f3d52064c8f66d63d4a1e160.jpg) + +![](images/dfec2259ce7acbfd7cd4a5b1c18794168713d6f60828b461dd595e66a5617812.jpg) + +![](images/407337fdac4b9437d1fd8214b557adb9e5257c0d28a0e4255e74a2d680d1e630.jpg) + +![](images/fdd673a2792e95f55d9c0e283c1f8618b8a4095292cac03ed69dfbc34b03c6fc.jpg) +Figure 4. Attention distance and entropy analyses. We visualize the distributions of the average attention distance and entropy across all tokens in different attention heads w.r.t. the layer number with box-whisker plots. + +![](images/6d01caa1173f9f2909d10c0357e21098f33adda3b346822bc6fb5a3c543db2c7.jpg) +Figure 5. Distillation compresses the good representation of the teacher (MAE-Base) to the student (D-MAE-Tiny). + +![](images/466c6f07492cd27e52c243cdd6dbb070a257720a80a6441fb4b82941af6c0560.jpg) + +![](images/50542144dc423f8e2d36a8b8fb8f7543ee483f05a1d33558bc1e041b2c1ab336.jpg) +Figure 6. 
Distillation on attention maps of higher layers improves performance most.

in higher layers, which results in unsatisfactory performance on data-insufficient downstream tasks. A natural question is whether it can gain more semantic information by scaling up the model. We further examine a large pre-trained model, MAE-Base (He et al., 2021), and find that it achieves a better alignment with the reference model, as shown in the top subfigure of Fig. 5. This indicates that it is possible to extract features relevant to recognition in higher layers for the scaled-up encoder in MAE pre-training. These observations motivate us to compress the knowledge of large pre-trained models into tiny ones with knowledge distillation under the MIM framework.

Distillation methods. Specifically, a pre-trained MAE-Base (He et al., 2021) is introduced as the teacher network. The distillation loss is constructed based on the similarity between the attention maps of the corresponding teacher's and student's layers. It is formulated as:

$$
L _ {\mathrm {a t t n}} = \operatorname {M S E} \left(\boldsymbol {A} ^ {T}, \boldsymbol {M} \boldsymbol {A} ^ {S}\right), \tag {3}
$$

where $\boldsymbol{A}^T\in \mathbb{R}^{h\times l\times l}$ and $\boldsymbol{A}^S\in \mathbb{R}^{h'\times l\times l}$ refer to the attention maps of the corresponding teacher's and student's layers, with $h$ and $h'$ attention heads respectively, and $l$ is the number of tokens. A learnable mapping matrix $\boldsymbol{M}\in \mathbb{R}^{h\times h'}$ is introduced to align the number of heads, and MSE denotes the mean squared error.

During the pre-training, the teacher processes the same unmasked image patches as the student encoder. The parameters of the student network are updated based on the joint backward gradients from the distillation loss and the original MAE's reconstruction loss, while the teacher's parameters remain frozen throughout the pre-training process.

Distill on lower or higher layers?
We first examine which pair of the teacher's and student's layers the above layer-wise distillation should be applied to for the largest performance gain. We conduct experiments by constructing the above attention-based distillation loss between pairs of layers at $1/4$, $2/4$, $3/4$, or $4/4$ depth of the teacher and student respectively, i.e., the 3rd, 6th, 9th, or 12th layer for both the teacher (MAE-Base) and the student (MAE-Tiny). As shown in Fig. 6, distilling on the attention maps of the last transformer blocks promotes the performance most, surpassing distillation on lower layers (for the sake of simplicity, we only fine-tune the pre-trained models on IN1K for 100 epochs). It is consistent with the analyses

Table 5. Distillation improves downstream performance on classification tasks as well as object detection and segmentation tasks. Top-1 accuracy is reported for classification tasks, and AP is reported for object detection (det.) and instance segmentation (seg.) tasks.
| Pre-training | Init. | Flowers | Pets | Aircraft | Cars | CIFAR100 | iNat18 | ImageNet | COCO (det.) | COCO (seg.) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| supervised | DeiT-Tiny | 96.4 | 93.1 | 73.5 | 85.6 | 85.8 | 63.6 | - | 40.4 | 35.5 |
| self-supervised | MAE-Tiny | 85.8 | 76.5 | 64.6 | 78.8 | 78.9 | 60.6 | 78.0 | 39.9 | 35.4 |
| self-supervised | D-MAE-Tiny | 95.2 (+9.4) | 89.1 (+12.6) | 79.2 (+14.6) | 87.5 (+8.7) | 85.0 (+6.1) | 63.6 (+3.0) | 78.4 (+0.4) | 42.3 (+2.4) | 37.4 (+2.0) |
in Sec. 4. Specifically, the lower layers already learn good representations themselves during the pre-training with MAE, so distilling on these layers contributes only marginal improvement, while the higher layers rely on a good teacher to guide them to capture rich semantic features.

Distillation improves downstream performance. We further evaluate the distilled pre-trained model on several downstream tasks. For simplicity, we only apply distillation on the last layers. The resulting model is denoted as D-MAE-Tiny. The visualization at the bottom of Fig. 5 shows that the teacher's good representation relevant to recognition is compressed into the student; in particular, the quality of the higher layers is improved. The distillation contributes to better downstream performance, as shown in Tab. 5, especially on data-insufficient classification tasks and dense prediction tasks. In Appendix C.3, we also show that our distillation technique can help ViT students beyond ViT-Tiny achieve better downstream performance.

# 6. Related Works

Self-supervised learning (SSL) focuses on different pretext tasks (Gidaris et al., 2018; Zhang et al., 2016; Noroozi & Favaro, 2016; Dosovitskiy et al., 2014) for pre-training without using manually labeled data. Among them, contrastive learning (CL) has been popular and shows promising results on various convolutional networks (ConvNets) (He et al., 2020; Chen et al., 2020; Grill et al., 2020; Caron et al., 2020) and ViTs (Chen et al., 2021a; Caron et al., 2021). Recently, methods based on masked image modeling (MIM) have achieved state-of-the-art results on ViTs (He et al., 2021; Bao et al., 2021; Zhou et al., 2022). It has been demonstrated that these methods scale up well to larger models, while their performance on lightweight ViTs is seldom investigated.
Vision Transformers (ViTs) (Dosovitskiy et al., 2020) apply a Transformer architecture (a stack of attention modules (Vaswani et al., 2017)) to image patches and show very competitive results on various visual tasks (Touvron et al., 2021a; Liu et al., 2021; Li et al., 2022). The performance of ViTs has been largely improved thanks to better training recipes (Touvron et al., 2021a; Steiner et al., 2021; Touvron et al., 2022). As for lightweight ViTs, most works focus on integrating ViTs and ConvNets (Graham et al., 2021; Heo et al., 2021; Mehta & Rastegari, 2022; Chen et al., 2021b), while few focus on how to optimize the networks themselves.

Knowledge Distillation is a mainstream approach for model compression (Bucilua et al., 2006), in which a large teacher network is trained first and then a more compact student network is optimized to approximate the teacher (Hinton et al., 2015; Romero et al., 2014). Touvron et al. (2021a) achieves better accuracy on ViTs by adopting a ConvNet as the teacher. With regard to the compression of pre-trained networks, some works (Sanh et al., 2019; Jiao et al., 2020; Wang et al., 2021; Sun et al., 2020) aim to distill large-scale pre-trained language models. In computer vision, a series of works (Fang et al., 2020; Abbasi Koohpayegani et al., 2020; Choi et al., 2021) focus on transferring knowledge of large CL-based pre-trained networks to lightweight ConvNets. Thus far, few works focus on improving the quality of lightweight MIM-based pre-trained ViTs by distillation.

# 7. Discussions

Limitations. Our study is restricted to classification tasks and some dense-prediction tasks. We leave the exploration of more tasks for future work.

Conclusions.
We investigate the self-supervised pre-training of lightweight ViTs and demonstrate the usefulness of an advanced pre-training strategy in improving downstream performance, making these models even comparable to most delicately designed SOTA networks on ImageNet. Some properties of the pre-training are revealed, e.g., these methods fail to benefit from large-scale pre-training data and show more dependency on the downstream dataset scale. We also present some insights on what matters for downstream performance, which may indicate future directions for improving the pre-training of lightweight models; the value of these insights has already been demonstrated in guiding the design of our proposed distillation strategy, which achieves much better downstream performance. We expect our research to provide useful experience and advance the study of self-supervised learning on lightweight ViTs.

Acknowledgment. The authors would like to thank the anonymous reviewers for their valuable comments and suggestions. This work was supported in part by the National Key R&D Program of China (Grant No. 2020AAA0105802, 2020AAA0105800), the Natural Science Foundation of China (Grant No. U22B2056, 61972394, U2033210, 62036011, 62192782, 61721004, 62172413), the Beijing Natural Science Foundation (Grant No. L223003, JQ22014), the Major Projects of Guangdong Education Department for Foundation Research and Applied Research (Grant No. 2017KZDXM081, 2018KZDXM066), the Guangdong Provincial University Innovation Team Project (Grant No. 2020KCXTD045), the Zhejiang Provincial Natural Science Foundation (Grant No. LDT23F02024F02). Jin Gao was also supported in part by the Youth Innovation Promotion Association, CAS.

# References

Abbasi Koohpayegani, S., Tejankar, A., and Pirsiavash, H. Compress: Self-supervised learning by compressing representations. Adv. Neural Inform. Process. Syst., 33:12980-12992, 2020.
+Ali, A., Touvron, H., Caron, M., Bojanowski, P., Douze, M., Joulin, A., Laptev, I., Neverova, N., Synnaeve, G., Verbeek, J., et al. Xcit: Cross-covariance image transformers. Adv. Neural Inform. Process. Syst., 34, 2021. +Assran, M., Caron, M., Misra, I., Bojanowski, P., Joulin, A., Ballas, N., and Rabbat, M. Semi-supervised learning of visual features by non-parametrically predicting view assignments with support samples. In Int. Conf. Comput. Vis., pp. 8443-8452, 2021. +Ba, J. L., Kiros, J. R., and Hinton, G. E. Layer normalization. ArXiv, abs/1607.06450, 2016. +Bao, H., Dong, L., and Wei, F. Beit: Bert pre-training of image transformers. ArXiv, abs/2106.08254, 2021. +Bucilua, C., Caruana, R., and Niculescu-Mizil, A. Model compression. In ACM Int. Conf. on Knowledge Discovery and Data Mining, pp. 535-541, 2006. +Caron, M., Misra, I., Mairal, J., Goyal, P., Bojanowski, P., and Joulin, A. Unsupervised learning of visual features by contrasting cluster assignments. In Adv. Neural Inform. Process. Syst., 2020. +Caron, M., Touvron, H., Misra, I., Jégou, H., Mairal, J., Bojanowski, P., and Joulin, A. Emerging properties in self-supervised vision transformers. In Int. Conf. Comput. Vis., pp. 9650-9660, 2021. +Chen, X., Fan, H., Girshick, R., and He, K. Improved baselines with momentum contrastive learning. *ArXiv*, abs/2003.04297, 2020. + +Chen, X., Xie, S., and He, K. An empirical study of training self-supervised vision transformers. In Int. Conf. Comput. Vis., pp. 9640-9649, 2021a. +Chen, Y., Dai, X., Chen, D., Liu, M., Dong, X., Yuan, L., and Liu, Z. Mobile-former: Bridging mobilenet and transformer. ArXiv, abs/2108.05895, 2021b. +Cho, J. H. and Hariharan, B. On the efficacy of knowledge distillation. In Int. Conf. Comput. Vis., pp. 4794-4802, 2019. +Choi, H. M., Kang, H., and Oh, D. Unsupervised representation transfer for small networks: I believe i can distill on-the-fly. In Adv. Neural Inform. Process. Syst., 2021. +Cortes, C., Mohri, M., and Rostamizadeh, A. 
Algorithms for learning kernels based on centered alignment. The Journal of Machine Learning Research, 13:795-828, 2012.
Cubuk, E. D., Zoph, B., Shlens, J., and Le, Q. Randaugment: Practical automated data augmentation with a reduced search space. In Adv. Neural Inform. Process. Syst., volume 33, pp. 18613-18624, 2020.
Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., and Fei-Fei, L. Imagenet: A large-scale hierarchical image database. In IEEE Conf. Comput. Vis. Pattern Recog., pp. 248-255, 2009.
Dosovitskiy, A., Springenberg, J. T., Riedmiller, M., and Brox, T. Discriminative unsupervised feature learning with convolutional neural networks. Adv. Neural Inform. Process. Syst., 27:766-774, 2014.
Dosovitskiy, A., Beyer, L., Kolesnikov, A., Weissenborn, D., Zhai, X., Unterthiner, T., Dehghani, M., Minderer, M., Heigold, G., Gelly, S., et al. An image is worth 16x16 words: Transformers for image recognition at scale. In Int. Conf. Learn. Represent., 2020.
Fang, Z., Wang, J., Wang, L., Zhang, L., Yang, Y., and Liu, Z. Seed: Self-supervised distillation for visual representation. In Int. Conf. Learn. Represent., 2020.
Gidaris, S., Singh, P., and Komodakis, N. Unsupervised representation learning by predicting image rotations. In Int. Conf. Learn. Represent., 2018.
Goyal, P., Dollár, P., Girshick, R., Noordhuis, P., Wesolowski, L., Kyrola, A., Tulloch, A., Jia, Y., and He, K. Accurate, large minibatch sgd: Training imagenet in 1 hour. ArXiv, abs/1706.02677, 2017.
Graham, B., El-Nouby, A., Touvron, H., Stock, P., Joulin, A., Jégou, H., and Douze, M. Levit: A vision transformer in convnet's clothing for faster inference. In Int. Conf. Comput. Vis., pp. 12259-12269, 2021.
Grill, J.-B., Strub, F., Altché, F., Tallec, C., Richemond, P., Buchatskaya, E., Doersch, C., Pires, B., Guo, Z., Azar, M., et al. Bootstrap your own latent: A new approach to self-supervised learning. In Adv. Neural Inform. Process. Syst., 2020.
He, K., Zhang, X., Ren, S., and Sun, J.
Deep residual learning for image recognition. In IEEE Conf. Comput. Vis. Pattern Recog., pp. 770-778, 2016. +He, K., Fan, H., Wu, Y., Xie, S., and Girshick, R. Momentum contrast for unsupervised visual representation learning. In IEEE Conf. Comput. Vis. Pattern Recog., pp. 9729-9738, 2020. +He, K., Chen, X., Xie, S., Li, Y., Dollar, P., and Girshick, R. Masked autoencoders are scalable vision learners. ArXiv, abs/2111.06377, 2021. +Heo, B., Yun, S., Han, D., Chun, S., Choe, J., and Oh, S. J. Rethinking spatial dimensions of vision transformers. In Int. Conf. Comput. Vis., pp. 11936-11945, 2021. +Hinton, G., Vinyals, O., and Dean, J. Distilling the knowledge in a neural network. ArXiv, abs/1503.02531, 2015. +Howard, A., Sandler, M., Chu, G., Chen, L.-C., Chen, B., Tan, M., Wang, W., Zhu, Y., Pang, R., Vasudevan, V., Le, Q. V., and Adam, H. Searching for mobilenetv3. In Int. Conf. Comput. Vis., 2019. +Huang, G., Sun, Y., Liu, Z., Sedra, D., and Weinberger, K. Q. Deep networks with stochastic depth. In *Eur. Conf. Comput. Vis.*, pp. 646-661, 2016. +Jiao, X., Yin, Y., Shang, L., Jiang, X., Chen, X., Li, L., Wang, F., and Liu, Q. Tinybert: Distilling bert for natural language understanding. In Findings of Empirical Methods in Natural Language Process., pp. 4163-4174, 2020. +Jin, X., Peng, B., Wu, Y., Liu, Y., Liu, J., Liang, D., Yan, J., and Hu, X. Knowledge distillation via route constrained optimization. In Int. Conf. Comput. Vis., pp. 1345-1354, 2019. +Krause, J., Stark, M., Deng, J., and Fei-Fei, L. 3d object representations for fine-grained categorization. In Int. Conf. Comput. Vis. Worksh., pp. 554-561, 2013. +Krizhevsky, A. et al. Learning multiple layers of features from tiny images. Technical Report, 2009. +Li, Y., Xie, S., Chen, X., Dollar, P., He, K., and Girshick, R. Benchmarking detection transfer learning with vision transformers. *ArXiv*, abs/2111.11429, 2021. + +Li, Y., Mao, H., Girshick, R., and He, K. 
Exploring plain vision transformer backbones for object detection. ArXiv, abs/2203.16527, 2022. +Lin, T.-Y., Maire, M., Belongie, S., Hays, J., Perona, P., Ramanan, D., Dollar, P., and Zitnick, C. L. Microsoft coco: Common objects in context. In *Eur. Conf. Comput. Vis.*, pp. 740-755, 2014. +Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., and Yu, S. X. Large-scale long-tailed recognition in an open world. In IEEE Conf. Comput. Vis. Pattern Recog., 2019. +Liu, Z., Lin, Y., Cao, Y., Hu, H., Wei, Y., Zhang, Z., Lin, S., and Guo, B. Swin transformer: Hierarchical vision transformer using shifted windows. In Int. Conf. Comput. Vis., pp. 10012-10022, 2021. +Liu, Z., Mao, H., Wu, C.-Y., Feichtenhofer, C., Darrell, T., and Xie, S. A convnet for the 2020s. In IEEE Conf. Comput. Vis. Pattern Recog., pp. 11966-11976, 2022. +Loshchilov, I. and Hutter, F. Sgdr: Stochastic gradient descent with warm restarts. ArXiv, abs/1608.03983, 2016. +Maji, S., Rahtu, E., Kannala, J., Blaschko, M., and Vedaldi, A. Fine-grained visual classification of aircraft. *ArXiv*, abs/1306.5151, 2013. +Mehta, S. and Rastegari, M. Mobilevit: Light-weight, general-purpose, and mobile-friendly vision transformer. In Int. Conf. Learn. Represent., 2022. +Mirzadeh, S. I., Farajtabar, M., Li, A., Levine, N., Matsukawa, A., and Ghasemzadeh, H. Improved knowledge distillation via teacher assistant. In AAAI Conf. on Artificial Intelligence, volume 34, pp. 5191-5198, 2020. +Newell, A. and Deng, J. How useful is self-supervised pretraining for visual tasks? In IEEE Conf. Comput. Vis. Pattern Recog., pp. 7345-7354, 2020. +Nguyen, T., Raghu, M., and Kornblith, S. Do wide and deep networks learn the same things? uncovering how neural network representations vary with width and depth. In Int. Conf. Learn. Represent., 2020. +Nilsback, M.-E. and Zisserman, A. Automated flower classification over a large number of classes. In Indian Conference on Computer Vision, Graphics & Image Processing, pp. 722-729, 2008. 
Noroozi, M. and Favaro, P. Unsupervised learning of visual representations by solving jigsaw puzzles. In *Eur. Conf. Comput. Vis.*, pp. 69-84, 2016.
Pan, J., Bulat, A., Tan, F., Zhu, X., Dudziak, L., Li, H., Tzimiropoulos, G., and Martinez, B. Edgevits: Competing light-weight cnns on mobile devices with vision transformers. ArXiv, abs/2205.0343, 2022.
Parkhi, O. M., Vedaldi, A., Zisserman, A., and Jawahar, C. V. Cats and dogs. In IEEE Conf. Comput. Vis. Pattern Recog., pp. 3498-3505, 2012.
Raghu, M., Unterthiner, T., Kornblith, S., Zhang, C., and Dosovitskiy, A. Do vision transformers see like convolutional neural networks? Adv. Neural Inform. Process. Syst., 34, 2021.
Ridnik, T., Ben-Baruch, E., Noy, A., and Zelnik-Manor, L. Imagenet-21k pretraining for the masses. *ArXiv*, abs/2104.10972, 2021.
Romero, A., Ballas, N., Kahou, S. E., Chassang, A., Gatta, C., and Bengio, Y. Fitnets: Hints for thin deep nets. *ArXiv*, abs/1412.6550, 2014.
Sandler, M., Howard, A., Zhu, M., Zhmoginov, A., and Chen, L.-C. Mobilenetv2: Inverted residuals and linear bottlenecks. In IEEE Conf. Comput. Vis. Pattern Recog., pp. 4510-4520, 2018.
Sanh, V., Debut, L., Chaumond, J., and Wolf, T. Distilbert, a distilled version of bert: smaller, faster, cheaper and lighter. ArXiv, abs/1910.01108, 2019.
Song, L., Smola, A., Gretton, A., Bedo, J., and Borgwardt, K. Feature selection via dependence maximization. The Journal of Machine Learning Research, 13(5), 2012.
Steiner, A., Kolesnikov, A., Zhai, X., Wightman, R., Uszkoreit, J., and Beyer, L. How to train your vit? data, augmentation, and regularization in vision transformers. ArXiv, abs/2106.10270, 2021.
Sun, Z., Yu, H., Song, X., Liu, R., Yang, Y., and Zhou, D. Mobilebert: a compact task-agnostic bert for resource-limited devices. In Association for Computational Linguistics, pp. 2158-2170, 2020.
Tan, M. and Le, Q. Efficientnet: Rethinking model scaling for convolutional neural networks. In Int. Conf. Machine Learning., pp.
6105-6114, 2019.
Touvron, H., Cord, M., Douze, M., Massa, F., Sablayrolles, A., and Jegou, H. Training data-efficient image transformers & distillation through attention. In Int. Conf. Machine Learning., volume 139, pp. 10347-10357, 2021a.
Touvron, H., Cord, M., Sablayrolles, A., Synnaeve, G., and Jégou, H. Going deeper with image transformers. In Int. Conf. Comput. Vis., pp. 32-42, 2021b.
Touvron, H., Cord, M., and Jégou, H. Deit iii: Revenge of the vit. ArXiv, abs/2204.07118, 2022.
Van Horn, G., Mac Aodha, O., Song, Y., Cui, Y., Sun, C., Shepard, A., Adam, H., Perona, P., and Belongie, S. The inaturalist species classification and detection dataset. In IEEE Conf. Comput. Vis. Pattern Recog., 2018.
Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A. N., Kaiser, L., and Polosukhin, I. Attention is all you need. Adv. Neural Inform. Process. Syst., 30, 2017.
Wang, W., Bao, H., Huang, S., Dong, L., and Wei, F. Minilmv2: Multi-head self-attention relation distillation for compressing pretrained transformers. In Findings of Int. Joint Conf. on Natural Language Process., pp. 2140-2151, 2021.
Wightman, R. Pytorch image models. https://github.com/rwightman/pytorch-image-models, 2019.
Wightman, R., Touvron, H., and Jégou, H. Resnet strikes back: An improved training procedure in timm. ArXiv, abs/2110.00476, 2021.
Woo, S., Debnath, S., Hu, R., Chen, X., Liu, Z., Kweon, I.-S., and Xie, S. Convnext v2: Co-designing and scaling convnets with masked autoencoders. ArXiv, abs/2301.00808, 2023.
Xie, Z., Zhang, Z., Cao, Y., Lin, Y., Bao, J., Yao, Z., Dai, Q., and Hu, H. Simmim: A simple framework for masked image modeling. In IEEE Conf. Comput. Vis. Pattern Recog., 2022.
Yun, S., Han, D., Oh, S. J., Chun, S., Choe, J., and Yoo, Y. Cutmix: Regularization strategy to train strong classifiers with localizable features. In Int. Conf. Comput. Vis., pp. 6023-6032, 2019.
Zhang, H., Cisse, M., Dauphin, Y. N., and Lopez-Paz, D.
mixup: Beyond empirical risk minimization. In Int. Conf. Learn. Represent., 2018.
Zhang, R., Isola, P., and Efros, A. A. Colorful image colorization. In Eur. Conf. Comput. Vis., pp. 649-666, 2016.
Zhou, J., Wei, C., Wang, H., Shen, W., Xie, C., Yuille, A., and Kong, T. ibot: Image bert pre-training with online tokenizer. Int. Conf. Learn. Represent., 2022.

# A. Experimental Details

# A.1. Evaluation Details for MAE and MoCo-v3 on ImageNet

We follow the common practice of supervised ViT training (Touvron et al., 2021a) for fine-tuning evaluation, except for some hyper-parameters of the augmentation. The default setting is in Tab. A1. We use the linear $lr$ scaling rule (Goyal et al., 2017): $lr = \text{base\_lr} \times \text{batchsize} / 256$. We use layer-wise $lr$ decay following (Bao et al., 2021; He et al., 2021), and the decay rate is tuned separately for MAE and MoCo-v3.

Besides, we use global average pooling (GAP) after the final block during the fine-tuning of both the MAE- and MoCo-v3-based pre-trained models, which is, however, not the common practice for MoCo-v3 (Chen et al., 2021a). We adopt it because it significantly outperforms the original configuration based on a class token ($76.8\%$ vs. $73.7\%$ top-1 accuracy) for the lightweight ViT-Tiny.

Table A1. Fine-tuning evaluation settings.
| config | value |
| --- | --- |
| optimizer | AdamW |
| base learning rate | 1e-3 |
| weight decay | 0.05 |
| optimizer momentum | β1, β2 = 0.9, 0.999 |
| layer-wise lr decay (Bao et al., 2021) | 0.85 (MAE), 0.75 (MoCo-v3) |
| batch size | 1024 |
| learning rate schedule | cosine decay (Loshchilov & Hutter, 2016) |
| warmup epochs | 5 |
| training epochs | {100, 300, 1000} |
| augmentation | RandAug(10, 0.5) (Cubuk et al., 2020) |
| colorjitter | 0.3 |
| label smoothing | 0 |
| mixup (Zhang et al., 2018) | 0.2 |
| cutmix (Yun et al., 2019) | 0 |
| drop path (Huang et al., 2016) | 0 |
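As a minimal illustration of the learning-rate rules used above, the sketch below combines the linear $lr$ scaling rule with layer-wise $lr$ decay. The helper names are ours, and the convention that the topmost block keeps the full learning rate (with each earlier layer scaled down by the decay factor) is our assumption following common BEiT/MAE-style fine-tuning practice:

```python
# Sketch (not the paper's code): linear lr scaling plus layer-wise lr decay.

def scaled_lr(base_lr, batch_size):
    """Linear lr scaling rule (Goyal et al., 2017): lr = base_lr * batchsize / 256."""
    return base_lr * batch_size / 256

def layerwise_lrs(lr, num_layers, decay):
    """Per-layer lrs for embeddings + num_layers blocks; layer i gets lr * decay**(num_layers - i),
    so the patch embedding gets the smallest lr and the last block the full lr."""
    return [lr * decay ** (num_layers - i) for i in range(num_layers + 1)]

lr = scaled_lr(1e-3, 1024)         # default base lr and batch size from Tab. A1
lrs = layerwise_lrs(lr, 12, 0.85)  # 12 blocks, MAE decay rate 0.85
```

With the defaults of Tab. A1 this yields an effective top-layer lr of 4e-3, decayed geometrically toward the input layers.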
+ +Table A2. Pre-training setting for MoCo-v3. + +
| config | value |
| --- | --- |
| optimizer | AdamW |
| base learning rate | 1.5e-4 |
| weight decay | 0.1 |
| optimizer momentum | β1, β2 = 0.9, 0.999 |
| batch size | 1024 |
| learning rate schedule | cosine decay |
| warmup epochs | 40 |
| training epochs | 400 |
| momentum coefficient | 0.99 |
| temperature | 0.2 |
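The momentum coefficient in Tab. A2 parameterizes MoCo-v3's momentum (key) encoder, which is updated as an exponential moving average of the online encoder's parameters rather than by gradients. A minimal sketch (function and variable names are ours, not MoCo-v3's API):

```python
# Sketch of the momentum-encoder update: theta_k <- m * theta_k + (1 - m) * theta_q.

def ema_update(key_params, query_params, m=0.99):
    """Element-wise exponential moving average with momentum coefficient m."""
    return [m * k + (1 - m) * q for k, q in zip(key_params, query_params)]

key = [0.0, 1.0]            # toy "key encoder" parameters
query = [1.0, 1.0]          # toy "query (online) encoder" parameters
key = ema_update(key, query)
```

A large m (0.99 here) makes the key encoder evolve slowly, which stabilizes the contrastive targets.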
# A.2. Pre-Training Details of MAE

Our experimental setup largely follows that of MAE (He et al., 2021), including the optimizer, learning rate, batch size, augmentation, etc., but several basic factors and components are adjusted to fit the smaller encoder. We find that MAE prefers a much more lightweight decoder when the encoder is small; thus a decoder with only one Transformer block and a width of 192 is adopted by default. We sweep over 5 masking ratios $\{0.45, 0.55, 0.65, 0.75, 0.85\}$ and find that 0.75 achieves the best performance.

# A.3. Pre-Training Details of MoCo-v3

We reimplement MoCo-v3 (Chen et al., 2021a) with ViT-Tiny as the encoder and largely follow the original setups. The default setting is in Tab. A2.

Chen et al. (2021a) observes that instability is a major issue impacting self-supervised ViT training and causing mild degradation in accuracy, and proposes a simple trick, fixed random patch projection (the first layer of a ViT model), to improve stability in practice. However, we find that stability is not the main issue for small networks: higher performance is achieved with a learned patch projection layer. Thus, this technique is not used by default.

# A.4. Transfer Evaluation Details on Classification Tasks

We evaluate several pre-trained models with transfer learning in order to measure their generalization ability. We use 6 popular vision datasets: Flowers-102 (Flowers for short) (Nilsback & Zisserman, 2008), Oxford-IIIT Pets (Pets) (Parkhi et al., 2012), FGVC-Aircraft (Aircraft) (Maji et al., 2013), Stanford Cars (Cars) (Krause et al., 2013), CIFAR100 (Krizhevsky et al., 2009), and iNaturalist 2018 (iNat18) (Van Horn et al., 2018). For all these datasets except iNat18, we fine-tune with SGD (momentum = 0.9) and a batch size of 512. The learning rates are swept over 3 candidates and the training epochs over 2 candidates per dataset, as detailed in Tab. A3.
We adopt a cosine decay learning rate schedule (Loshchilov & Hutter, 2016) with a linear warm-up. We resize images to $224 \times 224$ and adopt random resized crop and random horizontal flipping as augmentations, without any regularization (e.g., weight decay, dropout, or the stochastic

Table A3. Transfer evaluation details.
| Dataset | Learning rate | Total epochs and warm-up epochs | Layer-wise lr decay |
| --- | --- | --- | --- |
| Flowers | {0.01, 0.03, 0.1} | {(150, 30), (250, 50)} | {1.0, 0.75} |
| Pets | {0.01, 0.03, 0.1} | {(70, 14), (150, 30)} | {1.0, 0.75} |
| Aircraft | {0.01, 0.03, 0.1} | {(50, 10), (100, 20)} | {1.0, 0.75} |
| Cars | {0.01, 0.03, 0.1} | {(50, 10), (100, 20)} | {1.0, 0.75} |
| CIFAR100 | {0.03, 0.1, 0.3} | {(25, 5), (50, 10)} | {1.0, 0.75} |
depth regularization technique (Huang et al., 2016)). For iNat18, we follow the same training configurations as those on ImageNet.

# A.5. Transfer Evaluation Details on Dense Prediction Tasks

We reproduce the setup in Li et al. (2021), except for replacing the backbone with ViT-Tiny and decreasing the input image size from 1024 to 640 to make it trainable on a single machine with 8 NVIDIA V100 GPUs. We fine-tune for up to 100 epochs on COCO (Lin et al., 2014) with different pre-trained models as the initialization of the backbone. We do not use layer-wise $lr$ decay, since we find it unhelpful for the tiny backbone on the detection tasks. The weight decay is 0.05, and the stochastic depth regularization (Huang et al., 2016) is not used.

# A.6. Analysis Methods

We adopt the Centered Kernel Alignment (CKA) metric to analyze the representation similarity $(S_{rep})$ within and across networks. Specifically, CKA takes two feature maps (or representations) $X$ and $Y$ as input and computes their normalized similarity in terms of the Hilbert-Schmidt Independence Criterion (HSIC) as

$$
S _ {r e p} (\boldsymbol {X}, \boldsymbol {Y}) = \operatorname {C K A} (\boldsymbol {K}, \boldsymbol {L}) = \frac {\operatorname {H S I C} (\boldsymbol {K} , \boldsymbol {L})}{\sqrt {\operatorname {H S I C} (\boldsymbol {K} , \boldsymbol {K}) \operatorname {H S I C} (\boldsymbol {L} , \boldsymbol {L})}}, \tag {A1}
$$

where $\pmb{K} = \pmb{X}\pmb{X}^{\mathrm{T}}$ and $\pmb{L} = \pmb{Y}\pmb{Y}^{\mathrm{T}}$ denote the Gram matrices of the two feature maps. A minibatch version using an unbiased estimator of HSIC (Nguyen et al., 2020) is adopted to work at scale with our networks. We select the normalized version of the output representation of each Transformer block (consisting of a multi-head self-attention (MHA) block and an MLP block).
Specifically, we select the feature map after the first LayerNorm (LN) (Ba et al., 2016) of the next block as the representation of this Transformer block as depicted in Fig. A1. + +![](images/e2dde7ddd385ff1f49b12bbefda66d5d513a77f254b707058ee4f61887cff35f.jpg) +Figure A1. Transformer block. + +# B. More Analyses on the Pre-Training + +# B.1. Analyses with More Models as Reference + +In Sec. 4, the analyses are mainly conducted by adopting the supervisedly trained DeiT-Tiny as the reference model. Here, we additionally introduce stronger recognition models as references to demonstrate the generalizability of our analyses. Specifically, we use ViT-Base models trained with various recipes as references, e.g., DeiT-Base (supervisedly trained on IN1K following Touvron et al. (2021a) and achieves $82.0\%$ top-1 accuracy on ImageNet), ViT-Base-21k (supervisedly trained on IN21K following Steiner et al. (2021)), ViT-Base-21k-1k (first pre-trained on IN21K and then fine-tuned on IN1K following Steiner et al. (2021), achieving $84.5\%$ top-1 accuracy on ImageNet). The layer representation similarity is presented in Fig. A2. + +First, we observe that our default reference model, DeiT-Tiny, is aligned well with these larger models (as shown in the left column of Fig. A2). We conjecture that the supervisedly trained ViTs generally have similar layer representation structures. Based on these stronger reference models, we observe similar phenomena for MAE-Tiny and MoCov3-Tiny as discussed in Sec. 4, which demonstrates the robustness of our analyses and conclusions w.r.t. different reference models. + +Then, we analyze the larger MAE-Base with these newly introduced models as references, as shown in the last column of Fig. A2. 
We observe that MAE-Base still aligns relatively well with these much stronger recognition models, which supports + +![](images/6379b0a4989fabef567f7cc52c81823a97c839cc97931414c2de2bad9c6740ec.jpg) + +![](images/59c8b9ef6d99f2eac72289c1d4f61f8fa35887434d32fb61d96d196e4cf6440c.jpg) + +![](images/babef7d4ca8ab2c77562d4a75eb92c15ffabbdd2885ad1053adc4ecf793dbbf8.jpg) + +![](images/3136583be829c48b0c8779d262a088d9b8b453f795fc8d175bc4d0ae6dc6db62.jpg) + +![](images/5f555b7bfdab2f43bb27a26eea982b24e5e2fc4509e3455ec0e6e1834b34e218.jpg) + +![](images/b8cf178b458c7145918694cc455305687732826a8f357b8dcf780a18272efc13.jpg) + +![](images/a56a562af898f8f6734e6df15c18d61a730ecb169d50f26dc99ab9ee5aa02fb8.jpg) + +![](images/baaf297bb80eab5792f46f0df0ad162c91af5f46fba78436281aad80cea98d26.jpg) + +![](images/871dd51efef4dc3c677cd9982eec5faf8ea95c89afe251ae88389e39fe7026ff.jpg) +Figure A2. Layer representation analyses with DeiT-Base (supervisedly trained on IN1K, the top row), ViT-Base-21k (supervisedly trained on IN21K, the middle row), and ViT-Base-21k-1k (supervisedly pre-trained on IN21K and fine-tuned on IN1K, the bottom row) as the reference models. + +![](images/6b5582754b636ee1ced2d6f0edb72e8c4102e199ea8853ef7a29c99469ce22fe.jpg) + +![](images/c1fe760d8b2531260a620b4f2d1d0babf7a8a4f92976a35c23a23564fa52460d.jpg) + +![](images/979fbafc50c448f499b58823dd3bdd16a853daaa67ecc9ff23efdf9635656bf3.jpg) + +our claim in Sec. 5 that it is possible to extract features relevant to recognition in higher layers for the scaled-up encoder in MAE pre-training. It is the prerequisite for the improvement of the pre-trained models from the proposed distillation. + +# B.2. Analyses Based on Linear Probing Evaluation + +Our analyses are mainly based on the fine-tuning evaluation. In this section, we present some experimental results based on linear probing evaluation, in which only a classifier is tuned during the downstream training while the pre-trained representations are kept frozen. 
It reflects how well the representations obtained by the pre-trained models are linearly separable w.r.t. semantic categories.

As shown in Tab. A4, the linear probing performance is consistently lower than the fine-tuning performance. Given that linear probing also does not save much training time when evaluating lightweight models, it is not a proper way to utilize these pre-trained models compared to the fine-tuning setting.

Furthermore, the linear probing results do not reflect fine-tuned performance according to Tab. A4 and Tab. 4, especially for downstream tasks with relatively sufficient labeled data, e.g., iNat18 and ImageNet, and may thus lead to an underestimation of the practical value of some pre-trained models on downstream tasks. We attribute this to the fact that linear probing only evaluates the final representation of the pre-trained models, overlooking the value of providing good initialization for the lower layers; for instance, MAE-Tiny is better at the latter than MoCov3-Tiny.

Additionally, the inferior linear probing results of MAE-Tiny compared to MoCov3-Tiny also support our analyses in Sec. 4.1 that MoCov3-Tiny learns more semantics at an abstract level relevant to recognition in its higher layers than MAE-Tiny. Our proposed distillation technique, however, improves the results to a certain extent.

Table A4. Linear probing evaluation of pre-trained models on downstream classification tasks. Top-1 accuracy is reported.
| Init. | Flowers | Pets | Aircraft | Cars | CIFAR100 | iNat18 | ImageNet |
| --- | --- | --- | --- | --- | --- | --- | --- |
| DeiT-Tiny (supervised) | 91.0 | 92.0 | 41.2 | 47.9 | 73.6 | 39.8 | - |
| MoCov3-Tiny (self-supervised) | 93.2 | 83.5 | 44.8 | 44.5 | 73.4 | 36.2 | 62.1 |
| MAE-Tiny (self-supervised) | 48.9 | 25.0 | 12.8 | 8.8 | 31.0 | 1.4 | 23.3 |
| D-MAE-Tiny (self-supervised) | 77.1 | 55.5 | 20.1 | 16.4 | 58.4 | 10.7 | 42.0 |
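The linear probing protocol evaluated in Tab. A4 can be sketched as follows; this is a minimal illustration in which only a linear classifier is trained on top of frozen features (the random-projection "encoder" and the toy data are stand-ins of ours, not the paper's models):

```python
import numpy as np

rng = np.random.default_rng(0)

# Frozen "pre-trained encoder": a fixed random projection + tanh.
# During linear probing, these weights are never updated.
W_frozen = rng.normal(size=(10, 32))

def encode(x):
    return np.tanh(x @ W_frozen)

# Toy labeled downstream data.
x = rng.normal(size=(200, 10))
y = (x[:, 0] + x[:, 1] > 0).astype(float)

feats = encode(x)                 # extracted once; no gradient to the encoder
w, b = np.zeros(32), 0.0          # the only trainable parameters

for _ in range(500):              # plain logistic-regression gradient descent
    p = 1.0 / (1.0 + np.exp(-(feats @ w + b)))
    grad = p - y
    w -= 0.1 * feats.T @ grad / len(y)
    b -= 0.1 * grad.mean()

acc = (((feats @ w + b) > 0) == (y > 0.5)).mean()
print(f"linear probing accuracy: {acc:.2f}")
```

Because the encoder is frozen, the probe can only exploit whatever is linearly decodable from the final representation, which is exactly why the discussion above notes that it overlooks the value of good lower-layer initialization.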
Table A5. Comparisons on more pre-training methods. It is a revised version of Tab. 1 in the main paper with more self-supervised pre-training methods.
| Methods | Pre-training Data | Epochs | Time (hour) | Fine-tuning recipe | Top-1 Acc. (%) |
| --- | --- | --- | --- | --- | --- |
| from scratch | - | - | - | ori. | 74.5 |
| from scratch | - | - | - | impr. | 75.8 |
| Supervised (Steiner et al., 2021) | IN21K w/ labels | 30 | 20 | impr. | 76.9 |
| Supervised (Steiner et al., 2021) | IN21K w/ labels | 300 | 200 | impr. | 77.8 |
| MoCo-v3 (Chen et al., 2021a) | IN1K w/o labels | 400 | 52 | impr. | 76.8 |
| MAE (He et al., 2021) | IN1K w/o labels | 400 | 23 | impr. | 78.0 |
| DINO (Caron et al., 2021) | IN1K w/o labels | 400 | 83 | impr. | 77.2 |
| SimMIM (Xie et al., 2022) | IN1K w/o labels | 400 | 40 | impr. | 77.9 |
| D-MAE-Tiny (ours) | IN1K w/o labels | 400 | 26 | impr. | 78.4 |
Table A6. Transfer evaluation on classification tasks and dense-prediction tasks for more pre-training methods. It is a revised version of Tab. 4 in the main paper with more self-supervised pre-training methods.
| Init. | Flowers (2k/6k/102) | Pets (4k/4k/37) | Aircraft (7k/3k/100) | Cars (8k/8k/196) | CIFAR100 (50k/10k/100) | iNat18 (438k/24k/8142) | COCO det. (118k/50k/80) | COCO seg. |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| DeiT-Tiny (supervised) | 96.4 | 93.1 | 73.5 | 85.6 | 85.8 | 63.6 | 40.4 | 35.5 |
| MoCov3-Tiny (self-supervised) | 94.8 | 87.8 | 73.7 | 83.9 | 83.9 | 54.5 | 39.7 | 35.1 |
| MAE-Tiny (self-supervised) | 85.8 | 76.5 | 64.6 | 78.8 | 78.9 | 60.6 | 39.9 | 35.4 |
| DINO-Tiny (self-supervised) | 95.6 | 89.3 | 73.6 | 84.5 | 84.7 | 58.7 | 41.4 | 36.7 |
| SimMIM-Tiny (self-supervised) | 77.2 | 68.9 | 55.9 | 70.4 | 77.7 | 60.8 | 39.3 | 34.8 |
| D-MAE-Tiny (ours) | 95.2 | 89.1 | 79.2 | 87.5 | 85.0 | 63.6 | 42.3 | 37.4 |
# B.3. Analyses for More Self-Supervised Pre-Training Methods

In the main paper, our analyses mainly focus on MAE (He et al., 2021) and MoCov3 (Chen et al., 2021a). In this section, more self-supervised pre-training methods are involved. Specifically, another MIM-based method, SimMIM (Xie et al., 2022), and another CL-based method, DINO (Caron et al., 2021), are evaluated based on the lightweight ViT-Tiny. The 400-epoch pre-trained models are denoted as SimMIM-Tiny and DINO-Tiny, respectively.

We first evaluate their downstream performance on ImageNet and other classification tasks, as well as on object detection and segmentation tasks, as shown in Tab. A5 and Tab. A6, which are revised versions of Tab. 1 and Tab. 4 in the main paper. According to the results, we find that MIM-based methods are generally superior to CL-based methods on data-sufficient tasks, e.g., ImageNet and iNat18, while inferior on data-insufficient tasks. Downstream data scale matters for all these methods, and none of them achieves consistent superiority across all downstream tasks.

Then we explore the layer representations of these models via CKA-based similarity analyses, as shown in Fig. A3. We observe similar layer representation structures within both the MIM and CL families. For instance, SimMIM-Tiny also learns poor semantics in its higher layers.

Finally, we carry out the attention analyses for these models, as shown in Fig. A4. We again observe consistent properties within each family. Like MAE-Tiny, SimMIM-Tiny tends to focus on local patterns with concentrated attention in higher layers, while DINO-Tiny behaves like MoCov3-Tiny, with broad and global attention in higher layers.

![](images/213f62d6b8e70d116daf940ea258964dc51a844bd637fecc27147213d6916b28.jpg)
Figure A3. Layer representation analyses for more self-supervised pre-trained models.
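The similarity measure behind these layer representation analyses can be sketched with a minimal linear-CKA implementation on centered features (the paper's exact kernel choice and feature extraction pipeline are not assumed here):

```python
import numpy as np

def linear_cka(X, Y):
    """Linear CKA between two feature matrices of shape (n_samples, dim).

    Computed on column-centered features as
    ||Y^T X||_F^2 / (||X^T X||_F * ||Y^T Y||_F).
    """
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    return np.linalg.norm(Y.T @ X) ** 2 / (
        np.linalg.norm(X.T @ X) * np.linalg.norm(Y.T @ Y)
    )

# Example: the same layer representation under an orthogonal rotation is
# judged identical (CKA = 1), while independent features score markedly lower.
rng = np.random.default_rng(0)
X = rng.normal(size=(64, 16))
Q, _ = np.linalg.qr(rng.normal(size=(16, 16)))
print(linear_cka(X, X @ Q))                     # ~1.0
print(linear_cka(X, rng.normal(size=(64, 16)))) # markedly lower
```

The invariance to orthogonal transformations and isotropic scaling is what makes CKA suitable for comparing layers across differently trained models.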
![](images/1cf31734429d765940a6319e225908d0089cfef9b921b32f14ec70dcef03a565.jpg)

![](images/1dd39baefb1ffe06507e2f82b0a1b6f6cac18d159fb533b8f25c3333f351cab9.jpg)

![](images/af6775acd9215c11e940e213fb0799b5a6505a2c03fe6bb14752307dcfbd49c0.jpg)

![](images/f5e77c4c1d7ff3939a7fe392004fd32e8bd0ca1e9f1557b4ce8896b7fdc9f16f.jpg)

![](images/c39e083a51f7950486678d0f4141845e40b263fc6b175f70be82b7cb7eaaec7a.jpg)

![](images/ce5bf4219127d33c84d97b7ae025a8dfdca1eab90a11a1fe3fd8198bd11ede2f.jpg)

![](images/fa53a63b760450c66ab2182d5fbee5cc8b4775ec3a92afedf2b900505f26ab42.jpg)

![](images/c95db76a8e4f3178da94eb19140bf3c55a5bcfee7e99024fd030bec89918fcb7.jpg)
Figure A4. Attention analyses for more self-supervised pre-trained models.

![](images/9bd1c725a1b90c8a173871b7e668d0aeceb491a97de1ea5a1c0798dd19212624.jpg)

![](images/0293687d67b3aeaf630f516e3974a5b1dafe2f55bf7a872ec298f3ae0b2967ac.jpg)

![](images/e6843f042ea60b36e14363348f4f3dc4d6ac353512a3a5c26fbc7deb546081f3.jpg)

# C. More Analyses on Distillation

# C.1. Illustration of the Distillation Process

We illustrate our distillation process in Fig. A5 for a better presentation and explanation.

Based on the masked auto-encoder, we introduce a teacher ViT, which is pre-trained with MAE. During pre-training, the teacher processes the same visible image patches as the student encoder, and the attention-based distillation loss is calculated between the attention maps of the corresponding teacher and student layers. The parameters of the student are updated based on the joint backward gradients from the distillation loss and the original MAE reconstruction loss, while the teacher's parameters remain frozen throughout the pre-training process.

# C.2. Attention Map Analyses for the Distilled Pre-trained Models

We analyze the attention distance and entropy of the distilled MAE-Tiny introduced in Sec.
5 (D-MAE-Tiny), to which distillation is applied only on the attention map of the last layer during pre-training with MAE. As shown in Fig. A6, we observe more global and broad attention in the higher layers of D-MAE-Tiny compared with MAE-Tiny, which behaves more like the teacher, MAE-Base. We attribute this to the distillation on the final layer (i.e., the 12th layer) forcing the distilled layer of the student to imitate the teacher's attention, which in turn requires the several preceding layers to change to support this imitation. We reckon that this may help capture semantic features and improve downstream performance.

We also find that the attention distances of the last layer show more diversity: some attention heads are rather global while the others are local, and all of them are concentrated. We reckon that this layer shows odd behaviors because, restricted by the model size, it cannot handle both training targets, i.e., the reconstruction task and the distillation. Nevertheless, the richer supervision indeed improves the quality of the preceding layers and thus achieves better downstream performance.

![](images/19b254a9d8c47e1dd423284666a3a1720d0b630bbaa3759a6392d6369ba64180.jpg)
Figure A5. Illustration of the distillation process.

![](images/2543d57495834b7739ea6de866720ab066aade4ce00af1b01639627b770fd487.jpg)

![](images/d1885e3b9747657d2d7b4e49be15c1239e14e5e163eab130d623aa8dc7ad8c20.jpg)

![](images/cbe912f56761215fd0b3e9cb9f9e75a2cb7904bfa8a9e7aac517b07480ddd841.jpg)

![](images/5ce65f4470f418ea3d9117d3512e6b810115929523bd0c6905ad097ab53af917.jpg)
Figure A6. Attention distance and entropy analyses for the distilled MAE-Tiny.

![](images/ca137ae83ecd974cf38d9f9ad876064c0d16f53ad7d05513ebd21e26309af506.jpg)

![](images/a36078f036e6e729a46743b9713302fe4731c14b448720f801d0866f5bf1bdad.jpg)

# C.3. 
Applying Distillation on More Networks

To further evaluate our proposed distillation method, we additionally apply it to the pre-training of ViT-Small, also with MAE-Base as the teacher. The configurations of these models are presented in Tab. A7, and the transfer evaluation results in Tab. A8. The transfer performance of the distilled MAE-Small (D-MAE-Small) surpasses that of the baseline model, MAE-Small, by a large margin, which shows the efficacy of the distillation.

# C.4. Distilling with Larger Teachers

We further conduct additional experiments with various models as teachers and compare their performance on various downstream tasks (see Tab. A9). The configurations of the student model (ViT-Tiny) and the teacher models are presented in Tab. A7. The results indicate that an appropriately sized teacher model provides the largest gains in distillation, which is a common finding in the area of knowledge distillation (Cho & Hariharan, 2019; Jin et al., 2019; Mirzadeh et al., 2020). To further investigate the impact of teacher size, we conduct CKA-based layer representation analyses of these teachers, as shown in Fig. A7. A teacher that is too small (MAE-Small) also suffers from degraded representations in higher layers and cannot provide sufficient knowledge. A teacher that is too large (MAE-Large), in contrast, results in a capacity mismatch with the tiny student model: it has over 50 times more parameters than ViT-Tiny, with a different depth and number of attention heads, so its learned patterns are somewhat distinct from those of the reference tiny model and may not suit the student.

Table A7. Configurations of ViTs.
| Model | channel dimension | #heads | #layers | #params |
| --- | --- | --- | --- | --- |
| ViT-Tiny | 192 | 12 | 12 | 6M |
| ViT-Small | 384 | 12‡ | 12 | 22M |
| ViT-Base | 768 | 12 | 12 | 86M |
| ViT-Large | 1024 | 16 | 24 | 304M |
‡ Our ViT-Small uses 12 heads, following Chen et al. (2021a).

Table A8. Distillation on MAE-Small. Top-1 accuracy for the transfer performance on downstream classification tasks of pre-trained models with or without distillation is reported.
| Init. | Flowers | Pets | Aircraft | Cars | CIFAR100 | iNat18 | ImageNet |
| --- | --- | --- | --- | --- | --- | --- | --- |
| DeiT-Small (supervised) | 97.4 | 94.2 | 77.6 | 88.2 | 89.2 | 66.5 | 80.2 |
| MAE-Small (self-supervised) | 91.2 | 82.0 | 65.8 | 79.2 | 80.8 | 63.2 | 82.1 |
| D-MAE-Small (self-supervised) | 95.8 (+4.6) | 91.4 (+9.4) | 80.7 (+14.9) | 88.3 (+9.1) | 87.8 (+7.0) | 66.9 (+3.7) | 82.5 (+0.4) |
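The parenthesized numbers in Tab. A8 are simply the gains of D-MAE-Small over MAE-Small, which can be cross-checked directly:

```python
# Cross-check: each gain in Tab. A8 equals D-MAE-Small minus MAE-Small,
# over Flowers, Pets, Aircraft, Cars, CIFAR100, iNat18, ImageNet.
mae_small   = [91.2, 82.0, 65.8, 79.2, 80.8, 63.2, 82.1]
d_mae_small = [95.8, 91.4, 80.7, 88.3, 87.8, 66.9, 82.5]
gains       = [4.6, 9.4, 14.9, 9.1, 7.0, 3.7, 0.4]

for base, distilled, gain in zip(mae_small, d_mae_small, gains):
    assert abs((distilled - base) - gain) < 0.05
print("all gains consistent")
```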
![](images/4717288b9b14bdb122dcc7686d5934aca6a8699c1f8a1152f835ff08ac24069b.jpg)
Figure A7. Layer representation analyses of the teachers for distillation.

![](images/07deb676f7a3043a5e8772cd0ee16325af16688cc37b4ac910e2351dbc107fb0.jpg)

![](images/431db19a4ef5aca2...jpg)

Table A9. Distillation with different-sized teachers. Top-1 accuracy for the transfer performance on downstream classification tasks of the distilled pre-trained models is reported.
| Student | Teacher | Flowers | Pets | Aircraft | Cars | CIFAR100 | iNat18 | ImageNet (fine-tuning) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| MAE-Tiny | - | 85.8 | 76.5 | 64.6 | 78.8 | 78.9 | 60.6 | 78.0 |
| MAE-Tiny | MAE-Small | 89.4 | 78.6 | 65.2 | 78.9 | 79.6 | 61.5 | 78.1 |
| MAE-Tiny | MAE-Base | 95.2 | 89.1 | 79.2 | 87.5 | 85.0 | 63.6 | 78.4 |
| MAE-Tiny | MAE-Large | 94.0 | 87.3 | 77.1 | 85.2 | 84.2 | 63.1 | 78.3 |
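As a rough sanity check on Tab. A7, the parameter counts can be approximated from the standard ViT block structure (about $12d^2$ weights per layer plus the patch embedding; biases, positional embeddings, and heads are ignored, so the estimates are only approximate):

```python
def vit_params(dim, layers, patch=16, in_ch=3):
    """Approximate ViT parameter count.

    Each transformer block holds ~12*d^2 weights: 4*d^2 in attention
    (qkv + output projection) and 8*d^2 in the MLP (expansion ratio 4).
    """
    blocks = layers * 12 * dim * dim
    patch_embed = patch * patch * in_ch * dim
    return blocks + patch_embed

configs = {  # (channel dim, #layers, #params reported in Tab. A7)
    "ViT-Tiny": (192, 12, 6e6),
    "ViT-Small": (384, 12, 22e6),
    "ViT-Base": (768, 12, 86e6),
    "ViT-Large": (1024, 24, 304e6),
}
for name, (d, l, reported) in configs.items():
    est = vit_params(d, l)
    print(f"{name}: ~{est / 1e6:.1f}M (Tab. A7: {reported / 1e6:.0f}M)")
```

The estimates land within roughly 10% of the reported sizes; the residual gap comes from the terms deliberately dropped above.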
# A Closer Look at the Intervention Procedure of Concept Bottleneck Models

Sungbin Shin, Yohan Jo, Sungsoo Ahn, Namhoon Lee

# Abstract

Concept bottleneck models (CBMs) are a class of interpretable neural network models that predict the target response of a given input based on its high-level concepts. Unlike the standard end-to-end models, CBMs enable domain experts to intervene on the predicted concepts and rectify any mistakes at test time, so that more accurate task predictions can be made at the end.
While such intervenability provides a powerful avenue of control, many aspects of the intervention procedure remain rather unexplored. In this work, we develop various ways of selecting intervening concepts to improve the intervention effectiveness and conduct an array of in-depth analyses as to how they evolve under different circumstances. Specifically, we find that an informed intervention strategy can reduce the task error more than ten times compared to the current baseline under the same number of interventions in realistic settings, and yet, this can vary quite significantly when taking into account different intervention granularity. We verify our findings through comprehensive evaluations, not only on the standard real datasets, but also on synthetic datasets that we generate based on a set of different causal graphs. We further discover some major pitfalls of the current practices which, if not properly addressed, raise concerns about the reliability and fairness of the intervention procedure.

# 1. Introduction

While deep learning has made rapid strides in recent years (LeCun et al., 2015; Jordan & Mitchell, 2015), the standard neural network models are not quite explainable, in that their decision-making process is neither straightforward to account for nor easy to control. To tackle this issue, various

![](images/d459e784904bcfb2a6be5797aca454844a3570c72ce6c804870f09d3d89a8bc7.jpg)
(a) Diagram of CBMs

![](images/6c7333a8275b46aa3aa6f1e7dd0fbdadccb33fa17ec46fc08b387606db9c1d37.jpg)
(b) Task vs. Concept errors
Figure 1: (a) Given input data, CBMs first predict its concepts $(g : x \to c)$, and then based on these make a subsequent prediction for the target response $(f : c \to y)$. (b) Average task error (mis-classification rate) vs. the number of incorrectly predicted concepts (on the CUB dataset).
The task error increases rapidly as more mistakes are made in concept prediction; e.g., making a single mistake yields a $25\%$ increase in task error.

interpretable models have been proposed including, for example, those using concept activation vectors (Kim et al., 2018; Ghorbani et al., 2019), relating pixel contributions to image classification (Zhou et al., 2016; Selvaraju et al., 2017), or building intrinsically interpretable architectures (Alvarez Melis & Jaakkola, 2018).

Concept bottleneck models (CBMs) are among these efforts to empower interpretability (Koh et al., 2020; Bahadori & Heckerman, 2021; Margeloiu et al., 2021; Mahinpei et al., 2021; Sawada & Nakamura, 2022; Zarlenga et al., 2022). Unlike standard end-to-end models, CBMs work in two steps: they first predict human-interpretable properties of a given input called concepts, and based on these, they subsequently make the final prediction for the given task. For instance, CBMs may classify the species of a bird based on its wing pattern or leg color rather than straight from the raw pixel values (see Figure 1a).

Revisited recently by Koh et al. (2020), this classic idea further facilitates human-model interaction in addition to plain interpretability, in that it allows one to intervene on the predicted concepts at test time, such that the subsequent prediction is made based on the rectified concept values. Notably, such intervention must be treated attentively, as we find that correcting only a small number of mistakes on mis-predicted concepts can lead to a significant increase in
WorkSelectionCostLevelImp.DataRel.
Koh et al. (2020)XXXX
Chauhan et al. (2022)XX
Sheth et al. (2022)XXXX
Ours
Table 1: Comparison between the studies on intervention strategy of CBMs. $\triangle$ represents that the corresponding work provides only partial evaluations. Selection and Cost represent concept selection criteria and their analysis in terms of theoretical cost as will be discussed in Section 4.2. We study the effects of Level, Implementation and Data on intervention effectiveness in Sections 4.3, 4.4 and 5. Reliability of intervention practice is discussed in Section 6.

the task performance (see Figure 1b). Considering the high cost of intervention, i.e., the tremendous effort required for domain experts to go over each concept, this result further indicates the necessity of efficient intervention procedures to ensure the utility of CBMs.

Despite this great potential, the intervention procedure of CBMs has, quite surprisingly, not been studied much in the literature. For example, previous works tend to focus on increasing task performance (Sawada & Nakamura, 2022; Zarlenga et al., 2022) and addressing the problem of confounding factors (Bahadori & Heckerman, 2021) or information leakage (Margeloiu et al., 2021; Mahinpei et al., 2021; Havasi et al., 2022; Marconato et al., 2022). While a few concurrent works suggest new intervention methods (Chauhan et al., 2022; Sheth et al., 2022), we find that many critical aspects of the intervention procedure still remain unexplored (see Table 1).

Our contributions are summarized as follows. First of all, we develop various concept selection criteria as new intervention strategies, improving the intervention performance of CBMs quite dramatically given the same number of interventions. We also provide extensive evaluations to analyze these criteria under a wide variety of experimental settings, considering the theoretical cost of each criterion, levels of intervention related to test-time environments, and how to train these models or conceptualize the concept predictions.
We further develop a new framework to generate synthetic data using diverse causal graphs and conduct fully controlled experiments to verify the effectiveness of intervention on varying data. These results reveal that data characteristics as well as intervention granularity can affect the intervention procedure quite significantly. Finally, we identify some pitfalls of the current intervention practices, which helps to take a step toward building trustworthy and responsible interpretable models.

# 2. Related Work

Since the seminal work of Koh et al. (2020), CBMs have evolved in many different ways. Bahadori & Heckerman (2021) develop a debiased CBM to remove the impact of confounding information to secure causality. Sawada & Nakamura (2022) augment CBMs with unsupervised concepts to improve task performance. Mahinpei et al. (2021); Margeloiu et al. (2021) suggest addressing the information leakage problem in CBMs to improve interpretability of learned concepts, while Marconato et al. (2022); Havasi et al. (2022) design new CBMs based on disentangled representations or autoregressive models. Zarlenga et al. (2022) propose to learn semantically meaningful concepts using concept embedding models to push the accuracy-interpretability trade-off. Both Chauhan et al. (2022) and Sheth et al. (2022) present uncertainty-based intervention methods to determine which concepts to intervene on. We remark that previous work is mostly focused on developing CBM variants for high task performance from model-centric perspectives, whereas our work provides in-depth analyses and comprehensive evaluations of the intervention procedure of the standard CBMs at greater granularity.

# 3. Intervention Strategies

# 3.1. 
Preliminary

Let $x \in \mathbb{R}^d$, $c \in \{0,1\}^k$, $y \in \mathcal{Y}$ be input data, binary concepts, and target responses, respectively; here, $d$ and $k$ denote the dimensionality of input data and the cardinality of concepts, and we assume $\mathcal{Y}$ encodes a categorical distribution for classification tasks. Given some input data (e.g., an image), a CBM first predicts its concepts (e.g., attributes present in the given image) using a concept predictor $g$ and subsequently the target response (e.g., the class of the image) using a target predictor $f$: i.e., first $\hat{c} = g(x)$ then $\hat{y} = f(\hat{c})$, where $\hat{c}$ and $\hat{y}$ are predictions of concepts and target response.

In this process, one can intervene on a set of concepts $S \subseteq \{1, \dots, k\}$ so that the final prediction can be made based on rectified concept values, i.e., $\hat{y} = f(\tilde{c})$ where $\tilde{c} = \{\hat{c}_{\backslash S}, c_{S}\}$ denotes the updated concept values partly rectified on $S$, with $\hat{c}_{\backslash S}$ referring to the predicted concept values excluding $S$.

# 3.2. Concept Selection Criteria

How should one select which concepts to intervene on? This is a fundamental question to be answered in order to legitimize CBMs in practice, since intervention incurs the cost of employing experts, which would increase with the number of intervened concepts $|S|$. In principle, one would select the concept that leads to the largest increase in task performance. To address this question and investigate the effectiveness of the intervention procedure in current practice, we develop various concept selection
| Criteria | $N_g$ | $N_f$ | Cost in complexity |
| --- | --- | --- | --- |
| RAND | 1 | 1 | $\mathcal{O}(\tau_g + \tau_f + n\tau_i)$ |
| UCP | 1 | 1 | $\mathcal{O}(\tau_g + \tau_f + n\tau_i)$ |
| LCP | 1 | 1 | $\mathcal{O}(\tau_g + \tau_f + n\tau_i)$ |
| CCTP | 1 | 3 | $\mathcal{O}(\tau_g + 3\tau_f + n\tau_i)$ |
| ECTP | 1 | $2k + 2$ | $\mathcal{O}(\tau_g + (2k + 2)\tau_f + n\tau_i)$ |
| EUDTP | 1 | $2k + 2$ | $\mathcal{O}(\tau_g + (2k + 2)\tau_f + n\tau_i)$ |
Table 2: Theoretical cost of employing concept selection criteria to make the final prediction with $n$ intervened concepts. ${N}_{g}$ and ${N}_{f}$ refer to the number of forward/backward passes to run $g$ and $f$, respectively.

criteria, each of which defines a selection score $s_i$ for the $i$-th concept. Concepts are then intervened on in decreasing order of these scores.

Random (RAND) It selects concepts uniformly at random as in Koh et al. (2020). We can treat this method as assigning a random score to each concept, i.e., $s_i \sim \mathcal{U}_{[0,1]}$. It will serve as a baseline to study the effectiveness of concept selection criteria.

Uncertainty of concept prediction (UCP) It selects concepts with the highest uncertainty of concept prediction. Specifically, it defines $s_i = \mathcal{H}(\hat{c}_i)$ where $\mathcal{H}$ is the entropy function. When the concepts are binary, this induces the same ranking as $s_i = 1 / |\hat{c}_i - 0.5|$ as in Lewis & Catlett (1994); Lewis (1995). Intuitively, uncertain concepts may have an adverse influence on making the correct target prediction, and thus, they are fixed first by this criterion.

Loss on concept prediction (LCP) It selects concepts with the largest loss on concept prediction compared to the ground-truth. Specifically, it defines $s_i = |\hat{c}_i - c_i|$. This scheme can be advantageous for increasing task performance since a low concept prediction error is likely to lead to a correct target prediction. Nonetheless, this score is unavailable in practice as the ground-truth is unknown at test time.

Contribution of concept on target prediction (CCTP) It selects concepts with the highest contribution to target prediction. Specifically, it sums up the contribution as $s_i = \sum_{j=1}^{M} \left| \hat{c}_i \frac{\partial f_j}{\partial \hat{c}_i} \right|$ where $f_j$ is the output related to the $j$-th target class and $M$ is the number of classes.
This scheme is inspired by methods to explain neural network predictions (Selvaraju et al., 2017).

Expected change in target prediction (ECTP) It selects concepts with the highest expected change in the target predictive distribution with respect to intervention. Specifically, it defines $s_i = (1 - \hat{c}_i)D_{\mathrm{KL}}(\hat{y}_{\hat{c}_i = 0}\| \hat{y}) + \hat{c}_iD_{\mathrm{KL}}(\hat{y}_{\hat{c}_i = 1}\| \hat{y})$ where $D_{\mathrm{KL}}$ refers to the Kullback-Leibler divergence, and $\hat{y}_{\hat{c}_i = 0}$ and $\hat{y}_{\hat{c}_i = 1}$ refer to the new target prediction with $\hat{c}_i$ intervened to be 0 and 1, respectively. The intuition behind this scheme is that it would be better to intervene on

![](images/662b1efc04221034544f6db439447bc9aa6a871aa687f34bcb5240f6ee045335.jpg)
(a) Individual vs. Group

![](images/86de1eb84b7e70fe4065f2e5a029af4e8eea56cd6a3ed62399fc68911e279347.jpg)
(b) Single vs. Batch
Figure 2: Different levels of intervention conducted on concepts. Each number represents the order of intervention.

those concepts whose rectification leads to a large expected change in target prediction (Settles et al., 2007).

Expected uncertainty decrease in target prediction (EUDTP) It selects concepts with the largest expected entropy decrease in the target predictive distribution with respect to intervention. Specifically, it defines $s_i = (1 - \hat{c}_i)\mathcal{H}(\hat{y}_{\hat{c}_i = 0}) + \hat{c}_i\mathcal{H}(\hat{y}_{\hat{c}_i = 1}) - \mathcal{H}(\hat{y})$. Intuitively, it penalizes the concepts whose expected decrease in the target prediction entropy is low when intervened (Guo & Greiner, 2007).

# 3.2.1. COST OF INTERVENTION

Note that the cost of intervention may differ by the choice of concept selection criteria.
Specifically, let the theoretical cost of intervening on a concept be $\tau_{i}$ (e.g., the time for an expert to look at the input and fix its attribute), and the theoretical costs of making inference on $g$ and $f$ be $\tau_{g}$ and $\tau_{f}$, respectively. Then, for example, the total cost of utilizing CCTP up to making the final prediction with $n$ intervened concepts would be $\mathcal{O}(\tau_g + 3\tau_f + n\tau_i)$; here we assume that the cost of a backward pass on $f$ is the same as $\tau_{f}$. We summarize the cost of all concept selection criteria in Table 2.

# 3.3. Levels of Intervention

We find that intervention can be done at different levels given some auxiliary information about the structure of concepts or economic constraints put on practitioners. For example, it is often the case that datasets used to train CBMs have grouping information for related concepts (Wah et al., 2011). Another situation worth consideration is where one has access to a batch of data to process with a budget constraint, and the goal is to maximize the overall task performance while minimizing the intervention effort (e.g., examining medical images in a hospital). Taking into account these scenarios, we extend the intervention procedure to various levels to study the effectiveness of concept selection criteria.

Individual vs. Group intervention Intervention can be done depending on concept association (see Figure 2a):

- Individual (I): Concepts are assumed to be independent of each other and thus selected individually one at a time.
- Group (G): A group of related concepts is selected at once, whose association information depends on the dataset. The selection score is computed by taking the average of the selection scores of the individual concepts within the group.

Single vs. 
Batch intervention Intervention can be done depending on data accessibility (see Figure 2b):

- Single (S): Every test case is allocated the same amount of intervention budget (e.g., intervention counts). This could be useful for online systems where each test case comes in sequentially, and experts need to process as many cases as possible under a budget constraint.
- Batch (B): A batch of test cases shares a total intervention budget. This scheme could be particularly useful when the concept prediction is imbalanced toward easy cases, and one wants to focus on intervening on hard cases so as to maximize the overall task performance.

# 4. Evaluating Intervention Strategies

# 4.1. Experiment Settings

Dataset We experiment with three datasets: (1) CUB (Wah et al., 2011) – the standard dataset used to study CBMs, (2) SkinCon (Daneshjou et al., 2022b) – a medical dataset used to build interpretable models, and (3) Synthetic – the synthetic datasets we generate based on different causal graphs to conduct a wide range of controlled experiments. Extensive details of these datasets including preprocessing, label characteristics, data splits, and the generation process are provided in Appendix A.

Implementation We follow the standard implementation protocols as in previous works. The full details including model architectures and optimization hyperparameters are provided in Appendix B. Our code is available at https://github.com/ssbin4/Closer-Intervention-CBM.

Training We consider the following training strategies, similarly to Koh et al. (2020):

- IND: $g$ and $f$ are trained independently of each other. $f$ always takes ground-truth concept values as input.
- SEQ: $g$ and $f$ are trained sequentially, $g$ first and $f$ next. $f$ takes predicted concept values as input from the trained $g$.
- JNT: $g$ and $f$ are trained jointly at the same time as a multi-objective.
This results in increased initial task accuracy but comes at the price of decreased intervention effectiveness (Koh et al., 2020).
- JNT+P: similar to JNT, but the input to $f$ is a sigmoid-activated probability distribution rather than logits.

Conceptualization We consider different forms of concept predictions as input to the target predictor at inference:

- SOFT: $f$ takes real values $\hat{c} \in [0,1]^k$ as a soft representation of concepts (Koh et al., 2020).
- HARD: $f$ takes binary values $\hat{c} \in \{0,1\}^k$ as a hard representation of concepts based on $\mathbb{1}[\hat{c} \geq 0.5]$ (Mahinpei et al., 2021). This prevents information leakage (Havasi et al., 2022) in exchange for decreased prediction performance.
- SAMP: $m$ random samples are drawn by treating the soft concept prediction scores as a probability distribution, and the target prediction is made as an ensemble, i.e., $\hat{y} = \frac{1}{m}\sum_{i=1}^{m}f(\hat{c}^{(i)})$ where $\hat{c}^{(i)}$ is the $i$-th binarized concept sample (Havasi et al., 2022). We use $m = 5$ for the experiments.

# 4.2. Evaluating Concept Selection Criteria

We first evaluate the intervention effectiveness of concept selection criteria and present the results in Figure 3. Across all datasets, we find that the current practice of random intervention (RAND) is easily outperformed by the other alternatives in almost all cases with a significant margin. Specifically, in the CUB experiment, correcting 20 concepts by random intervention reduces the task error by less than $4\%$, whereas correcting the same amount based on the uncertainty of concept predictions (UCP) leads to more than a $16\%$ error reduction. To put it differently, RAND requires intervening on 43 concepts to reduce the error by half, while UCP needs to fix only 12 concepts to achieve the same reduction.
In the SkinCon experiment, selecting concepts based on the expected change in target prediction (ECTP) leads the way among the criteria, although the improvement over RAND is not as large. Note also that the strategy of first fixing the concepts with the largest loss (LCP) performs exceptionally well in all cases. This is, however, because it relies on ground-truth knowledge of the concepts, which is unavailable in practice. Nonetheless, we believe it can serve as an indicator to guide better intervention strategies, which we defer to future work.

# 4.2.1. REFLECTING COST OF INTERVENTION

As discussed in Section 3.2.1, the cost of intervention may differ by concept selection criterion. Taking this aspect into account, we set up experiments in which we can evaluate the intervention effectiveness in terms of the theoretical cost. Specifically, we model the relationships between $\tau_{i}$, $\tau_{g}$, $\tau_{f}$ as $\tau_{i} = \alpha \tau_{g}$ and $\tau_{g} = \beta \tau_{f}$, which means that the cost of intervention (e.g., the time to fix a concept) is $\alpha$-proportional to the cost of making an inference on $g$, and likewise, $\tau_{g}$ is $\beta$-proportional to $\tau_{f}$. We can then evaluate the cost-reflected intervention effectiveness with respect to an arbitrary unit ($v$), and from this, we can further show how it transforms as we control $\alpha$ and $\beta$.

![](images/991c80b160b98a54fffd02c1a3ababcb3751d2603915401102bec0a6c766bd27.jpg)
(a) CUB

![](images/2fa123db3af76c4a86f156f6f744994fa31225ff7e585a495eee1e55a39386e2.jpg)
(b) SkinCon

![](images/05249e6cb70fb2d704e57d30f5cb4422c73e2bac88d37d10371d0112a64f1512.jpg)
(c) Synthetic

![](images/f77ce79eada1f20057511c1407d7df8fb3ab122948b7f3b50cb5bcda85700098.jpg)
Figure 3: Intervention effectiveness of concept selection criteria (task error vs. number of concepts corrected by intervention) measured on the I+S level.
A more effective method reduces the error more for the same number of concepts intervened on.
(a) $\alpha = 0.01$
Figure 4: Effect of $\alpha$ on intervention (on Synthetic). We fix $\tau_{i} = 1, \beta = 100, k = 100$. ECTP, the criterion that previously evaluated strongly, becomes less effective as $\alpha$ decreases. The kinked shapes are due to the relatively high initial cost of the first intervention before $n$ becomes large.

![](images/9748c817a5ac077be2fe3eb1de56a2b9869af3f0accdb0e36c09866c3f81143d.jpg)
(b) $\alpha = 0.03$

![](images/244e1a8ee7f583c7d549e7c4bab528afef1e2e16d5cd928768d47e56570aa920.jpg)
(c) $\alpha = 0.05$

![](images/42beddabee7eec05d3edd9b1613aa98b19c6bceeba880e8e2989ab6bccec2691.jpg)
(d) $\alpha = 0.1$

![](images/6b2ad4e649b81da57560b61ed2ca963b9d65bec05f527e02c4634688b92ff4ad.jpg)
(e) $\alpha = 1.0$

![](images/e80927edf2f5ee92c1e495ad1b159f870947ccb16ca3ed1fc39fd533a366f97d.jpg)
(f) $\alpha = 100.0$

First, the result of changing $\alpha$ is plotted in Figure 4. As $\alpha$ becomes smaller, RAND becomes very effective compared to ECTP. This makes sense because, with small $\alpha$, $\tau_{i}$ becomes relatively small and the other terms related to $\tau_{g}$ or $\tau_{f}$ dominate the cost of ECTP, which is $\mathcal{O}\big(\tau_g + (2k + 2)\tau_f + n\tau_i\big)$ as seen in Table 2. ECTP is thus penalized in terms of intervention effectiveness in the small-$\alpha$ regime. In contrast, when $\alpha$ becomes larger, $\tau_{i}$ dominates the cost of ECTP as $n$ increases, which in turn recovers the effectiveness of ECTP. The former can happen in extreme circumstances, for example, when using very large models (i.e., large $\tau_{g}$) or in places with a tight labor market (i.e., small $\tau_{i}$ in terms of monetized value).
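The cost model can be made concrete with a small helper. In this sketch, ECTP's per-test cost follows the $\mathcal{O}\big(\tau_g + (2k+2)\tau_f + n\tau_i\big)$ expression from Table 2, while the RAND cost is our assumption that it needs only one forward pass of $g$ and of $f$ plus the interventions themselves; all quantities are measured in units of $\tau_f$:

```python
def total_cost(n, k, alpha, beta=100.0, tau_f=1.0, criterion="ECTP"):
    """Cost of intervening on n concepts, in units of tau_f.

    Section 4.2.1 model: tau_i = alpha * tau_g and tau_g = beta * tau_f.
    ECTP pays an extra (2k + 2) forward passes of f for its scoring.
    """
    tau_g = beta * tau_f
    tau_i = alpha * tau_g
    if criterion == "ECTP":
        return tau_g + (2 * k + 2) * tau_f + n * tau_i  # from Table 2
    return tau_g + tau_f + n * tau_i  # RAND: assumed single pass of g and f

# In the small-alpha regime, ECTP's scoring overhead dominates its budget:
print(total_cost(n=5, k=100, alpha=0.01))                    # 307.0
print(total_cost(n=5, k=100, alpha=0.01, criterion="RAND"))  # 106.0
# With large alpha, n * tau_i dominates and the gap becomes negligible:
print(total_cost(n=5, k=100, alpha=100.0))                   # 50302.0
print(total_cost(n=5, k=100, alpha=100.0, criterion="RAND")) # 50101.0
```

This illustrates the kink and the regime change described above: the fixed scoring cost of ECTP is paid before any concept is fixed, so its relative penalty shrinks as $n$ or $\alpha$ grows.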
We remark, however, that this is a hypothetical case: $\alpha$ will be much greater than 1 in realistic settings, as summoning a domain expert for intervention costs far more than a forward pass of a neural network.

We also experiment with changing $\beta$ to control the relative cost between $\tau_{g}$ and $\tau_{f}$. We find that when $\beta$ is small, ECTP can perform poorly while RAND can be effective, as RAND requires only a single forward pass of $f$ to make the final prediction. Furthermore, we extend this analysis to the CUB experiment with more realistic settings in which $\tau_{g}$ and $\tau_{f}$ are set based on the wall-clock times of running each model, and $\tau_{i}$ is set based on the actual concept annotation time provided in the dataset. For space reasons, all of these results are deferred, with detailed analysis, to Appendix C.

# 4.3. Analyzing Intervention Levels

As seen in Figure 5a, most criteria still remain more effective than RAND in group-wise single $(\mathrm{G} + \mathrm{S})$ intervention. Specifically, RAND needs $39.3\%$ (11 out of 28) of the groups to be intervened on to decrease the task error by half, while UCP needs only $25.0\%$ (7 out of 28). However, CCTP does not outperform RAND this time. We also find a similar pattern for the batch case $(\mathrm{G} + \mathrm{B})$ (see Figure 14 in Appendix D). We suspect that calculating the mean of the scores loses some discriminative information for some selection criteria, and perhaps a different surrogate needs to be designed.

In addition, we find that group-wise intervention is in general less effective than its individual counterpart with the same budget of intervention expense (see Figure 5b). Intuitively, correcting concepts within the same group may not provide as rich information as selecting concepts across different groups with the same intervention count.
Nonetheless, we remark that group-wise intervention can potentially be cost-effective when concepts within the same group are mutually exclusive, which depends on how the concepts are annotated during the creation of a dataset.

![](images/9cd7ff894c874d55ada1e9927e66e8b0fe21b429e32c0bf9786ac9ddcd69cc7e.jpg)
(a) $\mathrm{G + S}$ level

![](images/610c68a19b30450f10b834280a9d84a1c97f3c767a0e3f8a698d94b903740a63.jpg)
(b) $(\mathrm{I} + \mathrm{S})$ vs. $(\mathrm{G} + \mathrm{S})$
Figure 5: Comparing the effects of different intervention levels using the CUB dataset. Here, the intervention counts denote the number of intervened groups for G and the average number of intervened concepts for B, respectively. We fix the selection criterion to UCP in (b) and (d); all other cases are provided in Appendix D.

![](images/a498a01dc2babb1fb747357cfaf6bcdbef909b27000ab40f65d0aee0ae3e5a41.jpg)
(c) I+B level

![](images/3764caef1f0035e3a3846159f937b7a2972420dea2ba55ac72ec26e9f3e8c6f4.jpg)
(d) $(\mathrm{I} + \mathrm{S})$ vs. $(\mathrm{I} + \mathrm{B})$

The proposed concept selection criteria also remain effective for batch intervention (B), as seen in Figure 5c. Interestingly, batch intervention turns out to be more effective than single (S) as well, as seen in Figure 5d. This trend holds for the other criteria besides UCP, except for CCTP, and extends to group-wise batch $(\mathrm{G} + \mathrm{B})$ intervention (see Appendix D for full results).

# 4.4. Considering Training and Conceptualization

Effect of training scheme As seen in Figure 6a, intervention is in general most effective under the IND training scheme. We believe this is because $f$ is not trained with the ground-truth concept labels in the case of SEQ and JNT(+P), so fixing concept predictions for these schemes may not work as well. We also find that EUDTP becomes much less effective under SEQ or JNT than the other alternatives and actually underperforms RAND (see Appendix E).
Hence, the effectiveness of a criterion can depend on which training strategy is used, implying the need for comprehensive evaluations of newly developed criteria.

For the SkinCon dataset, however, intervening on the concepts under the SEQ, JNT, and JNT+P strategies actually increases the average task error regardless of the concept selection criterion. Specifically, training under JNT already achieves low task error, and applying intervention does not help reduce it further (see Figure 6b). We hypothesize that this is due to some inherent characteristics of the dataset as well as the limited concepts provided in the bottleneck, which negatively influence making correct task predictions with binarized concepts. This can potentially correspond to the known issue of information leakage in CBMs (Mahinpei et al., 2021; Havasi et al., 2022).

Effect of conceptualization We find that HARD and SAMP may begin with higher task error than SOFT, as expected. However, when making use of the developed concept selection criteria such as UCP, the gap between these conceptualization methods closes much faster with more intervention compared to RAND, as seen in Figures 6c and 6d.

This result is consistent across different training strategies and datasets (see Appendix F).

# 5. Analyzing Intervention with Synthetic Data

We have observed that intervention can often yield different results across datasets. Specifically, intervening on all concepts decreases the task error down to $0\%$ on CUB, whereas the decrease is much smaller on SkinCon, where the average task error remains high at around $29\%$. Also, the relative order of effectiveness between concept selection criteria can vary. We find it difficult to unravel these findings when experimenting only on real datasets as in previous work (Koh et al., 2020; Chauhan et al., 2022; Sheth et al., 2022; Zarlenga et al., 2022).
To provide an in-depth analysis, we develop a framework to generate synthetic datasets based on three different causal graphs that control the following factors: input noise, hidden concepts, and concept diversity.

# 5.1. Generating Synthetic Data

CASE 1: Noisy input Real-world data contains a lot of random noise coming from various sources (e.g., lighting). We construct a causal graph to consider this case, where Gaussian noise is added to the input data (see Figure 7a).

CASE 2: Hidden concept When a subset of concepts is unknown or hidden, the target prediction must be made with incomplete information, as the bottleneck layer captures only the available concepts. We design a causal graph for this case and generate synthetic data in which some concepts that are necessary to make correct target predictions are hidden on purpose (see Figure 7b).

CASE 3: Diverse concept Examples within the same class can have different values for the same concept in realistic settings. For instance, simple concept-level noise or fine-grained sub-classes (e.g., 'black swan' and 'white swan' for the 'swan' class) can produce such diverse concept values. We construct a causal graph to generate data in which concept values can vary probabilistically and inputs

![](images/e187cc99eb2f19732359c451f8446bb59ed10b478a2f084ddfddc5a79a0bcb3a.jpg)
(a) Training on CUB

![](images/cedabfecb39443fb71f98ef6160101f5ac7e1317789366c7ad994cdda302cca4.jpg)
(b) Training on SkinCon

![](images/180eb1c6953f7ffd7227497dd4bca3a480ea625de929ec17effdc92c4ff9c8fd.jpg)
(c) Conceptualization: RAND

![](images/0fd230aef138c394758e570d0207dc458028be5bb34c215f4a7c89eb7224f71e.jpg)
(d) Conceptualization: UCP

![](images/7b9918e690404b30724e61135d82998a3f83af191f53f007d88616e7eadebef7.jpg)
Figure 6: Comparing the effects of different training strategies (a, b) and conceptualization methods (c, d).
We choose EUDTP as the concept selection criterion for (a, b) and SkinCon as the dataset for (c, d). We provide all other results in Appendices E and F.
(a) Noisy input

![](images/835c7e1c941808c82deacec20cc59550cd4c2b128309f89615f0ff2e2495431d.jpg)
(b) Hidden concept

![](images/2e2e0989c6fe72fc8e37afb407ae887c86f6c83a9aba384ea407023f2c680c0c.jpg)
(c) Diverse concept

![](images/f9f973a280c94f8ece4cbd01110d3279b7301d7047bb80c8edc56d00f6369eec.jpg)
Figure 7: Causal graphs for generating synthetic datasets. $z$, $h$, and $d$ represent the factors of input noise, hidden concepts, and concept diversity, respectively. The full details of the data generation process are provided in Appendix A.3.
(a) Noisy input

![](images/2e86296930d81549f536bbf522a619091c8f39df9f048108a214be02b5795306.jpg)
(b) Hidden concept

![](images/0a41a669678465f03a02cd7c376e1de46d1b5e46536f55733b3185d0b195e334.jpg)
(c) Diverse concept

![](images/12b2b66d27c62361c1e577758a66261f8f4ac3c4f2eb1d9757cba56c7b6d74b9.jpg)
Figure 8: Effects of data on intervention with UCP. Each panel varies one factor: the variance of the noise $(z)$, the ratio of hidden concepts $(h)$, or the probability of perturbing the concept values $(d)$, respectively.
(a) $\gamma = 1$
Figure 9: Intervention effectiveness with different sub-group sizes $\gamma$. The relative order of effectiveness between selection criteria changes significantly according to $\gamma$.

![](images/ee046aacf361a66e2142bd425eb4b9503be3d7d748cf4835f840237885496187.jpg)
(b) $\gamma = 10$

are produced according to these concepts (see Figure 7c).

# 5.2. Results

First, we display the effect of input noise in Figure 8a. The initial task error increases with the level of noise $(z)$ due to the poor performance of concept prediction.
Specifically, we need 17 interventions to halve the task error with extremely noisy data $(z = 2.0)$, while correcting only 2 concepts yields the same effect at a moderate noise level $(z = 0.5)$. In contrast, the initial task error is already near $0\%$ at an extremely small noise level $(z = 0.1)$, where no intervention is needed at all.

Next, we evaluate the effect of hidden concepts in Figure 8b. The final task error increases with more hidden concepts, and thus intervention becomes less effective. Specifically, the error remains high, around $13\%$, when half of the concepts are hidden ($h = 50\%$), while it reaches zero without hidden concepts ($h = 0\%$). This is because the target prediction cannot be made with complete information when hidden concepts exist, which is often the case when constructing CBMs in realistic settings.

We also find that generating more diverse concept values within the same class increases both the initial and final task errors, making intervention less effective (see Figure 8c). This is because learning discriminative representations for target prediction becomes much more difficult. To circumvent this issue, many previous works (Koh et al., 2020; Zarlenga et al., 2022; Havasi et al., 2022) preprocess the data so as to force concepts within the same class to have the same value. However, this may have an adverse effect on model fairness, as we discuss in Section 6.

Furthermore, we discover that different sub-group sizes can change the relative ordering of intervention effectiveness between concept selection criteria. Here, we define a sub-group as a set of classes with similar concept values and denote its size by $\gamma$. Interestingly, EUDTP becomes less effective with a small group size ($\gamma = 1$), even compared to RAND, whereas it becomes the most effective (except for LCP) when $\gamma = 10$, as seen in Figure 9.
We believe this is because, when $\gamma$ is large, EUDTP decreases the uncertainty in the target prediction, so classes within the same sub-group are classified more easily.

![](images/be92c758e927d457bcfad38f04ce7fb057e296a91393bd3da0733e8d94955cd0.jpg)
(a) RAND

![](images/ab911e0dec44ada62641531ca6d1de43bc7bb76e680d7bb83b25ce68269f46c6.jpg)
(b) UCP

![](images/e49740e04b84727ff7705120f5f7063ccaf54cda834af842b05c9cedfd1f2fa4.jpg)
(c) ECTP

![](images/71ef9889f04068cc96cf9eef3f7f4bf1f15421ac46108f3e506d5325d68ac90a.jpg)
Figure 10: Effect of NVC on task error. Intervention is done on the CUB images for which concept prediction is $100\%$ accurate, and yet NVC keeps increasing the task error. NVC O and NVC X correspond to the results with and without NVC, respectively.
(a) Advantage of MV

![](images/9f8a3f30c78585622dc412a57873c97c242648c8b5069c1134fb94cc3ca4b808.jpg)
(b) Disadvantage of MV
Figure 11: Effects of majority voting (MV) on target prediction. MV O and MV X correspond to the results with and without MV, respectively. (a) While it helps decrease task error on intervention, (b) it yields biased predictions against minorities.

This result indicates that the behavior of a criterion can vary significantly across different datasets and again demonstrates the necessity of a comprehensive evaluation of newly developed criteria. We refer to Appendix G for results on the effect of some other factors on intervention.

# 6. Pitfalls of Intervention Practices

So far we have focused on analyzing the effectiveness of the intervention procedure in many aspects. In this section, we add another dimension, namely the reliability and fairness of the current intervention practices, to help advance toward trustworthy and responsible machine learning models.

# 6.1. Nullifying Void Concepts Increases Task Error

Does intervention always help target prediction?
Contrary to expectation, we find that the answer is no; in fact, intervention can actually increase the task error. To verify this, we set up an ablation experiment using the CUB dataset in which intervention is conducted only on the cases for which all concepts are predicted correctly with zero error; ideally, intervention should have no effect in this case. As presented in Figure 10, the results are quite the opposite. The task error keeps increasing with more intervention, and the prediction error reaches more than seven times that with no intervention.

This catastrophic failure turns out to be due to nullifying void concepts (NVC), a common practice of handling unsure concepts by simply setting them to zero. For example, just because the wing of a bird is invisible does not necessarily mean that the concept 'wing color:black' should be zero-valued; the bird can belong to the class 'Black_Tern', whose wing color is actually black. We identify that this seemingly plausible tactic can in fact mistreat invalid concepts, and NVC intervention should therefore be avoided for such cases.

# 6.2. Majority Voting Neglects Minorities

Another common practice often taken by the community (Koh et al., 2020; Zarlenga et al., 2022; Havasi et al., 2022) is to coalesce concept values within the same class by forcing them to their majority votes (MV). As a preprocessing step, this tactic can dramatically improve the task performance, as we demonstrate in Figure 11a. This is unsurprising given our Synthetic experiment results in Section 5.2, where we show that high concept diversity can deteriorate the target prediction performance.

However, it turns out that MV can have a negative impact on model fairness by ignoring minority samples.
As a concrete example, consider the CUB dataset, in which the majority of images of the 'black tern' class have black underparts while some minority samples have white underparts. When MV is used in this case, we find that the underparts-color predictions for the minorities are misguided to be black, the majority-voted value, so as to yield the correct target prediction; if the minorities instead follow their own pre-MV concept values, the target prediction can be incorrect (see Figure 11b). Intervention can even aggravate the situation, since it can decrease the task error for the minorities only when the predicted concept value is changed to the majority-voted value (black). In this sense, target predictions become biased toward the majority when MV is used.

This scenario can be problematic in the real world when the dataset contains sensitive concepts, e.g., gender or race. Consider the case where the target task is to predict the occupation of a person based on their appearance and 'race' is included in the concepts. If most 'physicians' are Caucasian and we apply MV in this case, then an 'Asian physician' can be correctly classified only when they are predicted to be Caucasian; otherwise, the target prediction would be incorrect. While this might be somewhat exaggerated, we remark that this kind of situation can happen in practice. Besides, MV also forces us to misconduct intervention at test time with the majority votes, which are neither available in practice nor considered fair. We defer addressing the trade-off between performance and fairness to future work.

# 7. Discussion and future work

In this section, we discuss our key findings, their potential implications for the community, and possible future research directions.
In-depth analysis of intervention procedure We design and conduct a wide variety of new experiments from scratch to investigate the effectiveness of the current intervention procedure of CBMs. In a nutshell, our results reveal that not only does the specific way of selecting which concepts to intervene on matter, but so do how to intervene, on what data, and under which environments, to the degree of drastically changing the results. Future work can extend our analysis to theoretically investigate intervention strategies in more detail.

Benchmark for evaluating concept selection methods Our evaluation protocol can serve as a way to evaluate any newly developed concept selection method for its effectiveness. We also provide a framework to generate synthetic data with which the effectiveness of proposed methods can be tested under various circumstances.

Analyzing the cost of intervention The effectiveness of concept selection criteria can change when the cost of intervention is reflected (see Section 4.2.1). Specifically, we find that a criterion that otherwise evaluates strongly can become less effective in hypothetical cases, depending on the size of the models or the state of the labor market. This indicates that the choice of concept selection criterion should reflect the available budget and environment at test time, especially in extreme environments.

Identifying the effect of data on intervention The effectiveness of the intervention procedure can vary quite significantly depending on some unknown characteristics of real-world datasets (see Section 5). For example, intervention becomes less effective on datasets containing more hidden concepts or more diverse concept values within the same class. Practitioners should take this aspect into account when developing and deploying CBMs, since intervention may not work as effectively as expected.
Reliability and fairness of intervention While the current trend is mostly focused on developing new intervention methods, we discovered somewhat unexpected and previously unknown issues that can be critical for ensuring the reliability and fairness of the intervention procedure (see Section 6). More specifically, intervention can sometimes increase the task error, contrary to expectation, and can have a negative impact on model fairness by biasing predictions toward the majority. We call for future work to address these problems before CBMs are blindly adopted in practice.

Extension of our work to other settings We remark that we have focused only on classification tasks, considering the characteristics of the real-world datasets used in the literature (Koh et al., 2020; Zarlenga et al., 2022; Havasi et al., 2022). Extending the intervention strategies to regression problems with real-valued concepts or targets is a promising avenue for future work. Analyzing intervention under more diverse settings could also be interesting, such as introducing architectural variations with hard autoregressive models (Havasi et al., 2022) or concept embedding models (Zarlenga et al., 2022).

# 8. Conclusion

The intervention procedure of CBMs has received little attention in previous work despite its critical impact on practitioners. In this work, we study a wide range of aspects of the procedure and provide an in-depth analysis for the first time in the literature. Specifically, we develop various concept selection criteria that can be used for intervention and demonstrate that their behaviors can vary quite significantly based on an array of factors, including intervention levels, cost, training, conceptualization, and data characteristics. We also find several pitfalls in the current practices that need careful attention before CBMs are deployed in realistic settings.
We plan to investigate more effective and reliable intervention strategies in future work.

# Acknowledgement

This work was partly supported by Institute of Information & communications Technology Planning & Evaluation (IITP) grant funded by the Korea government (MSIT) (No.2019-0-01906, Artificial Intelligence Graduate School Program (POSTECH) and No.2022-0-00959, (part2) Few-Shot learning of Causal Inference in Vision and Language for Decision Making) and National Research Foundation of Korea (NRF) grant funded by the Korea government (MSIT) (2022R1C1C1013366, 2022R1F1A1064569, RS-2023-00210466).

# References

Alvarez Melis, D. and Jaakkola, T. Towards robust interpretability with self-explaining neural networks. *NeurIPS*, 2018.

Bahadori, M. T. and Heckerman, D. E. Debiasing concept-based explanations with causal analysis. *ICLR*, 2021.

Chauhan, K., Tiwari, R., Freyberg, J., Shenoy, P., and Dvijotham, K. Interactive concept bottleneck models. *AAAI*, 2022.

Daneshjou, R., Vodrahalli, K., Novoa, R. A., Jenkins, M., Liang, W., Rotemberg, V., Ko, J., Swetter, S. M., Bailey, E. E., Gevaert, O., et al. Disparities in dermatology AI performance on a diverse, curated clinical image set. *Science Advances*, 2022a.

Daneshjou, R., Yuksekgonul, M., Cai, Z. R., Novoa, R. A., and Zou, J. SkinCon: A skin disease dataset densely annotated by domain experts for fine-grained debugging and analysis. *NeurIPS Datasets and Benchmarks Track*, 2022b.

Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., and Fei-Fei, L. ImageNet: A large-scale hierarchical image database. *CVPR*, 2009.

Esteva, A., Kuprel, B., Novoa, R. A., Ko, J., Swetter, S. M., Blau, H. M., and Thrun, S. Dermatologist-level classification of skin cancer with deep neural networks. *Nature*, 2017.

Ghorbani, A., Wexler, J., Zou, J. Y., and Kim, B. Towards automatic concept-based explanations. *NeurIPS*, 2019.
Groh, M., Harris, C., Soenksen, L., Lau, F., Han, R., Kim, A., Koochek, A., and Badri, O. Evaluating deep neural networks trained on clinical images in dermatology with the Fitzpatrick 17k dataset. *CVPR*, 2021.

Guo, Y. and Greiner, R. Optimistic active-learning using mutual information. *IJCAI*, 2007.

Havasi, M., Parbhoo, S., and Doshi-Velez, F. Addressing leakage in concept bottleneck models. *NeurIPS*, 2022.

Jordan, M. I. and Mitchell, T. M. Machine learning: Trends, perspectives, and prospects. *Science*, 2015.

Kim, B., Wattenberg, M., Gilmer, J., Cai, C., Wexler, J., Viegas, F., et al. Interpretability beyond feature attribution: Quantitative testing with concept activation vectors (TCAV). *ICML*, 2018.

Koh, P. W., Nguyen, T., Tang, Y. S., Mussmann, S., Pierson, E., Kim, B., and Liang, P. Concept bottleneck models. *ICML*, 2020.

LeCun, Y., Bengio, Y., and Hinton, G. Deep learning. *Nature*, 2015.

Lewis, D. D. A sequential algorithm for training text classifiers: Corrigendum and additional data. *ACM SIGIR Forum*, 1995.

Lewis, D. D. and Catlett, J. Heterogeneous uncertainty sampling for supervised learning. *Machine Learning Proceedings*, 1994.

Mahinpei, A., Clark, J., Lage, I., Doshi-Velez, F., and Pan, W. Promises and pitfalls of black-box concept learning models. *Workshop on XAI, ICML*, 2021.

Marconato, E., Passerini, A., and Teso, S. GlanceNets: Interpretable, leak-proof concept-based models. *NeurIPS*, 2022.

Margeloiu, A., Ashman, M., Bhatt, U., Chen, Y., Jamnik, M., and Weller, A. Do concept bottleneck models learn as intended? *Workshop on Responsible AI, ICLR*, 2021.

Nevitt, M., Felson, D., and Lester, G. The osteoarthritis initiative. *Protocol for the cohort study*, 2006.

Sawada, Y. and Nakamura, K. Concept bottleneck model with additional unsupervised concepts. *IEEE Access*, 2022.

Selvaraju, R. R., Cogswell, M., Das, A., Vedantam, R., Parikh, D., and Batra, D. Grad-CAM: Visual explanations from deep networks via gradient-based localization. *CVPR*, 2017.
Settles, B., Craven, M., and Ray, S. Multiple-instance active learning. *NeurIPS*, 2007.

Sheth, I., Rahman, A. A., Sevyeri, L. R., Havaei, M., and Kahou, S. E. Learning from uncertain concepts via test time interventions. *Workshop on Trustworthy and Socially Responsible Machine Learning, NeurIPS*, 2022.

Szegedy, C., Vanhoucke, V., Ioffe, S., Shlens, J., and Wojna, Z. Rethinking the inception architecture for computer vision. *CVPR*, 2016.

Wah, C., Branson, S., Welinder, P., Perona, P., and Belongie, S. The Caltech-UCSD Birds-200-2011 dataset. 2011.

Yuksekgonul, M., Wang, M., and Zou, J. Post-hoc concept bottleneck models. *ICLR*, 2023.

Zarlenga, M. E., Barbiero, P., Ciravegna, G., Marra, G., Giannini, F., Diligenti, M., Shams, Z., Precioso, F., Melacci, S., Weller, A., et al. Concept embedding models. *NeurIPS*, 2022.

Zhou, B., Khosla, A., Lapedriza, A., Oliva, A., and Torralba, A. Learning deep features for discriminative localization. *CVPR*, 2016.

# A. Datasets

# A.1. CUB

CUB (Wah et al., 2011) is the standard dataset used to study CBMs in previous works (Koh et al., 2020; Zarlenga et al., 2022; Havasi et al., 2022; Sawada & Nakamura, 2022). There are 5994 and 5794 examples in the train and test sets in total, where each example consists of a triplet (image $x$, concepts $c$, label $y$) of a bird species. All concepts have binary values; for example, 'wing color:black' for a given bird image is either 1 (true) or 0 (false). Following previous works (Koh et al., 2020; Sawada & Nakamura, 2022; Zarlenga et al., 2022), we perform so-called majority voting as pre-processing so that images of the same class always have the same concept values; for example, if more than half of the crow images have a true value for the concept 'wing color:black', this process converts the concept labels of all images belonging to the crow class to the same true value.
Since the original concept labels are noisy, this procedure helps to increase overall performance. However, it can potentially harm model fairness in some cases, as we address in Section 6.2. We also remove concepts that are too sparse (i.e., concepts that are present in fewer than 10 classes), which leaves 112 of the original 312 concepts. As suggested in Koh et al. (2020), including these sparse concepts in the concept layer makes their values hard to predict because positive training examples are too scarce.
+
+# A.2. SkinCon
+
+SkinCon (Daneshjou et al., 2022b) is a medical dataset that can be used to build interpretable machine learning models. The dataset provides densely annotated concepts for 3230 images from the Fitzpatrick 17k skin disease dataset (Groh et al., 2021), yielding a triplet of (image $x$ , concepts $c$ , disease label $y$ ) of a skin lesion for each example. Since training and test sets are not specified in the SkinCon dataset, we randomly split it into training ($70\%$), validation ($15\%$), and test ($15\%$) sets. The dataset provides class labels at various granularities, ranging from individual disease labels with 114 classes to binary labels indicating whether the lesion is benign or malignant. Following the experiments with Post-hoc CBM (Yuksekgonul et al., 2023) introduced in Daneshjou et al. (2022b), we use the binary labels for the target task and only the 22 concepts that are present in at least 50 images. Since the binary class labels are highly imbalanced ( $87\%$ vs. $13\%$ ), we train the target predictor $f$ with a weighted loss and use the average per-class error as the metric instead of the overall error for a fair comparison.
+
+# A.3. Synthetic dataset
+
+Algorithm 1 Generating synthetic data
+1: Sample $p_i \sim \mathcal{N}(\mu_\alpha, \sigma_\alpha)$ for $i = \{1, 2, \dots, k\}$
+2: for group $\ell = 0, 1, \dots, k / \gamma - 1$ do
+3: Sample $\zeta_i \sim \mathcal{U}_{[0,1]}$ and set $\ell_i = \mathbb{1}[\zeta_i \geq p_i]$ for $i = \{1, 2, \dots, k\}$
+4: for $y = 1, \dots, \gamma$ do
+5: Sample $i_y \in \{1, 2, \dots, k\}$ uniformly at random without replacement
+6: Set $c_i^j = \neg \ell_i$ if $i = i_y$ and $c_i^j = \ell_i$ otherwise (class index $j = \gamma \ell + y$ )
+7: end for
+8: end for
+9: Generate $W_x \in \mathbb{R}^{k \times k}$ with each element distributed according to the normal distribution $\mathcal{N}(0, \sigma_w)$
+10: for class $j = 1, \dots, k$ do
+11: Generate $\nu$ samples for class $j$ as $x = W_x \cdot c^j + z$ where $z \sim \mathcal{N}(0, \sigma_z)$
+12: end for
+
+We generate synthetic data following Algorithm 1 to test the effect of dataset characteristics on intervention. We first assume that all examples within the same class share the same concept values and denote the $i$ -th concept value of the $j$ -th class as $c_i^j$ . For simplicity, we also assume that the dimensionality of the inputs and the number of target classes both equal the number of concepts $k$ , following Bahadori & Heckerman (2021). In line 1, $\mu_{\alpha}$ controls the overall sparsity level of the concepts (the proportion of concepts with value 0), and $p_i = P(c_i = 0)$ is the probability of the $i$ -th concept taking value 0. We set $\mu_{\alpha}$ to 0.8, since $80\%$ of the concepts have value 0 in the CUB dataset. We then divide the classes into $k / \gamma$ sub-groups of size $\gamma$ so that classes within the same group have similar concept values. Note that the classes within each sub-group differ by only two concept values, as seen in line 6.
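A minimal NumPy sketch of Algorithm 1 follows; the function and variable names are ours, and we clip the sampled $p_i$ to $[0,1]$, an implementation detail the pseudocode leaves implicit.

```python
import numpy as np

def generate_synthetic_data(k=100, gamma=2, nu=100, mu_a=0.8,
                            sigma_a=0.1, sigma_w=0.1, sigma_z=0.8, seed=0):
    rng = np.random.default_rng(seed)
    # Line 1: per-concept probability p_i = P(c_i = 0), clipped to [0, 1]
    p = np.clip(rng.normal(mu_a, sigma_a, size=k), 0.0, 1.0)
    C = np.zeros((k, k), dtype=int)  # C[j] = concept vector c^j of class j
    for ell in range(k // gamma):  # line 2: one base vector per sub-group
        base = (rng.uniform(size=k) >= p).astype(int)     # line 3
        flips = rng.choice(k, size=gamma, replace=False)  # line 5
        for y in range(gamma):                            # lines 4-7
            j = gamma * ell + y
            C[j] = base
            C[j, flips[y]] = 1 - base[flips[y]]  # line 6: negate one concept
    # Line 9: random linear map from concept space to input space
    W = rng.normal(0.0, sigma_w, size=(k, k))
    # Lines 10-12: nu noisy inputs per class, x = W_x c^j + z
    X = np.concatenate([C[j] @ W.T + rng.normal(0.0, sigma_z, size=(nu, k))
                        for j in range(k)])
    y_labels = np.repeat(np.arange(k), nu)
    return X, y_labels, C
```

Since the $\gamma$ flip indices are drawn without replacement, any two classes in the same sub-group share a base concept vector and differ in exactly two concept values, matching line 6.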
We set $\gamma = 2, k = 100, \nu = 100, \sigma_{\alpha} = 0.1, \sigma_{w} = 0.1, \sigma_{z} = 0.8$ unless stated otherwise. We randomly divide the generated examples into training ($70\%$), validation ($15\%$), and test ($15\%$) sets.
+
+To generate data with hidden concepts, we randomly pick $h\%$ of the concepts and remove them from the concept layer of the CBMs. For training the models and for the intervention experiments, we only consider the remaining concepts. In addition, a new dataset with diverse concepts can easily be produced by introducing a single variable $d$ and reversing the value of each concept of the previously generated dataset with probability $d$ . In other words, $d$ is a factor that introduces the variations in concept-target pairs that can exist in real-world datasets; it differs from $z$ , which controls the noise level of the input.
+
+# B. Architectures and Training
+
+CUB For the CUB dataset, we use Inception-v3 (Szegedy et al., 2016) pretrained on ImageNet (Deng et al., 2009) for the concept predictor $g$ and a 1-layer MLP for the target predictor $f$ , following the standard setup of Koh et al. (2020). Both $g$ and $f$ are trained with the same training hyperparameters as in Koh et al. (2020). We used $\lambda = 0.01$ for JNT and JNT+P, with the value taken directly from Koh et al. (2020). For the experiments without majority voting (Figure 30 in Appendix H), we use Inception-v3 pretrained on ImageNet for $g$ and a 2-layer MLP for $f$ with a hidden dimensionality of 200 so that it can describe more complex functions. We searched for the best hyperparameters for both $g$ and $f$ over the same sets of values as in Koh et al. (2020). Specifically, we tried initial learning rates of [0.01, 0.001], either a constant learning rate or decaying the learning rate by 0.1 every [10, 15, 20] epochs, and weight decays of [0.0004, 0.00004].
After finding the optimal hyperparameter values, i.e., those with the best validation accuracy, we retrained the networks with the same values over 5 different random seeds on the combined training and validation sets.
+
+SkinCon For the SkinCon dataset, we fine-tune Deepderm (Daneshjou et al., 2022a), an Inception-v3 network trained on the data of Esteva et al. (2017), for the concept predictor $g$ , and train a 1-layer MLP for the target predictor $f$ . We select the hyperparameters that achieve the best performance on the validation set (in terms of overall accuracy for $g$ and average per-class accuracy for $f$ ). Specifically, we tried initial learning rates of [0.0005, 0.001, 0.005], and either a constant learning rate or decaying the learning rate by 0.1 every 50 epochs. We did not use weight decay here. For the JNT and JNT+P training strategies, we tried concept loss weights $\lambda$ of [0.01, 0.1, 1.0, 5.0], but all of these values failed to decrease the task error under intervention. As for the CUB dataset, we trained the networks with the best hyperparameters over 5 different random seeds on both the training and validation sets.
+
+Synthetic For the synthetic datasets, we use a 3-layer MLP with hidden layer sizes $\{100,100\}$ for $g$ and a single linear layer for $f$ , similar to Zarlenga et al. (2022). For all experiments, we tried constant learning rates of [0.01, 0.1, 1.0] without learning rate decay or weight decay, and trained the networks with the best hyperparameters over 5 different random seeds on the training sets. We used $\lambda = 0.1$ for JNT and JNT+P, with the value determined by grid search over [0.01, 0.1, 1.0].
+
+# C. More on Reflecting Cost of Intervention
+
+As $\beta$ becomes smaller, RAND becomes more effective relative to ECTP (see Figure 12).
This is because, with small $\beta$ , the $\tau_{g}$ term is marginalized within the cost of ECTP, which is $\mathcal{O}(\tau_g + (2k + 2)\tau_f + n\tau_i)$ ; the intervention effectiveness of ECTP is therefore increasingly penalized as $k$ grows, compared to RAND, which only requires a single forward pass of $f$ .
+
+In addition, we experiment with a more realistic setting for the CUB where we set $\tau_{i}$ to the concept annotation time (in seconds) provided with the dataset and $\tau_{g},\tau_{f}$ to the wall-clock inference times. Specifically, we set $\tau_{i}\approx 0.7$ by dividing the annotation time by the number of concepts within the group and taking the average. The values $\tau_{g}\approx 18.7\times 10^{-3}$ and $\tau_{f}\approx 0.03\times 10^{-3}$ are obtained by measuring the inference time on an RTX 3090 GPU and averaging over 300 repetitions. In this setting, $\tau_{i}$ dominates the others, i.e., $\alpha$ is large, and the relative effectiveness of the criteria remains the same, as seen in Figure 13. Nonetheless, we remark that the result can change with different model sizes or GPU environments in extreme cases. We also considered a more detailed case where we do not take the average of the $\tau_{i}$ 's (concept annotation times) all at once but rather per intervention step, reflecting the differing intervention costs of different concepts.
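To make the cost comparison concrete, here is a small sketch using the measured values above. The cost formulas follow the $\mathcal{O}(\cdot)$ expressions in the text with their hidden constants taken literally, which is an assumption on our part, as are the function names and the example values of $k$ and $n$.

```python
# Wall-clock estimates reported above (seconds)
TAU_I = 0.7        # average concept annotation time per concept
TAU_G = 18.7e-3    # one forward pass of the concept predictor g
TAU_F = 0.03e-3    # one forward pass of the target predictor f

def cost_ectp(k: int, n: int) -> float:
    """Cost of ECTP: one pass of g, 2k + 2 passes of f, n annotations."""
    return TAU_G + (2 * k + 2) * TAU_F + n * TAU_I

def cost_rand(n: int) -> float:
    """Cost of RAND: a single pass of f plus n annotations."""
    return TAU_F + n * TAU_I

k, n = 28, 5  # illustrative: 28 concept groups, 5 concepts annotated
print(f"ECTP: {cost_ectp(k, n):.4f}s, RAND: {cost_rand(n):.4f}s")
# With these taus, the n * TAU_I annotation term dominates both costs,
# so the criteria's relative effectiveness is essentially unchanged.
```

This mirrors the observation above: when annotation time dominates ($\alpha$ large), the computational overhead of ECTP is negligible; when $\tau_i$ shrinks (small $\beta$), the $(2k+2)\tau_f$ term penalizes ECTP relative to RAND.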
+
![](images/acaa0fc90af8d9e2d4af320831715f1e4801457c0283c281ca1a15dcca8313cc.jpg)
+(a) $\beta = 1$
+
+![](images/38f215b6ca06bf74fecfb09674c1e265113de0cb40ac6badff45042618c9cb6e.jpg)
+(b) $\beta = 3$
+
+![](images/e59c935d89f2e8c812580037368e904c4bd9b8e9b2164107abc5f66a41611c80.jpg)
+(c) $\beta = 5$
+
+![](images/158094a11f3d06b68f802e1196b3be95606cb9a36371d1a6896755b6ff1b7c6c.jpg)
+(d) $\beta = 10$
+
+![](images/16228f1ae5cc2b92b6357af8c315291240c4a6dc6bf3a5331979780d60ad0dba.jpg)
+(e) $\beta = 100$
+
+![](images/35212f9430c0a981277cba8d09f6b9147ae11efa227797ef80c12f021fe58812.jpg)
+Figure 12: Effect of $\beta$ on intervention. We fix $\tau_{i} = 1, \alpha = 1, k = 100$ . ECTP, the concept selection criterion that performed strongly in the preceding evaluations, becomes less effective as $\beta$ decreases.
+(a) $\tau_{i}$ set as the average among all concepts
+
+![](images/c77f9a275597c32ada3b07a08cd19e9e548e8a8363ad4b22c56c59a496d69562.jpg)
+(b) $\tau_{i}$ set as the average per each intervention step
+Figure 13: Comparison between concept selection criteria in terms of the intervention cost for the CUB. Here, cost represents the seconds for concept annotation time and inference times for $g$ , $f$ .
+
+The relative rankings between RAND and ECTP do not change, but interestingly, we found that ECTP first selects the concepts that require higher intervention costs (i.e., more concept annotation time).
+
+# D.
More Results on the Effect of Intervention Levels on Intervention + +![](images/53abcdad1bdbded2c19d7a3d0a6c59d33e5d11b8c1faed796fb61c7d80e9aea7.jpg) +(a) $\mathrm{I} + \mathrm{S}$ + +![](images/006cceaeb25f0e428175fe6d8b14f429b4e429517e167a40a61e87c6550fccf9.jpg) +(b) $G + S$ + +![](images/8e60679c752fb74608bc6c6f4563c7bfb6fa49c75bc7578d6169eaaff3a67c21.jpg) +(c) $\mathrm{I} + \mathrm{B}$ + +![](images/912d032e82b3f946cd0dbdef4c805d009d6e814f0eadd1dbc57552a5cfdcc817.jpg) +(d) $\mathrm{G} + \mathrm{B}$ + +![](images/78d1a514607d161a12d32bb1108c2fc40205b8d878a0d153fbdd8727b401b84d.jpg) +Figure 14: Comparison between intervention criteria under different levels for the CUB. +(a) RAND + +![](images/e796ae91b795f1f67e2caa9d7927ba319257c5a00a299f539e80b9818fc0890a.jpg) +(b) UCP +Figure 15: Comparison between I+S vs. G+S for the CUB. + +![](images/de99cd0108eb9d5ed48447b7feaa9b93203a3b885f8a0c1278aa159109d855a3.jpg) +(c) LCP + +![](images/eaedc4ba07aa64885f39fa5afbb310077f650d163a27a5022e85275feec9a10b.jpg) +(d) CCTP + +![](images/47dd53fc79a8ff45c37d96062f9900cc83237aa8bd0e9b72148b1a8ce43fdbe8.jpg) +(e) ECTP + +![](images/181e9021678a96f818686b695a0143a9cc10f8877b3b2360908f8f729815719f.jpg) +(f) EUDTP + +The comparison between I+S and G+S using different concept selection criteria is presented in Figure 15. 
Individual + +![](images/181f2a9d2bd0f3a583ed921b7cc8dca6914b36f46fa8c5b2d0ae91ac1977bd0b.jpg) +(a) RAND + +![](images/a0e6a12e6e1859a2fec4e757dfb4ffbe18dd9de784ed6c55a1ce6a5b5e27be70.jpg) +(b) UCP + +![](images/56317d05db35ea67dd1f392619335231953a612f9e457238fa8a114fbb4c610a.jpg) +(c) LCP + +![](images/491b5c2f10cdc468b0ab2a9e370edc0a189652b243cdf0fdd10e742a3f7e7c4b.jpg) +(d) CCTP + +![](images/b8571dc164e2e68de4b24edf42b044a98d14806daf2c6ebc86691bf81a9ea186.jpg) +(e) ECTP + +![](images/25c92da71397ca8492882b32794058c0dd4b16a032912d6fcf0976de346c528f.jpg) +(f) EUDTP + +![](images/67f74821a73eeaf83c260313ef2bd054b183138be1a0674ec3530bad0509bda5.jpg) +(a) RAND + +![](images/7238dd903d6ad01a4957c4167c12d0cb9cd8f92aea08dea13b393dc723d73b5e.jpg) +(b) UCP + +![](images/2d3f905d54efb841e7fc449cd2cfa9f7d66140061456840dad7be18d18bcd60a.jpg) +Figure 16: Comparison between I+B vs. G+B for the CUB. +(c) LCP + +![](images/4d6ea4c9c38562fde0b325d601c8a21cef17fb3faf269dad7cc3fd1206284c9d.jpg) +(d) CCTP + +![](images/35ab8e65cbe1313e23025f3e8c7b675769a1a144f7e73bc1eaafd208ccfd4a9e.jpg) +(e) ECTP + +![](images/45913e4077a717020af758870ac9e57cb25344fe78d16df3779d16323299ea5c.jpg) +(f) EUDTP + +![](images/489e62ed38526af8e710a7318422551927e1241b7d0a9e1e0fb409d5619b77f4.jpg) +(a) RAND +Figure 18: Comparison between G+S vs. G+B for the CUB. For G+B, each point is plotted when the average number of intervened concepts per image first exceeds each integer value. + +![](images/4964518a859949f744c9ecd1a9f047ef7d0794b4e947a21a778e447791a31139.jpg) +Figure 17: Comparison between I+S vs. I+B for the CUB. 
+(b) UCP + +![](images/b9be29f2f82db70bb7619d1ec2b381ff77912f44ca22884450fd6677b9357d93.jpg) +(c) LCP + +![](images/11eae1f4d5b2d118345daf695a82ebf50b9a882c57e1308abe878e3c38fc4705.jpg) +(d) CCTP + +![](images/7acbe3a48a2ca744e1ef96fa30ddfdb79f5c9e7c1faf4ee4134aaae2e1cf89b3.jpg) +(e) ECTP + +![](images/3b237b105a172efd66d94b0da0ecc3b27bf505188da76ec7b03d90458c2db417.jpg) +(f) EUDTP + +intervention is in general more effective than group-wise intervention except for RAND criterion. We find similar results for the comparison between I+B and G+B (see Figure 16). We also note that CCTP becomes less effective in G levels as seen in Figure 14. + +Batch intervention is either more effective or at least as competitive as single intervention across different concept selection criteria as seen in Figure 17. In Figure 18, we observe that $\mathrm{G + B}$ are also more effective than $\mathrm{G + S}$ level. CCTP does not show much difference between S and B. It is because the target predictor $f$ is a simple linear layer for our experiments and thus $\frac{\partial f_j}{\partial\hat{c}_i} = w_{ij}$ is fixed for all examples where $w_{ij}$ is the weight of $i$ -th concept to $j$ -th class in $f$ . + +# E. More Results on the Effect of Training Strategies on Intervention + +![](images/0a6f3fb3f7ae3743172e918d7e6c8ef72627c73ae0dc1c7e1773609f0f6d9bc8.jpg) +(a) IND +Figure 19: Comparison between concept selection criteria using different training strategies for the CUB. For JNT, JNT + P, we present the results when $\lambda = 0.01$ . + +![](images/e1f116cfd5533f1123de20f7dfec2d989cac3538fb0eb0ef519deb35554c28ab.jpg) +(b)SEQ + +![](images/fb8e3f90fb83fea48431e30ff4a44851c5bdb2971f0aeb10f92e8ead1955d224.jpg) +(c) JNT + +![](images/9ce240a4b842cc1aade3dcc39328217f0cc4b9202da0d7fd8f895ff748779673.jpg) +(d) JNT + P + +The results for the CUB dataset are presented in Figure 19. 
+
![](images/afff0b0194ae5d8bd2ecacac7afa379ca2fdc251a22c69bf009b2fcaa5549d57.jpg)
+(a) RAND
+
+![](images/568e3b0fedf70565b148f950df0e671fb411ef7817c7fbeff67bc180f3b1e875.jpg)
+(b) UCP
+
+![](images/8e14a4b6c92e8963d8f44146a54ce2bb5ba8b39befb34e76f7d0caee03b2ddf1.jpg)
+(c) LCP
+
+![](images/0c0e6db3becc5f4c12e7b857274c2f7119a8d627bb4386e4d3188d9a41fdfee8.jpg)
+(d) CCTP
+
+![](images/1ad00de543abe49d1b7536fd974f2ecaae078aa355dcd8dfb75bdfaf5fabeb4e.jpg)
+(e) ECTP
+
+![](images/708a56f6a5e7b254019865e2aef8220db8ea45df4535f2554fb44103e84a48f3.jpg)
+(f) EUDTP
+
+![](images/c1a955cdadb5245c803c80d88a4332c6a8664d623a5b31cb5858d40766a7e558.jpg)
+Figure 20: Comparison between different training strategies for a fixed concept selection criterion for the CUB.
+(a) IND
+
+![](images/94e0e85f8f84775bbdb2e6e9d70c9c97f0bfe0a7dfa4137ac2a02847c9976b3b.jpg)
+(b) SEQ
+
+![](images/9a9390cccdb4c18b0f43d414717c87b871517c903e51591104a31e20e8f050e2.jpg)
+(c) JNT
+
+![](images/46722caa064e35b97cc618836b99f0441506bca8309ecd19fb34c839d276a793.jpg)
+(d) JNT + P
+
+![](images/baebe8aeffb2c2d9f150e95904f0f639e49ab0452a0062a38dd62cd153662e6d.jpg)
+Figure 21: Comparison between concept selection criteria using different training strategies for the Synthetic. For JNT, JNT + P, we present the results when $\lambda = 0.1$ .
+(a) RAND
+Figure 22: Comparison between different training strategies for a fixed concept selection criterion for the Synthetic.
+
+![](images/2e640a3700c0c8bd805b8620c89c338d12e260134e6ba80e215cf32ecba44bd3.jpg)
+(b) UCP
+
+![](images/3fa9267cef5a3709cd788200905be2b6da142c209101f77eded43a65073e8287.jpg)
+(c) LCP
+
+![](images/ae7be9355e2ed13cf54a875b82185cdbde91dae0ad1ca33556f10290e82e346d.jpg)
+(d) CCTP
+
+![](images/c8dbb4e30743bf2c9483f48380118fc2fddf163ce631c9ffb91b66e9fe08bab5.jpg)
+(e) ECTP
+
+![](images/98ca3b071417655ef05646b3e7065b2cac8c92496f1f5cc2ea1bbccddab81698.jpg)
+(f) EUDTP
+
+Note that EUDTP becomes even less effective than RAND in SEQ and JNT.
For the synthetic datasets, EUDTP also becomes much less effective as in the CUB dataset (see Figure 21). Note that when using JNT or JNT+P training schemes, LCP may not be the best choice as the target predictor $f$ is not trained with the ground-truth concept values and thus rectifying the concept with the highest prediction loss does not always guarantee the decrease in the task error. Comparisons between different training strategies for a fixed concept selection criterion in the CUB and Synthetic are presented in Figures 20 and 22. + +# F. More Results on the Effect of Conceptualization Methods on Intervention + +![](images/095c3644c9dcef0a54e25c122b86b072be92be054e7a050999c3f4c76d0680bd.jpg) +(a) RAND +Figure 23: Intervention results under different conceptualization methods using various concept selection criteria. Here, we used IND training strategy for the CUB. + +![](images/d83f293434f77f38f4cfaaea77cc9969eea478f62222688b0bfba2977cd343d4.jpg) +(b) UCP + +![](images/417320f299aa26e8aa06be3045983d06c8fdd3c1770c0065099c5c3a1ecb17e4.jpg) +(c) LCP + +![](images/9f1c9e32c3e725216217e5d437329db13298326d6b4ffa13f3d6dfdf2b0ea423.jpg) +(d) CCTP + +![](images/2f681d90b7744e838e32ca1f6c879181c1e837ccd00544aa757ec920c5c9e07e.jpg) +(e) ECTP + +![](images/ae908223ff74f3d598e9abf26dbf54c411285985dd35e0d01681ca9372f19824.jpg) +(f) EUDTP + +Across all the datasets and concept selection criteria, utilizing effective criteria can reduce the gap between different conceptualization strategies much faster than RAND criterion as seen in Figures 23 to 27. 
+ +![](images/4452223408a7db9001a4fbae532efe7b76e511a73ff875c89e11d3dbea58b649.jpg) +(a) RAND + +![](images/bfadd3374b735fbdbb884a79b1059899d04a439a75efe4ae967cbe41bea3219e.jpg) +(b) UCP + +![](images/c6d61d265fd74e88d6f366db53c1bb8cf33fb6ee2163094c0506ec0820eec2b7.jpg) +(c) LCP + +![](images/f921794b955f51e25782aed57f0b713a2430deeebbd1e8553832875c48a2282c.jpg) +(d) CCTP + +![](images/53aa8ee4e292873175f0a4bc9845aa673ad4033cfe603eb1ed52476444123e27.jpg) +(e) ECTP + +![](images/e79683049f44c6ff5d69d713866f4e3994c72d69e83e26a70447e89515d2104e.jpg) +(f) EUDTP + +![](images/acf7a994a940317428515c38849dfe1ea4c8622bf16268c51227abe736bd2126.jpg) +Figure 24: Intervention results under different conceptualization methods using various concept selection criteria. Here, we used JNT + P training strategy for the CUB. +(a) RAND + +![](images/10ad7803ec40f58146721bff7532ef56a3c64792b79822f577e8912db5ef5db7.jpg) +(b) UCP + +![](images/6fbe8b0405357043aab0dca457fdccc93fd39c6e86b679e0893f24ec20af0707.jpg) +(c) LCP + +![](images/1cfc370fd103bdeba49539dec14953c8fae06d93792e5eb37782cb2c22ceb8df.jpg) +(d) CCTP + +![](images/558f5c4a53161d95d62f08b0d20a5a82fe5e664e7adfc2d892cf674318e93fd9.jpg) +(e) ECTP + +![](images/e433da7abc541538ea0e750e89b1cf88db191a90a02bb2cbdc383a8fc156970b.jpg) +(f) EUDTP + +![](images/f9227c033380ed54ac57687814dfa793515dc71ec12fb7a12c5b178cf64a52d4.jpg) +Figure 25: Intervention results under different conceptualization methods using other concept selection criteria. Here, we used IND training strategy for the SkinCon. 
+(a) RAND + +![](images/c2200120fcdadd03e274a16738f9870c2fe86a05cf9ede9f1556411a62d08383.jpg) +(b) UCP + +![](images/c410a1c95d06da27ce8d97d9db573abaf3f21712105024f2c0fcdf601724e190.jpg) +(c) LCP + +![](images/83f1cc73a585b77b460456f6961b8eb9609745fb48701ae7770de18ce08aa20a.jpg) +(d) CCTP + +![](images/6353672af8cc5b85443ca5ec6486c79ab9c8dd778dcab4df402ba9dd15aa66a9.jpg) +(e) ECTP + +![](images/59f31c29dea0b5b2b7db3d76c883bbe0d349520c5e16f95df8a4c3ff42726d73.jpg) +(f) EUDTP + +![](images/caa7657b0acbf0148a630c0592489d91375ad8193fc508f51722f3c8664e8a41.jpg) +Figure 26: Intervention results under different conceptualization methods using various concept selection criteria. Here, we used IND training strategy for the synthetic dataset. +(a) RAND +Figure 27: Intervention results under different conceptualization methods using various concept selection criteria. Here, we used JNT + P training strategy for the synthetic dataset. + +![](images/cac16e3ad358b7bdae706b23711b4029f09ff67774ec6ec68d799d8ab4ad7417.jpg) +(b) UCP + +![](images/78d1a90a8ff4d2c0a94f8d5421be467677b67822b9dd65646cbe656b58133497.jpg) +(c) LCP + +![](images/b70509bc1f89848ed9219dd1b486b5a43bef4624104fe57fa7f2bbccfa6ac834.jpg) +(d) CCTP + +![](images/76f35e0147df16cf69ddfda050b05cf3a8a3e0e7141eb160a3fbc23e2b94629e.jpg) +(e) ECTP + +![](images/b0ef60fea46dc9b338d3dfd085db46a7a00173e8b8bb35fc1bf18ca5c57a5110.jpg) +(f) EUDTP + +# G. More Results on the Effect of Data on Intervention + +We find that intervention on data with extremely high input noise or extremely high diversity makes developed concept selection criteria less effective in general with a larger gap from LCP (see Figure 28). Specifically, UCP becomes less effective than other criteria in these cases. We assume that concept prediction uncertainty is rather uncorrelated with concept prediction loss when the concept predictor $g$ achieves very low accuracy. 
+
+![](images/0fbcd34d101770b3271820afea8a93033057080cabffdcf931856eba7f42b420.jpg)
+(a) Data with extremely high input noise
+
+![](images/0e6850f4f137a7fb1d43a52515d75e2cd8d682323170f53251a227bc90c7b32d.jpg)
+(b) Data with extremely high concept diversity
+
+![](images/a57cc036c6f13581175a75d2c6effde80f9e4fc8014065a361fcda0d8eed5b2e.jpg)
+Figure 28: Intervention results on the data with extremely high input noise (variance of 2.0) or concept diversity (perturbation probability of $30\%$ ) respectively. In these cases, the proposed concept selection criteria work less effectively.
+(a) CCTP with different concept sparsity levels
+
+![](images/65344e5ae882b3ae74696c55146077707c5fc3953694ad409738c20878510048.jpg)
+(b) UCP with different subgroup sizes
+Figure 29: (a) CCTP becomes more effective with a higher concept sparsity level. (b) Final task error increases, but intervention becomes more effective with larger sub-group sizes.
+
+We also evaluate the effect of the concept sparsity level, i.e., the probability of each concept having value 0, using the CCTP criterion. Note that intervention becomes less effective as the sparsity level gets closer to $50\%$ , as seen in Figure 29a. To understand why, recall that this criterion aggregates the contribution of each concept to the target label prediction. When the sparsity level is high and most concepts have value 0, the target prediction is determined by only a few concepts, and CCTP works effectively by first intervening on the concept with the highest contribution. In contrast, as the level approaches $50\%$ , the target prediction is determined by almost half of the concepts, and the contribution to the target prediction is no longer a discriminative feature of the concepts, decreasing the effectiveness of the criterion. Furthermore, we observe that the final task error increases but intervention becomes more effective with a large sub-group size $\gamma$ (see Figure 29b).
Specifically, 12 interventions are needed to halve the task error for the data with $\gamma = 1$ , but correcting only 5 concepts achieves the same effect for $\gamma = 10$ . This is because, when $\gamma$ is large, intervention can decrease the task error much faster for misclassified examples by distinguishing them from similar classes.
+
+# H. More Results on Fairness of Majority Voting
+
+![](images/19dd29bbc1ea688cf532a2c07371ce8482c532d52e58ca6014a94ee18d11ed5e.jpg)
+(a) RAND
+Figure 30: Comparison of test-time intervention results with and without using majority voting.
+
+![](images/6cd62907c76141925e70b069fa232068c7c8cd7d4b8cba0a3bad7cfa2df2adb9.jpg)
+(b) UCP
+
+![](images/54b78cdd13f91b464c0ff3a4818659aa96ea41c9af1598dbc634d6b8e83a1b77.jpg)
+(c) LCP
+
+When we do not use majority voting on the CUB dataset, intervention instead increases the task error, as seen in Figure 30. Specifically, intervention does not decrease the task error at all with RAND or UCP. Even with the LCP criterion, intervention does not reduce the task error as much as when majority voting is used, and the error starts to increase after about 10 concepts have been intervened on. See Appendix B for the training details.
\ No newline at end of file diff --git a/acloserlookattheinterventionprocedureofconceptbottleneckmodels/images.zip b/acloserlookattheinterventionprocedureofconceptbottleneckmodels/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..6881ebeff3fca546bb2b0a6a68ccfc9eb6d0cb09 --- /dev/null +++ b/acloserlookattheinterventionprocedureofconceptbottleneckmodels/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:5e5d1e28de37fcf405dec9f5549cd9a56185dcc946015f1cd587c2cf77546882 +size 1377800 diff --git a/acloserlookattheinterventionprocedureofconceptbottleneckmodels/layout.json b/acloserlookattheinterventionprocedureofconceptbottleneckmodels/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..c4bb3b65904f4a8994d9c6984dc0379973131fb3 --- /dev/null +++ b/acloserlookattheinterventionprocedureofconceptbottleneckmodels/layout.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:a0ea9436111b0600312b62628922c4fe3499e086af8c67b8c5ae492660e8e940 +size 1014165 diff --git a/acompleteexpressivenesshierarchyforsubgraphgnnsviasubgraphweisfeilerlehmantests/44a1d0cf-c56a-42a0-8785-7b5f1d6c9d78_content_list.json b/acompleteexpressivenesshierarchyforsubgraphgnnsviasubgraphweisfeilerlehmantests/44a1d0cf-c56a-42a0-8785-7b5f1d6c9d78_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..07e58a205aba96943b3f289f3c62ef117f2bc503 --- /dev/null +++ b/acompleteexpressivenesshierarchyforsubgraphgnnsviasubgraphweisfeilerlehmantests/44a1d0cf-c56a-42a0-8785-7b5f1d6c9d78_content_list.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:6b83944708c2780dfe1951ffdd863a19178d8cdcfad8b8b2b8238f06228fb870 +size 498360 diff --git a/acompleteexpressivenesshierarchyforsubgraphgnnsviasubgraphweisfeilerlehmantests/44a1d0cf-c56a-42a0-8785-7b5f1d6c9d78_model.json 
b/acompleteexpressivenesshierarchyforsubgraphgnnsviasubgraphweisfeilerlehmantests/44a1d0cf-c56a-42a0-8785-7b5f1d6c9d78_model.json new file mode 100644 index 0000000000000000000000000000000000000000..a873c020794fa223c783fdfe077c5fc1ab8adccc --- /dev/null +++ b/acompleteexpressivenesshierarchyforsubgraphgnnsviasubgraphweisfeilerlehmantests/44a1d0cf-c56a-42a0-8785-7b5f1d6c9d78_model.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:a5972f1d6e41e3aeb44023d4788047ffd26d54d9579d33f4e558cc2a592602e0 +size 604218 diff --git a/acompleteexpressivenesshierarchyforsubgraphgnnsviasubgraphweisfeilerlehmantests/44a1d0cf-c56a-42a0-8785-7b5f1d6c9d78_origin.pdf b/acompleteexpressivenesshierarchyforsubgraphgnnsviasubgraphweisfeilerlehmantests/44a1d0cf-c56a-42a0-8785-7b5f1d6c9d78_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..00e78bb26e07dd4f0242e886cc4d371915a71703 --- /dev/null +++ b/acompleteexpressivenesshierarchyforsubgraphgnnsviasubgraphweisfeilerlehmantests/44a1d0cf-c56a-42a0-8785-7b5f1d6c9d78_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:33e7618a08120eff6bcd3d64d0902a25fa8111204976f28a054e10939c66f7f7 +size 3379085 diff --git a/acompleteexpressivenesshierarchyforsubgraphgnnsviasubgraphweisfeilerlehmantests/full.md b/acompleteexpressivenesshierarchyforsubgraphgnnsviasubgraphweisfeilerlehmantests/full.md new file mode 100644 index 0000000000000000000000000000000000000000..79d27efa780f8fa4492b0916ebfa194d8bbd5f3a --- /dev/null +++ b/acompleteexpressivenesshierarchyforsubgraphgnnsviasubgraphweisfeilerlehmantests/full.md @@ -0,0 +1,2252 @@ +# A Complete Expressiveness Hierarchy for Subgraph GNNs via Subgraph Weisfeiler-Lehman Tests + +Bohang Zhang1 Guhao Feng*23 Yiheng Du*4 Di He1 Liwei Wang15 + +# Abstract + +Recently, subgraph GNNs have emerged as an important direction for developing expressive graph neural networks (GNNs). 
While numerous architectures have been proposed, there is so far only a limited understanding of how the various design paradigms differ in terms of expressive power, nor is it clear what design principle achieves maximal expressiveness with minimal architectural complexity. To address these fundamental questions, this paper conducts a systematic study of general node-based subgraph GNNs through the lens of Subgraph Weisfeiler-Lehman Tests (SWL). Our central result is to build a complete hierarchy of SWL with strictly growing expressivity. Concretely, we prove that any node-based subgraph GNN falls into one of six SWL equivalence classes, among which SSWL achieves the maximal expressive power. We also study how these equivalence classes differ in terms of their practical expressiveness, such as encoding graph distance and biconnectivity. In addition, we give a tight expressivity upper bound for all SWL algorithms by establishing a close relation with localized versions of the WL and Folklore WL (FWL) tests. Overall, our results provide insights into the power of existing subgraph GNNs, guide the design of new architectures, and point out their limitations by revealing an inherent gap with the 2-FWL test. Finally, experiments demonstrate that SSWL-inspired subgraph GNNs can significantly outperform prior architectures on multiple benchmarks despite great simplicity.
+
+*Equal contribution 1National Key Laboratory of General Artificial Intelligence, School of Intelligence Science and Technology, Peking University 2School of EECS, Peking University 3Pazhou Lab 4Yuanpei College, Peking University 5Center for Data Science, Peking University. Correspondence to: Bohang Zhang, Di He, Liwei Wang.
+
+Proceedings of the $40^{th}$ International Conference on Machine Learning, Honolulu, Hawaii, USA. PMLR 202, 2023. Copyright 2023 by the author(s).
+
+# 1. Introduction
+
+Graph neural networks (GNNs), especially equivariant message-passing neural networks (MPNNs), have become the dominant approach for learning on graph-structured data (Gilmer et al., 2017; Hamilton et al., 2017; Kipf & Welling, 2017; Veličković et al., 2018). Despite their great simplicity and scalability, one major drawback of MPNNs lies in their limited expressiveness (Xu et al., 2019; Morris et al., 2019). This has motivated a variety of subsequent works to develop provably more expressive architectures, among which subgraph GNNs have emerged as a new trend (Cotta et al., 2021; You et al., 2021; Zhang & Li, 2021; Bevilacqua et al., 2022; Zhao et al., 2022a; Papp & Wattenhofer, 2022; Frasca et al., 2022; Qian et al., 2022; Huang et al., 2022).
+
+Broadly speaking, a general (node-based) subgraph GNN first transforms an input graph $G$ into a collection of subgraphs, each of which is associated with a unique node in $G$ . It then computes a feature representation for each node of each subgraph through a series of equivariant message-passing layers. Finally, it outputs a representation of graph $G$ by pooling all these subgraph node features. Subgraph GNNs have received great attention partly due to their elegant structure, enhanced expressiveness, message-passing-based inductive bias, and superior empirical performance (Frasca et al., 2022; Zhao et al., 2022a).
+
+One central question in subgraph GNNs lies in how to design simple yet expressive equivariant layers. Starting from the most basic design, where each node only interacts with its local neighbors in its own subgraph (Cotta et al., 2021; Qian et al., 2022), recent works have developed a rich family of (cross-graph) aggregation operations (Bevilacqua et al., 2022; Zhao et al., 2022a; Frasca et al., 2022). In particular, Frasca et al. (2022) gave a unified characterization of the design space of subgraph GNNs based on 2-IGN (Maron et al., 2019b;a), which contains dozens of atomic aggregations.
However, it is generally unclear whether the added aggregations can theoretically improve a model's expressiveness as it becomes increasingly complex. So far, a systematic investigation and comparison of various possible aggregation schemes in terms of expressiveness is still lacking. More fundamentally, for both theory and practice, is there a canonical design principle of subgraph GNNs that achieves the maximal expressiveness with the least model complexity? + +A complete hierarchy of subgraph GNNs. In this paper, we comprehensively study the aforementioned questions through the lens of Subgraph Weisfeiler-Lehman Tests (SWL), a class of color refinement algorithms abstracted from subgraph GNNs in distinguishing non-isomorphic graphs. Each SWL consists of three ingredients: (a) graph generation policy, (b) message-passing aggregation scheme, and (c) final pooling scheme. Among commonly used graph generation policies, we mainly focus on the canonical node marking SWL as it theoretically achieves the best expressive power despite simplicity (Proposition 4.2). Our central result is to build a complete hierarchy for all node marking SWL with various aggregation schemes and pooling schemes. Concretely, we prove that any node-based subgraph GNN falls into one of the six SWL equivalence classes and establish strict expressivity inclusion relationships between different classes (see Corollary 4.7, Theorem 7.1, and Figure 1). In particular, our result highlights that, by including symmetrically two basic local aggregations, the corresponding SWL (called SSWL) has theoretically achieved the maximal expressive power. Our result thus provides a clear picture of the power and limitation of existing architectures, settling a series of open problems raised in Bevilacqua et al. (2022); Frasca et al. (2022); Qian et al. (2022); Zhao et al. (2022a) (see Section 8 for discussions). + +Related to practical expressiveness. 
We provide concrete evidence that subgraph GNNs with better theoretical expressivity are also stronger in terms of their ability to compute fundamental graph properties. Inspired by the recent work of Zhang et al. (2023), we prove that the PSWL (defined in Corollary 4.7) is strictly more powerful than a variant of the Generalized Distance WL proposed in their paper, which incorporates both the shortest path distance and the hitting time distance (Definition A.1). Our result unifies and extends the findings in Zhang et al. (2023) and implies that all SWL algorithms stronger than PSWL are able to encode both distance and biconnectivity properties. In contrast, we give counterexamples to show that neither of these basic graph properties can be fully encoded in vanilla SWL. + +Localized (Folklore) WL tests. Similar to the classic WL and Folklore WL tests (Weisfeiler & Leman, 1968; Cai et al., 1992), node marking SWL corresponds to a natural class of computation models for graph canonization (Immerman & Lander, 1990). All SWL algorithms have $O(n^{2})$ memory complexity and $O(nm)$ computational complexity (per iteration) for a graph of $n$ vertices and $m$ edges. Owing to the improved computational efficiency over classic 2-FWL/3-WL (i.e. $O(n^{3})$ ), a better understanding of what can/cannot be achieved under this complexity class is arguably an important research question. We answer this question by first establishing a close relation between SWL and localized versions of 2-WL (Morris et al., 2020) and 2-FWL tests, both of which have the same complexity as SWL. We then + +![](images/ffed57ed2ad8de7e5006556b3024f3b533d305cdac565ca2d5b3d0107bdebc3f.jpg) +Figure 1. Expressiveness hierarchy of different WL algorithms. + +derive a number of key results: (i) The strongest SSWL is as powerful as localized 2-WL. This builds a surprising link between the works of Frasca et al. (2022) and Morris et al. (2020). 
(ii) Despite the same complexity, there is an inherent gap between localized 2-WL and localized 2-FWL. (iii) There is an inherent gap between localized 2-FWL and classic 2-FWL. Consequently, our results settle a fundamental open problem raised in Frasca et al. (2022) about whether subgraph GNNs can match the power of 2-FWL, and further imply that subgraph GNNs do not even reach the maximal expressiveness in the model class of $O(nm)$ complexity. This reveals an intrinsic limitation of the subgraph GNN model class and points out a new direction for improvement. + +Technical Contributions. It is actually quite challenging to find a principled class of hard graphs that can reveal the expressivity gaps between different SWL/FWL-type algorithms. As a main technical contribution, we develop a novel analysis framework based on pebbling games, inspired by Cai et al. (1992): we considerably extend the game originally designed for FWL to all types of SWL and localized 2-WL/2-FWL algorithms. The game viewpoint offers deep insights into the power of different algorithms, through which we can skillfully construct a collection of nontrivial counterexample graphs to prove all strict separation results in this paper. We believe the proposed games and counterexamples may be of independent value for future work. + +Practical Contributions. Our theoretical insights can also guide the design of simple, efficient, yet powerful subgraph GNN architectures. In particular, the proposed SSWL corresponds to an elegant design principle with only 3 atomic equivariant aggregation operations, yet the resulting model is strictly more powerful than all prior node-based subgraph GNNs. Empirically, we verify SSWL-based subgraph GNNs on several benchmarks such as substructure counting and molecular property prediction, showing that they can significantly outperform prior architectures despite fewer model parameters and greater simplicity. + +# 2. Formalizing Subgraph GNNs + +Notations.
We use $\{\}$ to denote sets and $\{\{\}\}$ to denote multisets. The cardinality of a (multi)set $\mathcal{S}$ is denoted as $|\mathcal{S}|$ . In this paper, we consider finite, undirected, simple, connected graphs, and we use $G = (\mathcal{V}_G,\mathcal{E}_G)$ to denote such a graph with vertex set $\mathcal{V}_G$ and edge set $\mathcal{E}_G$ . Each edge in $\mathcal{E}_G$ is expressed as a set $\{u,v\}$ containing two distinct vertices in $\mathcal{V}_G$ . Given a vertex $u$ , denote its neighbors as $\mathcal{N}_G(u)\coloneqq \{v\in \mathcal{V}_G:\{u,v\} \in \mathcal{E}_G\}$ . Similarly, the $k$ -hop neighbors of $u$ are denoted as $\mathcal{N}_G^k(u)\coloneqq \{v\in \mathcal{V}_G:\mathrm{dis}_G(u,v)\leq k\}$ , where $\mathrm{dis}_G(u,v)$ is the shortest path distance between $u$ and $v$ . In particular, $\mathcal{N}_G^1(u) = \mathcal{N}_G(u)\cup \{u\}$ . + +A general subgraph GNN processes an input graph $G$ in three steps: (i) generating subgraphs, (ii) equivariant message-passing, and (iii) final pooling. Below, we separately describe each of these components. + +Node-based graph generation policies. The first step is to generate a collection of subgraphs of $G$ based on a predefined graph generation policy $\pi$ and initialize node features in each subgraph. For node-based subgraph GNNs, there are a total of $|\mathcal{V}_G|$ subgraphs, and each subgraph is uniquely associated with a specific node $u\in \mathcal{V}_G$ , so that $\pi$ can be expressed as a mapping of the form $\pi (G) = \{(G^u,\tilde{h}_G^u):u\in \mathcal{V}_G\}$ . Here, all subgraphs $G^{u} = (\mathcal{V}_{G},\mathcal{E}_{G}^{u})$ share the vertex set $\mathcal{V}_G$ but may differ in the edge set $\mathcal{E}_G^u$ . The mapping $\tilde{h}_G^u:\mathcal{V}_G\to \mathbb{R}^d$ defines the initial node features, i.e., $\tilde{h}_G^u (v)$ is the initial feature of vertex $v$ in subgraph $G^{u}$ .
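As a concrete illustration, a graph generation policy of this form can be sketched in a few lines. The following is a minimal sketch of the node marking policy on the original graph, assuming graphs are given as vertex lists and edge pairs; the function name and data layout are illustrative, not from the paper.

```python
def node_marking_policy(vertices, edges):
    """Illustrative pi(G): each subgraph G^u keeps the original edge set,
    and the initial feature of v in G^u only marks whether v == u."""
    subgraphs = {}
    for u in vertices:
        marks = {v: 1 if v == u else 0 for v in vertices}  # h~_G^u(v)
        subgraphs[u] = (set(map(frozenset, edges)), marks)  # (E_G^u, h~_G^u)
    return subgraphs

# A path graph 0-1-2: three subgraphs, one per marked node.
pi = node_marking_policy([0, 1, 2], [(0, 1), (1, 2)])
```

Other policies only change the edge-set component, e.g. node deletion would drop the edges incident to $u$ instead of copying $\mathcal{E}_G$.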
+ +Various graph generation policies have been proposed in prior works, which differ in the choice of $\mathcal{E}_G^u$ and $\tilde{h}_G^u$ . For example, common choices of $\mathcal{E}_G^u$ are: (i) using the original graph $(\mathcal{E}_G^u = \mathcal{E}_G)$ ; (ii) node deletion $(\mathcal{E}_G^u = \mathcal{E}_G \setminus \{\{u, v\} : v \in \mathcal{N}_G(u)\})$ , i.e., deleting all edges incident to node $u$ ; and (iii) $k$ -hop ego network $(\mathcal{E}_G^u = \{\{v, w\} \in \mathcal{E}_G : v, w \in \mathcal{N}_G^k(u)\})$ . To initialize node features, there are also three popular choices: (i) constant node features, where $\tilde{h}_G^u(v)$ is the same for all $u, v \in \mathcal{V}_G$ ; (ii) node marking, where $\tilde{h}_G^u(v)$ depends only on whether $u = v$ ; and (iii) distance encoding, where $\tilde{h}_G^u(v)$ depends on the shortest path distance between $u$ and $v$ , i.e., $\mathrm{dis}_G(u, v)$ . + +In this paper, we mainly consider the canonical node marking policy on the original graph due to its simplicity. Importantly, we will show in Section 4.1 that it already achieves the maximal expressiveness among all the above policies. + +Equivariant message-passing. The main backbone of a subgraph GNN consists of $L$ stacked equivariant message-passing layers. For each network layer $l \in [L]$ , the feature of each node $v$ in each subgraph $G^u$ , denoted as $h_G^{(l)}(u,v)$ , is computed. At the beginning, $h_G^{(0)}(u,v) = \tilde{h}_G^u(v)$ . Following Frasca et al. (2022), we study arguably the most general design space, which incorporates a broad class of possible message-passing aggregation operations. + +Definition 2.1.
A general subgraph GNN layer has the form + +$$ +h_G^{(l+1)}(u,v) = \sigma^{(l+1)}\left(\mathsf{op}_1(u,v,G,h_G^{(l)}), \dots, \mathsf{op}_r(u,v,G,h_G^{(l)})\right), +$$ + +where $\sigma^{(l + 1)}$ is an arbitrary (parameterized) continuous function, and each atomic operation $\mathsf{op}_i(u,v,G,h)$ can take any of the following expressions: + +- Single-point: $h(u, v)$ , $h(v, u)$ , $h(u, u)$ , or $h(v, v)$ ; +- Global: $\sum_{w\in \mathcal{V}_G}h(u,w)$ or $\sum_{w\in \mathcal{V}_G}h(w,v)$ ; +- Local: $\sum_{w\in \mathcal{N}_{G^u}(v)}h(u,w)$ or $\sum_{w\in \mathcal{N}_{G^v}(u)}h(w,v)$ . + +We assume that $h(u,v)$ is always present in some $\mathsf{op}_i$ . + +It is easy to see that any GNN layer defined above is permutation equivariant. The two most basic atomic operations are $h(u,v)$ and $\sum_{w\in \mathcal{N}_{G^u}(v)}h(u,w)$ , which are applied in all prior subgraph GNNs. Without further operations, the vanilla subgraph GNN layer has the form + +$$ +h_G^{(l+1)}(u,v) = \sigma^{(l+1)}\left(h_G^{(l)}(u,v), \sum_{w \in \mathcal{N}_{G^u}(v)} h_G^{(l)}(u,w)\right). +$$ + +Besides, several works have explored other aggregation operations, and we list a few representative examples below. + +Example 2.2. (i) ESAN (Bevilacqua et al., 2022) additionally uses the global aggregation $\sum_{w\in \mathcal{V}_G}h(w,v)$ . (ii) GNN-AK (Zhao et al., 2022a) additionally uses the single-point operation $h(v,v)$ . It also uses the global aggregation $\sum_{w\in \mathcal{V}_G}h(u,w)$ when $u = v$ . (iii) SUN (Frasca et al., 2022) additionally uses $h(u,u)$ , $h(v,v)$ , and both types of global aggregations. + +Final pooling layer. The last step is to output a graph representation $f(G)$ based on all the collected features $\{h_G^{(L)}(u,v):u,v\in \mathcal{V}_G\}$ . There are two ways to implement this, which differ in the order of pooling along the two dimensions $u,v$ .
The first approach, called vertex-subgraph pooling, first pools all node features in each subgraph $G^u$ to obtain the subgraph representation, i.e., $f^{\mathsf{S}}(G,u)\coloneqq \sigma^{\mathsf{S}}\left(\sum_{v\in \mathcal{V}_G}h_G^{(L)}(u,v)\right)$ , and then pools all subgraph representations to obtain the final output $f(G)\coloneqq \sigma^{\mathsf{G}}(\sum_{u\in \mathcal{V}_G}f^{\mathsf{S}}(G,u))$ . Here, $\sigma^{\mathsf{S}}$ and $\sigma^{\mathsf{G}}$ can be any parameterized functions. Most prior works follow this paradigm. In contrast, the second approach, called subgraph-vertex pooling, first generates node representations $f^{\mathsf{V}}(G,v) \coloneqq \sigma^{\mathsf{V}}(\sum_{u\in \mathcal{V}_G}h_G^{(L)}(u,v))$ for each $v \in \mathcal{V}_G$ , and then pools all these node representations to obtain the graph representation, i.e., $f(G) \coloneqq \sigma^{\mathsf{G}}(\sum_{v\in \mathcal{V}_G}f^{\mathsf{V}}(G,v))$ . This approach is adopted in Qian et al. (2022). + +# 3. Subgraph Weisfeiler-Lehman Test + +To formally study the expressive power of subgraph GNNs, in this section we introduce the Subgraph WL Test (SWL), a class of color refinement algorithms for graph isomorphism testing. Let $G = (\mathcal{V}_G,\mathcal{E}_G)$ and $H = (\mathcal{V}_H,\mathcal{E}_H)$ be two graphs. As with subgraph GNNs, SWL first generates for each graph a collection of subgraphs and initializes color mappings based on a graph generation policy $\pi$ . We denote the results as $\{(G^u,\tilde{\chi}_G^u):u\in \mathcal{V}_G\}$ and $\{(H^u,\tilde{\chi}_H^u):u\in \mathcal{V}_H\}$ , where $\tilde{\chi}$ is the color mapping, which can be constant, node marking, or distance encoding (according to Section 2). + +Given graph $G$ , let $\chi_G^{(0)}(u,v) \coloneqq \tilde{\chi}_G^u(v)$ for $u,v \in \mathcal{V}_G$ . SWL then refines the color of each $(u,v)$ pair using various types of aggregation operations defined as follows: + +# Definition 3.1.
A general SWL iteration has the form + +$$ +\chi_G^{(t+1)}(u,v) = \operatorname{hash}\left(\operatorname{agg}_1(u,v,G,\chi_G^{(t)}), \dots, \operatorname{agg}_r(u,v,G,\chi_G^{(t)})\right), +$$ + +where hash is a perfect hash function and each $\mathsf{agg}_i(u,v,G,\chi)$ can take any of the following expressions: + +- Single-point: $\chi(u, v)$ , $\chi(v, u)$ , $\chi(u, u)$ , or $\chi(v, v)$ ; +- Global: $\{\{\chi(u, w) : w \in \mathcal{V}_G\}\}$ or $\{\{\chi(w, v) : w \in \mathcal{V}_G\}\}$ ; +- Local: $\{\{\chi(u,w):w\in \mathcal{N}_{G^u}(v)\}\}$ or $\{\{\chi(w,v):w\in \mathcal{N}_{G^v}(u)\}\}$ . + +We use the symbols $\mathsf{agg}_{\mathsf{uv}}^{\mathsf{P}}, \mathsf{agg}_{\mathsf{vu}}^{\mathsf{P}}, \mathsf{agg}_{\mathsf{uu}}^{\mathsf{P}}, \mathsf{agg}_{\mathsf{vv}}^{\mathsf{P}}, \mathsf{agg}_{\mathsf{u}}^{\mathsf{G}}, \mathsf{agg}_{\mathsf{v}}^{\mathsf{G}}, \mathsf{agg}_{\mathsf{u}}^{\mathsf{L}},$ and $\mathsf{agg}_{\mathsf{v}}^{\mathsf{L}}$ to denote the 8 basic operations, respectively. We assume $\mathsf{agg}_{\mathsf{uv}}^{\mathsf{P}}$ is always present in some $\mathsf{agg}_i$ . The set $\mathcal{A} := \{\mathsf{agg}_i : i \in [r]\}$ fully determines the SWL iteration and is called the aggregation scheme. + +For each iteration $t$ , the color mapping $\chi_G^{(t)}$ induces an equivalence relation and thus a partition $\mathcal{P}_G^{(t)}$ over the set $\mathcal{V}_G \times \mathcal{V}_G$ . Since $\mathsf{agg}_{\mathsf{uv}}^{\mathsf{P}}$ is present in $\mathcal{A}$ , $\mathcal{P}_G^{(t)}$ gets refined as $t$ grows. Therefore, after a sufficiently large number of iterations $t \leq |\mathcal{V}_G|^2$ , the color mapping becomes stable (i.e., it induces a stable partition). With a slight abuse of notation, we denote the stable color mapping by $\chi_G$ . + +Finally, the representation of graph $G$ , denoted as $c(G)$ , is computed by hashing all colors $\chi_G(u,v)$ for $u,v\in \mathcal{V}_G$ .
Parallel to the previous section, there are two different pooling paradigms to implement this: + +- Vertex-subgraph pooling (abbreviated as VS): $c(G) = \operatorname{hash}(\{\{\operatorname{hash}(\{\{\chi_G(u, v) : v \in \mathcal{V}_G\}\}) : u \in \mathcal{V}_G\}\})$ ; +- Subgraph-vertex pooling (abbreviated as SV): $c(G) = \operatorname{hash}(\{\{\operatorname{hash}(\{\{\chi_G(u, v) : u \in \mathcal{V}_G\}\}) : v \in \mathcal{V}_G\}\})$ . + +We say SWL can distinguish a pair of graphs $G$ and $H$ if $c(G) \neq c(H)$ . Similarly, given a subgraph GNN $f$ , we say $f$ distinguishes graphs $G$ and $H$ if $f(G) \neq f(H)$ . The following proposition establishes the connection between SWL and subgraph GNNs in terms of expressivity in distinguishing non-isomorphic graphs. + +Proposition 3.2. The expressive power of any subgraph GNN defined in Section 2 is bounded by that of the corresponding SWL obtained by matching the graph generation policy $\pi$ , the aggregation scheme (between Definitions 2.1 and 3.1), and the pooling paradigm. Moreover, when considering bounded-size graphs, for any SWL algorithm, there exists a matching subgraph GNN with the same expressive power. + +Qian et al. (2022) first proved the above result for vanilla subgraph GNNs without cross-graph aggregations. Here, we consider general aggregation schemes and give a unified proof of Proposition 3.2 in Appendix D. Based on this result, we can focus on studying the expressive power of SWL in the subsequent analysis. + +# 4. Expressiveness and Hierarchy of SWL + +In this section, we systematically study how different design paradigms impact the expressiveness of SWL algorithms. To begin with, we need the following terminology: + +Definition 4.1. Let $\mathsf{A}_1$ and $\mathsf{A}_2$ be two color refinement algorithms, and denote $c_i(G), i \in \{1,2\}$ as the graph representation computed by $\mathsf{A}_i$ for graph $G$ .
We say: + +- $\mathsf{A}_1$ is more powerful than $\mathsf{A}_2$ , denoted as $\mathsf{A}_2 \preceq \mathsf{A}_1$ , if for any pair of graphs $G$ and $H$ , $c_1(G) = c_1(H)$ implies $c_2(G) = c_2(H)$ . +- $\mathsf{A}_1$ is as powerful as $\mathsf{A}_2$ , denoted as $\mathsf{A}_1 \simeq \mathsf{A}_2$ , if both $\mathsf{A}_1 \preceq \mathsf{A}_2$ and $\mathsf{A}_2 \preceq \mathsf{A}_1$ hold. +- $\mathsf{A}_1$ is strictly more powerful than $\mathsf{A}_2$ , denoted as $\mathsf{A}_2 \prec \mathsf{A}_1$ , if $\mathsf{A}_2 \preceq \mathsf{A}_1$ and $\mathsf{A}_1 \not\simeq \mathsf{A}_2$ , i.e., there exist graphs $G, H$ such that $c_1(G) \neq c_1(H)$ and $c_2(G) = c_2(H)$ . +- $\mathsf{A}_1$ and $\mathsf{A}_2$ are incomparable, denoted as $\mathsf{A}_1 \sim \mathsf{A}_2$ , if neither $\mathsf{A}_1 \preceq \mathsf{A}_2$ nor $\mathsf{A}_2 \preceq \mathsf{A}_1$ holds. + +# 4.1. The canonical form: node marking SWL test + +The presence of many different graph generation policies complicates our subsequent analysis. Interestingly, however, we show that the simple node marking policy (on the original graph) already achieves the maximal power among all policies considered in Section 2 under mild assumptions. + +Proposition 4.2. Consider any SWL algorithm $\mathsf{A}$ that contains the two basic aggregations $\mathsf{agg}_{\mathsf{uv}}^{\mathsf{P}}$ and $\mathsf{agg}_{\mathsf{u}}^{\mathsf{L}}$ in Definition 3.1. Denote $\hat{\mathsf{A}}$ as the corresponding algorithm obtained from $\mathsf{A}$ by replacing the graph generation policy $\pi$ with node marking (on the original graph). Then, $\mathsf{A} \preceq \hat{\mathsf{A}}$ . + +We give a proof in Appendix E.2, which is based on the following finding: when the special node mark is propagated by SWL with local aggregation, the color of each node pair $(u,v)$ can encode its distance $\mathrm{dis}_G(u,v)$ (Lemma E.4), and the structure of the $k$ -hop ego network is also encoded.
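To make the node-marking refinement concrete, here is a minimal executable sketch of vanilla node-marking SWL (aggregation scheme $\{\mathsf{agg}_{\mathsf{uv}}^{\mathsf{P}}, \mathsf{agg}_{\mathsf{u}}^{\mathsf{L}}\}$ with VS pooling). All graphs are refined jointly so that a shared integer relabeling plays the role of the perfect hash; the function names and data layout are illustrative, not from the paper's code.

```python
def swl_vanilla(graphs, num_iters=None):
    """Jointly refine pair colors chi(u, v) on a list of (vertices, edges)
    graphs with the vanilla node-marking SWL update
        chi^{t+1}(u, v) = hash(chi^t(u, v), {{chi^t(u, w) : w in N(v)}})."""
    nbrs, chis = [], []
    for vertices, edges in graphs:
        nb = {x: set() for x in vertices}
        for a, b in edges:
            nb[a].add(b)
            nb[b].add(a)
        nbrs.append(nb)
        # node marking initialization: the color of (u, v) only marks u == v
        chis.append({(u, v): int(u == v) for u in vertices for v in vertices})
    if num_iters is None:
        num_iters = max(len(v) for v, _ in graphs) ** 2
    for _ in range(num_iters):
        sigs = [{(u, v): (chi[(u, v)],
                          tuple(sorted(chi[(u, w)] for w in nb[v])))
                 for (u, v) in chi}
                for chi, nb in zip(chis, nbrs)]
        relabel = {s: i for i, s in enumerate(sorted({s for sig in sigs
                                                      for s in sig.values()}))}
        new_chis = [{p: relabel[sig[p]] for p in sig} for sig in sigs]
        if new_chis == chis:  # stable partition reached
            break
        chis = new_chis
    return chis

def vs_pool(vertices, chi):
    """Vertex-subgraph pooling: pool colors over v inside each subgraph u,
    then pool the per-subgraph summaries over u."""
    return tuple(sorted(tuple(sorted(chi[(u, v)] for v in vertices))
                        for u in vertices))
```

Swapping the nesting order in `vs_pool` (pool over $u$ first, then over $v$) would give the SV paradigm instead.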
+ +Note that for the node marking policy, all subgraphs are just the original graph $(G^u = G)$ , which simplifies our analysis. We hence focus on the simple yet expressive node marking policy in subsequent sections. The following notations will be frequently used: + +Definition 4.3. Denote $\mathsf{A}(\mathcal{A}, \mathsf{Pool})$ as the node marking SWL test with aggregation scheme $\mathcal{A} \cup \{\mathsf{agg}_{\mathsf{uv}}^{\mathsf{P}}\}$ and pooling paradigm Pool, where Pool $\in \{\mathsf{VS}, \mathsf{SV}\}$ , and + +$$ +\mathcal{A} \subseteq \left\{\mathsf{agg}_{\mathsf{uu}}^{\mathsf{P}}, \mathsf{agg}_{\mathsf{vv}}^{\mathsf{P}}, \mathsf{agg}_{\mathsf{vu}}^{\mathsf{P}}, \mathsf{agg}_{\mathsf{u}}^{\mathsf{G}}, \mathsf{agg}_{\mathsf{v}}^{\mathsf{G}}, \mathsf{agg}_{\mathsf{u}}^{\mathsf{L}}, \mathsf{agg}_{\mathsf{v}}^{\mathsf{L}}\right\}. +$$ + +Here, we assume that $\mathsf{agg}_{\mathsf{uv}}^{\mathsf{P}}$ is always present in SWL. + +# 4.2. Hierarchy of different algorithms + +As shown in Definition 4.3, there are a large number of possible combinations of aggregation/pooling designs. In this subsection, we aim to build a complete hierarchy of SWL algorithms by establishing expressivity inclusion relations between different design paradigms. All proofs in this section are deferred to Appendix E. + +We first consider the expressive power of different aggregation schemes. We have the following main theorem: + +Theorem 4.4.
Under the notation of Definition 4.3, for any $\mathcal{A}$ and Pool, the following hold: + +- $\mathsf{A}(\mathcal{A} \cup \{\mathsf{agg}_{\mathsf{u}}^{\mathsf{G}}\}, \mathsf{Pool}) \preceq \mathsf{A}(\mathcal{A} \cup \{\mathsf{agg}_{\mathsf{u}}^{\mathsf{L}}\}, \mathsf{Pool})$ and $\mathsf{A}(\mathcal{A} \cup \{\mathsf{agg}_{\mathsf{u}}^{\mathsf{L}}\}, \mathsf{Pool}) \simeq \mathsf{A}(\mathcal{A} \cup \{\mathsf{agg}_{\mathsf{u}}^{\mathsf{L}}, \mathsf{agg}_{\mathsf{u}}^{\mathsf{G}}\}, \mathsf{Pool})$ ; +- $\mathsf{A}(\mathcal{A} \cup \{\mathsf{agg}_{\mathsf{uu}}^{\mathsf{P}}\}, \mathsf{Pool}) \preceq \mathsf{A}(\mathcal{A} \cup \{\mathsf{agg}_{\mathsf{u}}^{\mathsf{G}}\}, \mathsf{Pool})$ and $\mathsf{A}(\mathcal{A} \cup \{\mathsf{agg}_{\mathsf{u}}^{\mathsf{G}}\}, \mathsf{Pool}) \simeq \mathsf{A}(\mathcal{A} \cup \{\mathsf{agg}_{\mathsf{u}}^{\mathsf{G}}, \mathsf{agg}_{\mathsf{uu}}^{\mathsf{P}}\}, \mathsf{Pool})$ ; +- $\mathsf{A}(\{\mathsf{agg}_{\mathsf{u}}^{\mathsf{L}}, \mathsf{agg}_{\mathsf{vu}}^{\mathsf{P}}\}, \mathsf{Pool}) \simeq \mathsf{A}(\{\mathsf{agg}_{\mathsf{u}}^{\mathsf{L}}, \mathsf{agg}_{\mathsf{v}}^{\mathsf{L}}\}, \mathsf{Pool}) \simeq \mathsf{A}(\{\mathsf{agg}_{\mathsf{u}}^{\mathsf{L}}, \mathsf{agg}_{\mathsf{v}}^{\mathsf{L}}, \mathsf{agg}_{\mathsf{vu}}^{\mathsf{P}}\}, \mathsf{Pool})$ . + +Theorem 4.4 shows that local aggregation is more powerful than (and can express) the corresponding global aggregation, while global aggregation is more powerful than (and can express) the corresponding single-point aggregation. In addition, the "transpose" aggregation $\mathsf{agg}_{\mathsf{vu}}^{\mathsf{P}}$ is quite powerful: when combined with the local aggregation $\mathsf{agg}_{\mathsf{u}}^{\mathsf{L}}$ , it can express the other local aggregation $\mathsf{agg}_{\mathsf{v}}^{\mathsf{L}}$ . + +We next turn to the pooling paradigm. We first show that there is a symmetry (duality) between $u, v$ and the two types of pooling paradigms VS, SV. + +Proposition 4.5.
Let $\mathcal{A}$ be any aggregation scheme defined in Definition 4.3. Denote $\mathcal{A}^{\mathrm{u} \leftrightarrow \mathrm{v}}$ as the aggregation scheme obtained from $\mathcal{A}$ by exchanging the element $\mathsf{agg}_{\mathsf{uu}}^{\mathsf{P}}$ with $\mathsf{agg}_{\mathsf{vv}}^{\mathsf{P}}$ , exchanging $\mathsf{agg}_{\mathsf{u}}^{\mathsf{G}}$ with $\mathsf{agg}_{\mathsf{v}}^{\mathsf{G}}$ , and exchanging $\mathsf{agg}_{\mathsf{u}}^{\mathsf{L}}$ with $\mathsf{agg}_{\mathsf{v}}^{\mathsf{L}}$ . Then, $\mathsf{A}(\mathcal{A}, \mathsf{VS}) \simeq \mathsf{A}(\mathcal{A}^{\mathrm{u} \leftrightarrow \mathrm{v}}, \mathsf{SV})$ . + +Based on this symmetry, one can easily extend Theorem 4.4 to a variant that gives relations for $\mathsf{agg}_{\mathsf{vv}}^{\mathsf{P}}$ , $\mathsf{agg}_{\mathsf{v}}^{\mathsf{G}}$ , and $\mathsf{agg}_{\mathsf{v}}^{\mathsf{L}}$ . Moreover, we have the following main theorem: + +Theorem 4.6. Let $\mathcal{A}$ be as defined in Definition 4.3 with $\mathsf{agg}_{\mathsf{u}}^{\mathsf{L}} \in \mathcal{A}$ . Then, the following hold: + +- $\mathsf{A}(\mathcal{A},\mathsf{VS}) \preceq \mathsf{A}(\mathcal{A},\mathsf{SV})$ ; +- If $\{\mathsf{agg}_{\mathsf{v}}^{\mathsf{G}}, \mathsf{agg}_{\mathsf{v}}^{\mathsf{L}}\} \cap \mathcal{A} \neq \emptyset$ , then $\mathsf{A}(\mathcal{A}, \mathsf{VS}) \simeq \mathsf{A}(\mathcal{A}, \mathsf{SV})$ . + +Theorem 4.6 indicates that subgraph-vertex pooling is always at least as powerful as vertex-subgraph pooling, and the gap can matter when the aggregation scheme is weak (e.g., for the vanilla SWL). On the other hand, the two become equally expressive for SWL with strong aggregation schemes. + +Combining the above three results, we have built a complete hierarchy for the expressive power of all node marking SWL algorithms in Definition 4.3. In particular, we show that any SWL must fall into one of the following 6 types: + +Corollary 4.7.
Let $\mathsf{A}(\mathcal{A},\mathsf{Pool})$ be any SWL defined in Definition 4.3 with at least one local aggregation, i.e., $\{\mathsf{agg}_{\mathsf{u}}^{\mathsf{L}},\mathsf{agg}_{\mathsf{v}}^{\mathsf{L}}\} \cap \mathcal{A}\neq \emptyset$ . Then, $\mathsf{A}(\mathcal{A},\mathsf{Pool})$ must be as expressive as one of the 6 SWL algorithms defined below: + +- (Vanilla SWL) $\mathrm{SWL}(\mathrm{VS}) \coloneqq \mathsf{A}(\{\mathsf{agg}_{\mathsf{u}}^{\mathsf{L}}\}, \mathsf{VS})$ , $\mathrm{SWL}(\mathrm{SV}) \coloneqq \mathsf{A}(\{\mathsf{agg}_{\mathsf{u}}^{\mathsf{L}}\}, \mathsf{SV})$ ; +- (SWL with additional single-point aggregation) $\mathrm{PSWL}(\mathrm{VS}) \coloneqq \mathsf{A}(\{\mathsf{agg}_{\mathsf{u}}^{\mathsf{L}}, \mathsf{agg}_{\mathsf{vv}}^{\mathsf{P}}\}, \mathsf{VS})$ , $\mathrm{PSWL}(\mathrm{SV}) \coloneqq \mathsf{A}(\{\mathsf{agg}_{\mathsf{u}}^{\mathsf{L}}, \mathsf{agg}_{\mathsf{vv}}^{\mathsf{P}}\}, \mathsf{SV})$ ; +- (SWL with additional global aggregation) $\mathrm{GSWL} \coloneqq \mathsf{A}(\{\mathsf{agg}_{\mathsf{u}}^{\mathsf{L}}, \mathsf{agg}_{\mathsf{v}}^{\mathsf{G}}\}, \mathsf{VS})$ ; +- (Symmetrized SWL) $\mathrm{SSWL} \coloneqq \mathsf{A}(\{\mathsf{agg}_{\mathsf{u}}^{\mathsf{L}}, \mathsf{agg}_{\mathsf{v}}^{\mathsf{L}}\}, \mathsf{VS})$ . + +Moreover, we have + +$$ +\begin{array}{l} \mathrm{SWL}(\mathrm{VS}) \preceq \mathrm{SWL}(\mathrm{SV}) \text{ and } \mathrm{PSWL}(\mathrm{VS}) \preceq \mathrm{PSWL}(\mathrm{SV}), \\ \mathrm{SWL}(\mathrm{VS}) \preceq \mathrm{PSWL}(\mathrm{VS}) \text{ and } \mathrm{SWL}(\mathrm{SV}) \preceq \mathrm{PSWL}(\mathrm{SV}), \\ \mathrm{PSWL}(\mathrm{SV}) \preceq \mathrm{GSWL} \preceq \mathrm{SSWL}. \end{array} +$$ + +Corollary 4.7 is significant in that it drastically reduces the problem of studying a large number of different SWL variants to the study of only 6 standard paradigms. Moreover, it implies that the simple SSWL already achieves the maximal expressive power among all SWL variants. A detailed discussion of how these standard paradigms relate to previously proposed subgraph GNNs is given in Section 8. + +Yet, there are still two fundamental problems that are not answered by Corollary 4.7. First, it remains unclear whether some SWL algorithm is strictly more powerful than another.
This question is particularly important for a better understanding of how global, local, and single-point aggregations differ in the expressive power they bring to SWL. + +Second, a deeper understanding of the limitations of SWL algorithms is still lacking. While Frasca et al. (2022); Qian et al. (2022) recently discovered that the expressiveness of subgraph GNNs can be upper bounded by the standard 2-FWL (3-WL) test, it remains a mystery whether there is an inherent gap between 2-FWL and SWL (in particular, the strongest SSWL). Note that the per-iteration complexity of SWL is $O(nm)$ for a graph of $n$ vertices and $m$ edges, which is remarkably lower than that of 2-FWL ( $O(n^3)$ ), so it is reasonable to expect that 2-FWL is strictly more powerful. If this is the case, one may further ask: does SWL achieve the maximal power among all color refinement algorithms with complexity $O(nm)$ ? We aim to fully address both of these fundamental questions in the subsequent sections. + +# 5. Localized Folklore Weisfeiler-Lehman Test + +In this section, we propose two novel types of WL algorithms based on the standard 2-dimensional Folklore Weisfeiler-Lehman test (2-FWL) (Weisfeiler & Leman, 1968; Cai et al., 1992), which turn out to be closely related to SWL. Recall that 2-FWL maintains a color for each vertex pair $(u,v)\in \mathcal{V}_G\times \mathcal{V}_G$ . Initially, the color $\chi_G^{(0)}(u,v)$ depends on the isomorphism type of the subgraph induced by $(u,v)$ , namely, on whether $u = v$ , $\{u,v\} \in \mathcal{E}_G$ , or $\{u,v\} \notin \mathcal{E}_G$ .
In each iteration $t$ , the color is refined by the following update formula: + +$$ +\chi_G^{(t+1)}(u,v) = \operatorname{hash}\left(\chi_G^{(t)}(u,v), \operatorname{walk}(u,v,\mathcal{V}_G,\chi_G^{(t)})\right), \tag{1} +$$ + +where we define + +$$ +\operatorname{walk}(u,v,\mathcal{V},\chi) := \left\{\left\{\left(\chi(u,w), \chi(w,v)\right) : w \in \mathcal{V}\right\}\right\}. \tag{2} +$$ + +The color mapping $\chi_G^{(t)}$ stabilizes after a sufficiently large number of iterations $t\leq |\mathcal{V}_G|^2$ . Denote the stable color mapping as $\chi_{G}$ . 2-FWL finally outputs the graph representation $c(G)\coloneqq \operatorname{hash}(\{\{\chi_G(u,v):u,v\in \mathcal{V}_G\}\})$ . + +One can see that each 2-FWL iteration has a complexity of $O(n^{3})$ for a graph of $n$ vertices and $m$ edges, due to the need to enumerate all $w \in \mathcal{V}_G$ for each pair $(u, v)$ . For sparse graphs where $m = o(n^{2})$ , 2-FWL is inefficient and does not exploit the sparse nature of the graph well. This inspires us to consider variants of 2-FWL that enumerate only the local neighbors, such as $w \in \mathcal{N}_G^1(v)$ , by which the rich adjacency information is naturally incorporated in the update formula (besides in the initial colors via the isomorphism type). We note that such an idea was previously explored in Morris et al. (2020) (see Section 8 for further discussions). Importantly, this simple change substantially reduces the computational cost to $O(nm)$ , which is the same as SWL. To this end, we define two novel FWL-type algorithms: + +Definition 5.1. Define LFWL(2) as the localized version of 2-FWL, which replaces $\mathcal{V}_G$ in (1) by $\mathcal{N}_G^1(v)$ . Define SLFWL(2) as the symmetrized version of LFWL(2), which replaces $\mathcal{V}_G$ in (1) by $\mathcal{N}_G^1(u) \cup \mathcal{N}_G^1(v)$ . Finally, denote FWL(2) as the standard 2-FWL for consistency.
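To make Definition 5.1 concrete, the following toy sketch implements the 2-FWL update of Eqs. (1)-(2) together with its two localized variants, which differ only in the range of $w$. Joint integer relabeling across the input graphs stands in for the perfect hash, and all names are illustrative, not from the paper's code.

```python
def fwl2(graphs, variant="FWL", num_iters=None):
    """Refine pair colors on a list of (vertices, edges) graphs.
    `variant` controls the range of w in the walk aggregation:
      "FWL" -> V_G,  "LFWL" -> N^1(v),  "SLFWL" -> N^1(u) | N^1(v)."""
    data, init_colors = [], set()
    for vertices, edges in graphs:
        n1 = {x: {x} for x in vertices}  # closed 1-hop neighborhoods N^1
        eset = set()
        for a, b in edges:
            n1[a].add(b)
            n1[b].add(a)
            eset.add(frozenset((a, b)))
        # initial color: isomorphism type of the ordered pair (u, v)
        chi = {(u, v): (u == v, frozenset((u, v)) in eset)
               for u in vertices for v in vertices}
        init_colors |= set(chi.values())
        data.append((vertices, n1, chi))
    relabel = {c: i for i, c in enumerate(sorted(init_colors))}
    chis = [{p: relabel[chi[p]] for p in chi} for _, _, chi in data]
    if num_iters is None:
        num_iters = max(len(v) for v, _ in graphs) ** 2
    for _ in range(num_iters):
        sigs = []
        for (vertices, n1, _), chi in zip(data, chis):
            def scope(u, v):
                if variant == "FWL":
                    return vertices
                if variant == "LFWL":
                    return n1[v]
                return n1[u] | n1[v]  # SLFWL
            sigs.append({(u, v): (chi[(u, v)],
                                  tuple(sorted((chi[(u, w)], chi[(w, v)])
                                               for w in scope(u, v))))
                         for (u, v) in chi})
        relabel = {s: i for i, s in enumerate(sorted({s for sig in sigs
                                                      for s in sig.values()}))}
        new_chis = [{p: relabel[sig[p]] for p in sig} for sig in sigs]
        if new_chis == chis:  # stable partition reached
            break
        chis = new_chis
    # graph representation: multiset of stable pair colors
    return [tuple(sorted(chi.values())) for chi in chis]
```

The only change between the three variants is the `scope` function, which directly mirrors how the $O(n^3)$ enumeration collapses to $O(nm)$ for the localized versions.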
+ +Note that LFWL(2) only exploits the local information of the vertex $v$ , while SLFWL(2) uses all the local information of a vertex pair $(u, v)$ while still maintaining the $O(nm)$ cost. Therefore, one may expect that the latter is more powerful. Indeed, we have the following central result: + +Theorem 5.2. The following relations hold: + +- LFWL(2) $\preceq$ SLFWL(2) $\preceq$ FWL(2); +- PSWL(VS) $\preceq$ LFWL(2) and SSWL $\preceq$ SLFWL(2). + +The proof is given in Appendix G. We now discuss the significance of Theorem 5.2. First, FWL(2) is more powerful than its localized variants, confirming that there is indeed a trade-off between complexity and expressiveness. Second, Theorem 5.2 reveals a close relationship between SWL and these localized 2-WL/2-FWL variants. In particular, SLFWL(2) is more powerful than all SWL algorithms despite having the same computational cost. Therefore, we obtain a tight upper bound on the expressive power of subgraph GNNs with matching complexity, which substantially improves the previous 2-FWL upper bound (Frasca et al., 2022; Qian et al., 2022). + +However, again, it is not known whether these localized 2-FWL variants are strictly more powerful than SWL, nor do we know whether there is an intrinsic gap between 2-FWL and its localized variants. To thoroughly answer all of these questions, we need a new tool: the pebbling game. + +# 6. Pebbling Game + +In this section, we develop a novel and unified analysis framework for various SWL/FWL algorithms based on Ehrenfeucht-Fraïssé games (Ehrenfeucht, 1961; Fraïssé, 1954). The seminal paper of Cai et al. (1992) used such games to prove the existence of counterexample graphs that $k$ -FWL cannot distinguish. Here, we vastly extend their result and show how pebbling games can be used to analyze all types of SWL and localized FWL algorithms. All proofs in this section are deferred to Appendix H.
+ +First consider any SWL algorithm $\mathsf{A}(\mathcal{A},\mathsf{Pool})$ . The pebbling game is played on two graphs $G = (\mathcal{V}_G,\mathcal{E}_G)$ and $H = (\mathcal{V}_H,\mathcal{E}_H)$ . Each graph is equipped with two different pebbles $u$ and $v$ , both of which lie outside the graph initially. There are two players, the Spoiler and the Duplicator. To describe the game, we first introduce a basic game operation dubbed "vertex selection". + +Definition 6.1 (Vertex Selection). Let $S_G \subseteq \mathcal{V}_G$ and $S_H \subseteq \mathcal{V}_H$ be given sets. Spoiler first freely chooses a non-empty subset $S^{\mathrm{S}}$ from either $S_G$ or $S_H$ , and Duplicator must respond with a subset $S^{\mathrm{D}}$ from the other set, satisfying $|S^{\mathrm{S}}| = |S^{\mathrm{D}}|$ . Duplicator loses the game if she has no feasible choice. Then, Spoiler selects any vertex $x^{\mathrm{S}} \in S^{\mathrm{D}}$ , and Duplicator responds by selecting any vertex $x^{\mathrm{D}} \in S^{\mathrm{S}}$ . + +Initialization. If $\mathsf{Pool} = \mathsf{VS}$ , the two players first select vertices $x^{\mathsf{S}}$ and $x^{\mathsf{D}}$ following the vertex selection procedure with $S_{G} = \mathcal{V}_{G}$ and $S_{H} = \mathcal{V}_{H}$ . Spoiler places pebble $u$ on the selected vertex $x^{\mathsf{S}}$ and Duplicator places the other pebble $u$ on vertex $x^{\mathsf{D}}$ . Next, Spoiler and Duplicator perform the vertex selection step again with $S_{G} = \mathcal{V}_{G}$ and $S_{H} = \mathcal{V}_{H}$ and place pebbles $v$ similarly. If $\mathsf{Pool} = \mathsf{SV}$ , the above procedure is analogous except that Spoiler/Duplicator places pebble $v$ in the first step and pebble $u$ in the second step. + +Main loop. The game then cyclically executes the following process. Depending on the SWL aggregation scheme $\mathcal{A}$ , Spoiler can freely choose one of the following ways to play: + +- Local aggregation $\mathsf{agg}_{\mathsf{u}}^{\mathsf{L}} \in \mathcal{A}$ .
Spoiler and Duplicator perform the vertex selection step with $S_{G} = \mathcal{N}_{G}(v)$ and $S_{H} = \mathcal{N}_{H}(v)$ , where $\mathcal{N}_G(v) / \mathcal{N}_H(v)$ denotes the set of vertices in graph $G / H$ adjacent to the vertex occupied by pebble $v$ . Spoiler moves pebble $v$ to the selected vertex $x^{\mathsf{S}}$ , and Duplicator moves the other pebble $v$ to vertex $x^{\mathsf{D}}$ . +- Global aggregation $\mathsf{agg}_{\mathsf{u}}^{\mathsf{G}} \in \mathcal{A}$ . Spoiler and Duplicator perform the vertex selection step with $S_{G} = \mathcal{V}_{G}$ and $S_{H} = \mathcal{V}_{H}$ . Spoiler moves pebble $v$ to the selected vertex $x^{\mathsf{S}}$ , and Duplicator moves the other pebble $v$ to vertex $x^{\mathsf{D}}$ . +- Single-point aggregation $\mathsf{agg}_{\mathsf{uu}}^{\mathsf{P}}\in \mathcal{A}$ . Both players move pebble $v$ to the position of pebble $u$ . +- Single-point aggregation $\mathsf{agg}_{\mathsf{vu}}^{\mathsf{P}} \in \mathcal{A}$ . Both players swap the positions of pebbles $u$ and $v$ . + +The cases of $\mathsf{agg}_{\mathsf{v}}^{\mathsf{L}}, \mathsf{agg}_{\mathsf{v}}^{\mathsf{G}}, \mathsf{agg}_{\mathsf{vv}}^{\mathsf{P}}$ are symmetric to $\mathsf{agg}_{\mathsf{u}}^{\mathsf{L}}, \mathsf{agg}_{\mathsf{u}}^{\mathsf{G}}, \mathsf{agg}_{\mathsf{uu}}^{\mathsf{P}}$ , so we omit them for clarity. + +Spoiler wins the game if, after some round, the subgraph of $G$ induced by the vertices occupied by pebbles $u, v$ does not have the same isomorphism type as that of $H$ . Duplicator wins the game if Spoiler cannot win after any number of rounds. Roughly speaking, Spoiler tries to find differences between graphs $G$ and $H$ using pebbles $u$ and $v$ , while Duplicator strives to make these pebbles look the same in the two graphs. Our main result is stated as follows: + +Theorem 6.2.
Let $\mathsf{A}(\mathcal{A},\mathsf{Pool})$ be any SWL algorithm defined in Definition 4.3, satisfying $\{\mathrm{agg}_{\mathrm{u}}^{\mathrm{L}},\mathrm{agg}_{\mathrm{v}}^{\mathrm{L}}\} \cap \mathcal{A}\neq \emptyset$. Then, $\mathsf{A}(\mathcal{A},\mathsf{Pool})$ can distinguish a pair of graphs $G$ and $H$ if and only if Spoiler can win the corresponding pebbling game on graphs $G$ and $H$.

We next turn to FWL-type algorithms. The games are mostly similar to SWL but have a few subtle differences. There are again two pebbles $u, v$ for each graph. Here, the two players first place pebbles $u, v$ using just one vertex selection step: Spoiler first chooses a non-empty subset $S^{\mathsf{S}}$ from either $\mathcal{V}_G \times \mathcal{V}_G$ or $\mathcal{V}_H \times \mathcal{V}_H$, and Duplicator must respond with a subset $S^{\mathsf{D}}$ from the other set satisfying $|S^{\mathsf{S}}| = |S^{\mathsf{D}}|$. Then, Spoiler selects any vertex pair $(x_{\mathsf{u}}^{\mathsf{S}}, x_{\mathsf{v}}^{\mathsf{S}}) \in S^{\mathsf{D}}$, and Duplicator responds by selecting $(x_{\mathsf{u}}^{\mathsf{D}}, x_{\mathsf{v}}^{\mathsf{D}}) \in S^{\mathsf{S}}$. Spoiler places pebbles $u$ and $v$ on $x_{\mathsf{u}}^{\mathsf{S}}$ and $x_{\mathsf{v}}^{\mathsf{S}}$, respectively. Duplicator places the other pebbles $u$ and $v$ on $x_{\mathsf{u}}^{\mathsf{D}}$ and $x_{\mathsf{v}}^{\mathsf{D}}$, respectively.

The game then cyclically executes the following process. First consider LFWL(2). In each round, the two players perform the vertex selection step with $S_G = \mathcal{N}_G^1(v)$ and $S_H = \mathcal{N}_H^1(v)$ and select vertices $x^{\mathsf{S}}$ and $x^{\mathsf{D}}$, respectively. Here comes the major difference from SWL: Spoiler can choose whether to move pebble $u$ or pebble $v$ to vertex $x^{\mathsf{S}}$, and Duplicator must move the same pebble in the other graph to $x^{\mathsf{D}}$.
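Each FWL-type round is driven by the candidate set from which the players select a vertex; for LFWL(2) this is the 1-hop neighborhood of pebble $v$, and the two variants introduced next in the text (SLFWL(2) and the standard FWL(2)) enlarge it. The following illustrative Python sketch makes this concrete (the function name and graph encoding are ours, and we assume $\mathcal{N}^1$ is the open 1-hop neighborhood):

```python
# Candidate set S_G from which the players pick a vertex in one round of each
# FWL(2)-type pebbling game. `adj` maps each vertex to its neighbor list;
# `u` and `v` are the vertices currently holding the two pebbles.

def candidate_set(adj, u, v, variant):
    if variant == "LFWL(2)":   # 1-hop neighborhood of pebble v only
        return set(adj[v])
    if variant == "SLFWL(2)":  # neighborhoods of both pebbles
        return set(adj[u]) | set(adj[v])
    if variant == "FWL(2)":    # the entire vertex set
        return set(adj)
    raise ValueError(f"unknown variant: {variant}")

# A 4-cycle 0-1-2-3-0. For any pebble placement, the three candidate sets
# form a chain under inclusion, giving Spoiler ever more options per round.
adj = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [0, 2]}
print(candidate_set(adj, 0, 1, "LFWL(2)"))   # neighbors of vertex 1
print(candidate_set(adj, 0, 1, "SLFWL(2)"))  # neighbors of vertices 0 and 1
print(candidate_set(adj, 0, 1, "FWL(2)"))    # all four vertices
```

Intuitively, this inclusion chain mirrors the expressivity ordering among the three variants established later in the paper, since a larger candidate set can only help Spoiler.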
For SLFWL(2), the process is exactly the same as above except that the vertex selection is performed with $S_G = \mathcal{N}_G^1(u) \cup \mathcal{N}_G^1(v)$ and $S_H = \mathcal{N}_H^1(u) \cup \mathcal{N}_H^1(v)$. Finally, for the standard FWL(2), the vertex selection is performed with $S_G = \mathcal{V}_G$ and $S_H = \mathcal{V}_H$. We have the following result:

Theorem 6.3. LFWL(2)/SLFWL(2)/FWL(2) can distinguish a pair of graphs $G$ and $H$ if and only if Spoiler can win the corresponding pebbling game on graphs $G$ and $H$.

Theorems 6.2 and 6.3 build an interesting connection between WL algorithms and games. Importantly, the game viewpoint offers a much clearer picture for sorting out the various complex aggregation/pooling paradigms, and it leads to the main result of this paper in the next section.

# 7. Strict Separation Results

Up to now, all results derived in this paper are of the form "$\mathsf{A}_1 \preceq \mathsf{A}_2$". In this section, we complete the analysis by proving that all relations $\preceq$ in Corollary 4.7 and Theorem 5.2 are actually strict relations $\prec$. Formally, we will prove:

Theorem 7.1. The following hold (where $\mathsf{A}_1 \nsim \mathsf{A}_2$ denotes that the two algorithms are incomparable, i.e., neither is more powerful than the other):

- SWL(VS) $\prec$ SWL(SV), PSWL(VS) $\prec$ PSWL(SV);
- SWL(VS) $\prec$ PSWL(VS), SWL(SV) $\prec$ PSWL(SV);
- PSWL(SV) $\prec$ GSWL $\prec$ SSWL;
- PSWL(VS) $\prec$ LFWL(2), SSWL $\prec$ SLFWL(2);
- LFWL(2) $\prec$ SLFWL(2) $\prec$ FWL(2);
- SWL(SV) $\nsim$ PSWL(VS);
- LFWL(2) $\nsim$ SWL(SV), LFWL(2) $\nsim$ PSWL(SV), LFWL(2) $\nsim$ GSWL, LFWL(2) $\nsim$ SSWL.

Due to space limitations, we can only present a brief proof sketch below, but we strongly encourage readers to browse the proof in Appendix I, where novel counterexamples for all these cases are constructed and analyzed using the pebbling game developed in Section 6. This is highly non-trivial and constitutes a major technical contribution of this paper.

Our counterexamples are motivated by Fürer (2001).
Given a base graph $F$, Fürer (2001) gave a principled way to construct a pair of non-isomorphic but highly similar graphs $G(F)$ and $H(F)$ that cannot be distinguished by $k$-FWL. The key insight is that the difference between $G(F)$ and $H(F)$ is caused by a "twist" operation. (One can picture the two graphs as a cylindrical band and its corresponding Möbius strip.) To distinguish the two graphs, Spoiler's only strategy is to fence out a twisted edge using his pebbles, similar to the strategy in the game of Go. Yet, Fürer's analysis only applies to $k$-FWL algorithms. We considerably generalize this approach by noting that different SWL/FWL-type algorithms differ significantly in their "surrounding" capability in the pebbling game. Given two WL algorithms $\mathsf{A}_1, \mathsf{A}_2$ for which we want to prove $\mathsf{A}_1 \prec \mathsf{A}_2$, we can identify the extra surrounding capability of $\mathsf{A}_2$ and carefully construct a base graph $F$ such that this extra power is necessary to fence out a twisted edge. The main challenge lies in constructing the base graphs, which are given in Figures 4 to 11.

In Figure 1, we illustrate the relationships between the different SWL/FWL-type algorithms stated in Theorem 7.1, which form a complete and elegant hierarchy. In the next section, we give a detailed discussion of the significance of Theorem 7.1 in the context of prior works.

# 8. Discussions with Prior Works

The theoretical results in this paper can be directly used to analyze and compare the expressiveness of various subgraph GNNs in prior work, as summarized below:

Proposition 8.1.
Under the node marking policy, (i) ReconstructionGNN (Cotta et al., 2021), NGNN (Zhang & Li, 2021), IDGNN (You et al., 2021), and DS-GNN (Bevilacqua et al., 2022) are as expressive as SWL(VS); (ii) OSAN (Qian et al., 2022) is as expressive as SWL(SV); (iii) GNN-AK (Zhao et al., 2022a) is as expressive as PSWL(VS); (iv) DSS-GNN (or ESAN) (Bevilacqua et al., 2022), GNN-AK-ctx (Zhao et al., 2022a), and SUN (Frasca et al., 2022) are as expressive as GSWL; (v) ReIGN(2) (Frasca et al., 2022) is as expressive as SSWL.

Proof. The proof for ReconstructionGNN, NGNN, IDGNN, DS-GNN, and OSAN follows directly from Corollary 4.7, since these subgraph GNNs fit our framework of Definition 2.1. For the other architectures, the proof can be found in Appendix F. $\square$

Regarding open problems in prior works. Below, we show how our results can be used to settle a series of open problems raised before.

In Bevilacqua et al. (2022), the authors proposed two variants of WL algorithms, DS-WL and DSS-WL. They conjectured that the latter is strictly more powerful than the former due to the introduced cross-graph aggregation. Very recently, Zhang et al. (2023) gave the first evidence for this conjecture by proving that DSS-WL can distinguish cut vertices using node colors while DS-WL cannot. However, since identifying cut vertices is a node-level task, it remained an open question in the standard graph-level setting, in particular, for the task of distinguishing non-isomorphic graphs. Our result fully settles this open question by showing that DSS-WL is indeed strictly more powerful than DS-WL in distinguishing non-isomorphic graphs.

In Zhao et al. (2022a), the authors proposed two GNN architectures: GNN-AK and its extension GNN-AK-ctx. GNN-AK incorporates the so-called centroid encoding, and GNN-AK-ctx further incorporates the contextual encoding.
While the authors empirically showed the effectiveness of these encodings and found that GNN-AK-ctx can achieve much better performance on real-world tasks, they did not give a theoretical justification. Here, our result provides deep insights into the two models by indicating that (i) with centroid encoding, GNN-AK is strictly more powerful than vanilla subgraph GNNs; (ii) with contextual encoding, GNN-AK-ctx is strictly more powerful than GNN-AK.

Recently, Qian et al. (2022) proposed two classes of subgraph GNNs, the original OSAN and the vertex-subgraph OSAN, which differ only in the final pooling paradigm. However, the authors did not discuss the relationship between the two types of architectures. Indeed, one may naturally guess that they have the same expressive power given the same GNN backbone. Our result highlights that this is not the case: the original 1-OSAN is strictly more powerful than vertex-subgraph 1-OSAN.

Recently, Frasca et al. (2022) proposed a theoretically inspired model called ReIGN(2), as well as a practical version called SUN that unifies prior node-based subgraph GNNs. The authors conjectured that these models are more powerful than prior architectures and may even match the power of 2-FWL. Studying the expressiveness lower bound of ReIGN(2) and SUN is formally left as an important open problem in Frasca et al. (2022, Appendix E). In this paper, we fully settle the open problem by showing that: (i) ReIGN(2) is indeed the strongest subgraph GNN model and is strictly more powerful than prior models; (ii) however, SUN is just as powerful as the simpler ESAN, although it incorporates many extra equivariant aggregation operations; (iii) ReIGN(2) does not achieve 2-FWL expressiveness. Moreover, we point out an inherent gap between ReIGN(2) and 2-FWL, showing that ReIGN(2) does not even match SLFWL(2), a much weaker WL algorithm with the same complexity as ReIGN(2).

Finally, we note that Frasca et al.
(2022) mentioned two basic atomic aggregations that are not included in prior subgraph GNNs: $\mathrm{agg}_{\mathrm{v}}^{\mathrm{L}}$ and $\mathrm{agg}_{\mathrm{vu}}^{\mathrm{P}}$ (see Definition 3.1). In this paper, we highlight that they are actually fundamental: incorporating either of them into the subgraph GNN layer can essentially improve the model's expressiveness.

Discussions with Morris et al. (2020). Our results also reveal a surprising relationship between the work of Morris et al. (2020) and subgraph GNNs. In Morris et al. (2020), the authors proposed the so-called $\delta$-2-LWL, which can be seen as a symmetrized version of the local 2-WL test. The update formula of $\delta$-2-LWL is:

$$
\chi_G^{(t+1)}(u,v) = \mathsf{hash}\Big(\chi_G^{(t)}(u,v),\; \{\{\chi_G^{(t)}(u,w): w \in \mathcal{N}_G(v)\}\},\; \{\{\chi_G^{(t)}(w,v): w \in \mathcal{N}_G(u)\}\}\Big).
$$

$\delta$-2-LWL shares great similarities with SSWL in its update formula. Actually, while the two algorithms differ in the initial colors and the final pooling paradigm, we can prove that $\delta$-2-LWL is as powerful as SSWL. We thus obtain the following key results:

- Subgraph GNNs are also bounded by $\delta$-2-LWL. Moreover, the strongest subgraph GNN, namely ReIGN(2), matches the power of $\delta$-2-LWL. This builds an interesting link between the works of Frasca et al. (2022) and Morris et al. (2020).
- There is a fundamental gap between localized 2-WL and localized 2-FWL, despite the fact that both algorithms have the same computation/memory complexity. This result is perhaps surprising: it contrasts sharply with the relation between standard WL and FWL algorithms, where algorithms with equal computational complexity (e.g., $k$-FWL and $(k+1)$-WL) always have the same expressive power.

# 9.
Experiments

Our theory also provides clear guidance for designing simple, efficient, yet powerful subgraph GNN architectures. In particular, we find that all previously proposed practical node-based subgraph GNNs are bounded by GSWL (Proposition 8.1), which does not attain the maximal power in the SWL hierarchy. Instead, we adopt the elegant SSWL-based subgraph GNN design principle, resulting in only 3 atomic equivariant aggregation operations; yet the corresponding model, called GNN-SSWL, is strictly more powerful than all prior node-based subgraph GNNs. We also design an extension of GNN-SSWL, denoted GNN-SSWL+, by further incorporating the single-point aggregation $\mathsf{agg}_{\mathsf{vu}}^{\mathsf{P}}$. While this does not improve the model's expressivity in theory, we find that it often achieves better performance on real-world tasks. In addition, motivated by Proposition 4.2, the graph generation policy for both GNN-SSWL and GNN-SSWL+ is chosen as distance encoding on the original graph (which is as expressive as node marking). A detailed description of the model configuration and training hyper-parameters is given in Appendix K. Our code will be released at https://github.com/subgraph23/SWL.

Performance on Counting Substructure Benchmark. Following Zhao et al. (2022a); Frasca et al. (2022), we first consider the synthetic task of counting substructures. The results are presented in Table 1. Our proposed models can solve all tasks almost completely and perform better than all prior node-based subgraph GNNs on most substructures, such as triangles, tailed triangles, 4-cycles, 5-cycles, and 6-cycles. In particular, GNN-SSWL+ significantly outperforms GNN-AK+ and SUN at counting 6-cycles. We suspect that GSWL is not expressive enough to count 6-cycles while SSWL is, which may highlight a fundamental advantage of SSWL in practical scenarios where the ability to count 6-cycles is needed (Huang et al., 2022).
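For reference, the ground truth in the counting benchmark is a plain combinatorial quantity; e.g., in a simple undirected graph with adjacency matrix $A$, the number of triangles equals $\operatorname{tr}(A^3)/6$. A minimal, stdlib-only sketch (illustrative code, not the paper's released implementation):

```python
# Ground-truth triangle counting via the classical identity tr(A^3)/6 for a
# simple undirected graph: each triangle contributes a closed walk of length 3
# from each of its 3 vertices in each of 2 directions, hence the factor 6.

def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def count_triangles(A):
    A3 = matmul(matmul(A, A), A)
    return sum(A3[i][i] for i in range(len(A))) // 6

# K4, the complete graph on 4 vertices, contains C(4,3) = 4 triangles.
K4 = [[0 if i == j else 1 for j in range(4)] for i in range(4)]
print(count_triangles(K4))  # -> 4
```

Counts for the other substructures in Table 1 (tailed triangles, stars, longer cycles) are obtained analogously by subgraph enumeration; the learning task is then to regress these counts from the graph alone.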
Performance on ZINC benchmark. We then validate our proposed models on the ZINC molecular property prediction benchmark (Dwivedi et al., 2020). We consider both ZINC-subset (12k selected graphs) and ZINC-full (250k graphs), and compare our models with both subgraph GNNs and other typical methods, such as substructure-based GNNs (Bouritsas et al., 2022; Bodnar et al., 2021a) and Graph Transformers (Zhang et al., 2023).

The results are presented in Table 2. First, our proposed GNN-SSWL already matches or outperforms all subgraph GNN baselines while being much simpler. In particular, compared with the state-of-the-art SUN architecture, GNN-SSWL requires only a quarter of the atomic aggregations in each GNN layer and roughly half the parameters, yet matches the performance of SUN on ZINC-subset. Second, by further incorporating $\mathsf{agg}_{\mathsf{vu}}^{\mathsf{P}}$, GNN-SSWL+ significantly surpasses all subgraph GNN baselines and achieves state-of-the-art performance on both tasks. Finally, an interesting finding is that the performance of the different subgraph GNN architectures in Table 2 roughly aligns with their theoretical expressivity in the SWL hierarchy. This further suggests that designing theoretically more powerful subgraph GNNs can benefit real-world tasks as well.

Other tasks. We also conduct experiments on the OGBG-molhiv dataset (Hu et al., 2020). Due to space limits, the results are presented in Appendix K.4.

# 10. Conclusion

This paper gives a comprehensive and unified analysis of the expressiveness of subgraph GNNs. By building a complete expressiveness hierarchy, one can gain deep insights into the power and limitations of various prior works. On the theoretical side, we reveal close relations between SWL, localized WL, and localized Folklore WL, and propose a unified analysis framework via pebbling games.
On the practical side, we design a simple yet powerful subgraph GNN architecture that achieves strictly better expressivity and superior performance on multiple benchmarks. + +# References + +Arvind, V., Fuhlbrück, F., Köbler, J., and Verbitsky, O. On weisfeiler-leman invariance: Subgraph counts and related graph properties. Journal of Computer and System Sciences, 113:42-59, 2020. +Azizian, W. and Lelarge, M. Expressive power of invariant and equivariant graph neural networks. In International Conference on Learning Representations, 2021. +Balcilar, M., Héroux, P., Gauzere, B., Vasseur, P., Adam, S., and Honeine, P. Breaking the limits of message passing graph neural networks. In International Conference on Machine Learning, pp. 599-608. PMLR, 2021. +Barceló, P., Geerts, F., Reutter, J., and Ryschkov, M. Graph neural networks with local graph parameters. In Advances in Neural Information Processing Systems, volume 34, pp. 25280-25293, 2021. +Bevilacqua, B., Frasca, F., Lim, D., Srinivasan, B., Cai, C., Balamurugan, G., Bronstein, M. M., and Maron, H. Equivariant subgraph aggregation networks. In International Conference on Learning Representations, 2022. +Bodnar, C., Frasca, F., Otter, N., Wang, Y. G., Lio, P., Montufar, G., and Bronstein, M. M. Weisfeiler and lehman go cellular: CW networks. In Advances in Neural Information Processing Systems, volume 34, 2021a. +Bodnar, C., Frasca, F., Wang, Y., Otter, N., Montufar, G. F., Lio, P., and Bronstein, M. Weisfeiler and lehman go topological: Message passing simplicial networks. In International Conference on Machine Learning, pp. 1026-1037. PMLR, 2021b. +Bouritsas, G., Frasca, F., Zafeiriou, S. P., and Bronstein, M. Improving graph neural network expressivity via subgraph isomorphism counting. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2022. +Cai, J.-Y., Fürer, M., and Immerman, N. An optimal lower bound on the number of variables for graph identification. Combinatorica, 12(4):389-410, 1992. 
Chandra, A. K., Raghavan, P., Ruzzo, W. L., Smolensky, R., and Tiwari, P. The electrical resistance of a graph captures its commute and cover times. Computational Complexity, 6(4):312-340, 1996.
Chen, Z., Chen, L., Villar, S., and Bruna, J. Can graph neural networks count substructures? In Proceedings of the 34th International Conference on Neural Information Processing Systems, pp. 10383-10395, 2020.
Corso, G., Cavalleri, L., Beaini, D., Lio, P., and Veličković, P. Principal neighbourhood aggregation for graph nets. In Advances in Neural Information Processing Systems, volume 33, pp. 13260-13271, 2020.
Cotta, L., Morris, C., and Ribeiro, B. Reconstruction for powerful graph representations. In Advances in Neural Information Processing Systems, volume 34, pp. 1713-1726, 2021.
Cvetković, D. M., Rowlinson, P., and Simić, S. Eigenspaces of graphs. Cambridge University Press, 1997.
Dwivedi, V. P., Joshi, C. K., Laurent, T., Bengio, Y., and Bresson, X. Benchmarking graph neural networks. arXiv preprint arXiv:2003.00982, 2020.
Ehrenfeucht, A. An application of games to the completeness problem for formalized theories. Fund. Math., 49:129-141, 1961.
Fey, M. and Lenssen, J. E. Fast graph representation learning with pytorch geometric. arXiv preprint arXiv:1903.02428, 2019.
Fraïssé, R. Sur quelques classifications des systèmes de relations. Publications Scientifiques de l'Université d'Alger, 1954.
Frasca, F., Bevilacqua, B., Bronstein, M. M., and Maron, H. Understanding and extending subgraph GNNs by rethinking their symmetries. In Advances in Neural Information Processing Systems, 2022.
Fürer, M. Weisfeiler-lehman refinement requires at least a linear number of iterations. In International Colloquium on Automata, Languages, and Programming, pp. 322-333. Springer, 2001.
Fürer, M. On the combinatorial power of the weisfeiler-lehman algorithm. In International Conference on Algorithms and Complexity, pp. 260-271. Springer, 2017.
+Geerts, F. and Reutter, J. L. Expressiveness and approximation properties of graph neural networks. In International Conference on Learning Representations, 2022. +Gilmer, J., Schoenholz, S. S., Riley, P. F., Vinyals, O., and Dahl, G. E. Neural message passing for quantum chemistry. In International conference on machine learning, pp. 1263-1272. PMLR, 2017. +Hamilton, W. L., Ying, R., and Leskovec, J. Inductive representation learning on large graphs. In Proceedings of the 31st International Conference on Neural Information Processing Systems, volume 30, pp. 1025-1035, 2017. +Hu, W., Fey, M., Zitnik, M., Dong, Y., Ren, H., Liu, B., Catasta, M., and Leskovec, J. Open graph benchmark: Datasets for machine learning on graphs. In Advances in neural information processing systems, volume 33, pp. 22118-22133, 2020. + +Huang, Y., Peng, X., Ma, J., and Zhang, M. Boosting the cycle counting power of graph neural networks with i2-gnns. arXiv preprint arXiv:2210.13978, 2022. +Immerman, N. and Lander, E. Describing graphs: A first-order approach to graph canonization. In Complexity theory retrospective, pp. 59-81. Springer, 1990. +Ioffe, S. and Szegedy, C. Batch normalization: Accelerating deep network training by reducing internal covariate shift. In International conference on machine learning, pp. 448-456. PMLR, 2015. +Kingma, D. P. and Ba, J. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014. +Kipf, T. N. and Welling, M. Semi-supervised classification with graph convolutional networks. In International Conference on Learning Representations, 2017. +Kreuzer, D., Beaini, D., Hamilton, W., Létourneau, V., and Tossou, P. Rethinking graph transformers with spectral attention. In Advances in Neural Information Processing Systems, volume 34, 2021. +Kwon, J., Kim, J., Park, H., and Choi, I. K. Asam: Adaptive sharpness-aware minimization for scale-invariant learning of deep neural networks. In International Conference on Machine Learning, pp. 
5905-5914. PMLR, 2021. +Li, P., Wang, Y., Wang, H., and Leskovec, J. Distance encoding: design provably more powerful neural networks for graph representation learning. In Proceedings of the 34th International Conference on Neural Information Processing Systems, pp. 4465-4478, 2020. +Lim, D., Robinson, J., Zhao, L., Smidt, T., Sra, S., Maron, H., and Jegelka, S. Sign and basis invariant networks for spectral graph representation learning. arXiv preprint arXiv:2202.13013, 2022. +Maron, H., Ben-Hamu, H., Serviansky, H., and Lipman, Y. Provably powerful graph networks. In Advances in neural information processing systems, volume 32, pp. 2156-2167, 2019a. +Maron, H., Ben-Hamu, H., Shamir, N., and Lipman, Y. Invariant and equivariant graph networks. In International Conference on Learning Representations, 2019b. +Maron, H., Fetaya, E., Segol, N., and Lipman, Y. On the universality of invariant networks. In International conference on machine learning, pp. 4363-4371. PMLR, 2019c. +Morris, C., Ritzert, M., Fey, M., Hamilton, W. L., Lenssen, J. E., Rattan, G., and Grohe, M. Weisfeiler and leman go neural: Higher-order graph neural networks. In Proceedings of the AAAI conference on artificial intelligence, volume 33, pp. 4602-4609, 2019. + +Morris, C., Rattan, G., and Mutzel, P. Weisfeiler and leman go sparse: towards scalable higher-order graph embeddings. In Proceedings of the 34th International Conference on Neural Information Processing Systems, pp. 21824-21840, 2020. +Morris, C., Rattan, G., Kiefer, S., and Ravanbakhsh, S. Speqnets: Sparsity-aware permutation-equivariant graph networks. In International Conference on Machine Learning, pp. 16017-16042. PMLR, 2022. +Papp, P. A. and Wattenhofer, R. A theoretical comparison of graph neural network extensions. In Proceedings of the 39th International Conference on Machine Learning, volume 162, pp. 17323-17345, 2022. +Papp, P. A., Martinkus, K., Faber, L., and Wattenhofer, R. 
DropGNN: random dropouts increase the expressiveness of graph neural networks. In Advances in Neural Information Processing Systems, volume 34, pp. 21997-22009, 2021.
Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al. Pytorch: An imperative style, high-performance deep learning library. Advances in Neural Information Processing Systems, 32, 2019.
Puny, O., Lim, D., Kiani, B. T., Maron, H., and Lipman, Y. Equivariant polynomials for graph neural networks. arXiv preprint arXiv:2302.11556, 2023.
Qian, C., Rattan, G., Geerts, F., Niepert, M., and Morris, C. Ordered subgraph aggregation networks. In Advances in Neural Information Processing Systems, 2022.
Veličković, P., Cucurull, G., Casanova, A., Romero, A., Lio, P., and Bengio, Y. Graph attention networks. In International Conference on Learning Representations, 2018.
Vignac, C., Loukas, A., and Frossard, P. Building powerful and equivariant graph neural networks with structural message-passing. In Proceedings of the 34th International Conference on Neural Information Processing Systems, pp. 14143-14155, 2020.
Weisfeiler, B. and Leman, A. The reduction of a graph to canonical form and the algebra which appears therein. NTI, Series, 2(9):12-16, 1968.
Xu, K., Hu, W., Leskovec, J., and Jegelka, S. How powerful are graph neural networks? In International Conference on Learning Representations, 2019.
Ying, C., Cai, T., Luo, S., Zheng, S., Ke, G., He, D., Shen, Y., and Liu, T.-Y. Do transformers really perform badly for graph representation? Advances in Neural Information Processing Systems, 34, 2021.
You, J., Ying, R., and Leskovec, J. Position-aware graph neural networks. In International Conference on Machine Learning, pp. 7134-7143. PMLR, 2019.
You, J., Gomes-Selman, J. M., Ying, R., and Leskovec, J. Identity-aware graph neural networks. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 35, pp.
10737-10745, 2021. +Zhang, B., Luo, S., He, D., and Wang, L. Rethinking the expressive power of gnns via graph biconnectivity. In International Conference on Learning Representations, 2023. +Zhang, M. and Li, P. Nested graph neural networks. In Advances in Neural Information Processing Systems, volume 34, pp. 15734-15747, 2021. +Zhao, L., Jin, W., Akoglu, L., and Shah, N. From stars to subgraphs: Uplifting any gnn with local structure awareness. In International Conference on Learning Representations, 2022a. +Zhao, L., Shah, N., and Akoglu, L. A practical, progressively-expressive GNN. In Advances in Neural Information Processing Systems, 2022b. + +# Acknowledgement + +Bohang Zhang would like to thank Shengjie Luo for helpful discussions on experiments. This work is supported by National Key R&D Program of China (2022ZD0114900) and National Science Foundation of China (NSFC62276005). + +Table 1. Performance comparison of different GNN architectures on the Counting Substructure benchmark. We report the Mean Absolute Error (MAE), and use different background colors to distinguish different levels of MAE. + +
| Model | Reference | Triangle | Tailed Tri. | Star | 4-Cycle | 5-Cycle | 6-Cycle |
|---|---|---|---|---|---|---|---|
| PPGN | Maron et al. (2019a) | 0.0089 | 0.0096 | 0.0148 | 0.0090 | 0.0137 | 0.0167 |
| GNN-AK | Zhao et al. (2022a) | 0.0934 | 0.0751 | 0.0168 | 0.0726 | 0.1102 | 0.1063 |
| GNN-AK+ | Zhao et al. (2022a) | 0.0123 | 0.0112 | 0.0150 | 0.0126 | 0.0268 | 0.0584 |
| SUN (EGO+) | Frasca et al. (2022) | 0.0079 | 0.0080 | 0.0064 | 0.0105 | 0.0170 | 0.0550 |
| GNN-SSWL | This paper | 0.0098 | 0.0090 | 0.0089 | 0.0107 | 0.0142 | 0.0189 |
| GNN-SSWL+ | This paper | 0.0064 | 0.0067 | 0.0078 | 0.0079 | 0.0108 | 0.0154 |
+ +Table 2. Performance comparison of different subgraph GNNs on ZINC benchmark. The Mean Absolute Error (MAE) and the standard deviation are reported. We also list the WL equivalence class and the number of parameters/atomic aggregations for each model. + +
| Model | Reference | WL | # Param. | # Agg. | ZINC-Subset MAE | ZINC-Full MAE |
|---|---|---|---|---|---|---|
| GSN | Bouritsas et al. (2022) | - | ~500k | - | 0.101±0.010 | - |
| CIN (small) | Bodnar et al. (2021a) | - | ~100k | - | 0.094±0.004 | 0.044±0.003 |
| Graphormer-GD | Zhang et al. (2023) | GD-WL | 503k | - | 0.081±0.009 | 0.025±0.004 |
| NGNN | Zhang & Li (2021) | SWL(VS) | ~500k | 2 | 0.111±0.003 | 0.029±0.001 |
| GNN-AK | Zhao et al. (2022a) | PSWL(VS) | ~500k | 4 | 0.105±0.010 | - |
| GNN-AK+ | Zhao et al. (2022a) | GSWL | ~500k | 5 | 0.091±0.002 | - |
| ESAN | Bevilacqua et al. (2022) | GSWL | ~100k | 4 | 0.102±0.003 | 0.029±0.003 |
| ESAN | Frasca et al. (2022) | GSWL | 446k | 4 | 0.097±0.006 | 0.025±0.003 |
| SUN | Frasca et al. (2022) | GSWL | 526k | 12 | 0.083±0.003 | 0.024±0.003 |
| GNN-SSWL | This paper | SSWL | 274k | 3 | 0.082±0.003 | 0.026±0.001 |
| GNN-SSWL+ | This paper | SSWL | 387k | 4 | 0.070±0.005 | 0.022±0.002 |
# Appendix

The Appendix is organized as follows:

- In Appendix A, we complete our theoretical analysis by showing that subgraph GNNs with better expressivity can also be stronger in terms of their ability to compute fundamental graph properties, such as distance and biconnectivity.
- In Appendix B, we provide a broader review of related literature in the area of expressive GNNs, including higher-order GNNs, sparsity-aware GNNs, subgraph GNNs, and more.
- In Appendix C, we list several promising open directions of this paper.
- In Appendix D, we give the missing proof of Proposition 3.2, showing the equivalence between SWL and subgraph GNNs.
- In Appendix E, we give all the missing proofs in Section 4, which we use to build a complete hierarchy of SWL algorithms. This part is technical and is divided into several subsections (Appendices E.1 to E.5).
- In Appendix F, we discuss several subgraph GNNs beyond our proposed framework (Definition 2.1), including GNN-AK, GNN-AK-ctx, ESAN, SUN, and ReIGN(2). We show that each of these architectures still corresponds to an equivalent SWL algorithm in terms of expressive power.
- In Appendix G, we give the missing proof of Theorem 5.2, showing the expressivity relationships between different SWL and localized FWL algorithms.
- In Appendix H, we give all the missing proofs in Section 6, bridging SWL/FWL-type algorithms and pebbling games.
- In Appendix I, we give the missing proof of Theorem 7.1. The proof is non-trivial and contains the main technical contribution of this paper. It is divided into three parts (Appendices I.1 to I.3) for readability.
- In Appendix J, we give all the missing proofs in Appendix A, showing how various SWL algorithms differ in terms of their practical expressiveness, such as encoding graph distance and biconnectivity.
- In Appendix K, we provide experimental details to reproduce the results in Section 9, as well as a comprehensive set of ablation studies.

# A.
Discussions on Practical Expressiveness

Up to now, we have obtained precise expressivity relations for all pairs of SWL/FWL-type algorithms in distinguishing non-isomorphic graphs. From a practical perspective, however, one may still wonder whether and how GNNs designed based on a theoretically stronger WL algorithm can be more powerful in solving practical graph problems. Here, we give concrete evidence that the power of different SWL algorithms does vary in terms of computing fundamental graph properties. In particular, WL algorithms at least as expressive as PSWL are capable of encoding the distance and biconnectivity of a graph, while weaker algorithms like the vanilla SWL are unable to fully encode either of them.

Our result is motivated by the recent study of Zhang et al. (2023), who proposed a new class of WL algorithms called Generalized Distance WL (GD-WL). Given a graph $G = (\mathcal{V}_G, \mathcal{E}_G)$, GD-WL maintains a color $\chi_G(v)$ for each node $v \in \mathcal{V}_G$, and the node colors are updated according to the following formula:

$$
\chi_G^{(t+1)}(v) := \mathsf{hash}\big(\{\{(d_G(u,v), \chi_G^{(t)}(u)): u \in \mathcal{V}_G\}\}\big),
$$

where $d_{G}(u,v)$ is a generalized distance between $u$ and $v$. Zhang et al. (2023) proved that, by incorporating both the shortest path distance (SPD) and the resistance distance (RD), i.e., setting $d_{G}(u,v) = (\mathrm{dis}_{G}(u,v),\mathrm{dis}_{G}^{\mathsf{R}}(u,v))$, the resulting GD-WL is provably expressive for all types of biconnectivity metrics, such as identifying cut vertices, identifying cut edges, or distinguishing non-isomorphic graphs with different block cut trees. Surprisingly, we find that PSWL intrinsically (implicitly) encodes another type of GD-WL, defined as follows:

Definition A.1 (Hitting time distance).
Define $\mathrm{dis}_G^{\mathsf{H}}(u,v)$ to be the hitting time distance (HTD) from node $u$ to $v$ in graph $G$, i.e., the expected number of edges traversed by a random walk starting from $u$ until it reaches $v$ for the first time.

Theorem A.2. Let $d_G(u,v) = (\mathrm{dis}_G(u,v),\mathrm{dis}_G^{\mathsf{H}}(u,v))$. Then, GD-WL $\prec$ PSWL(VS).

Hitting time distance is closely related to resistance distance, in that $\mathrm{dis}_G^{\mathsf{R}}(u,v) = (\mathrm{dis}_G^{\mathsf{H}}(u,v) + \mathrm{dis}_G^{\mathsf{H}}(v,u)) / (2|\mathcal{E}_G|)$ holds for any graph $G$ and nodes $u,v\in \mathcal{V}_G$ (Chandra et al., 1996). In other words, RD can be seen as a symmetrized version of HTD (by ignoring the constant factor $1/|\mathcal{E}_G|$). Moreover, we have the following theorem, showing that HTD-WL also resembles RD-WL in distinguishing vertex-biconnectivity:

Theorem A.3. By setting $d_G = \mathrm{dis}_G^{\mathsf{H}}$, the resulting HTD-WL is fully expressive for all vertex-biconnectivity metrics proposed in Zhang et al. (2023).

The proofs of Theorems A.2 and A.3 are given in Appendix J. Combining the two theorems readily leads to the following corollary:

Corollary A.4. PSWL(VS) is fully expressive for both edge-biconnectivity and vertex-biconnectivity.

On the other hand, we find that the vanilla SWL is unable to fully encode SPD, HTD, or RD, as shown in the proposition below (where $\mathsf{A}_1 \nsucceq \mathsf{A}_2$ denotes that $\mathsf{A}_1$ is not more powerful than $\mathsf{A}_2$):

Proposition A.5. The following hold:

- SWL(VS) $\nsucceq$ SPD-WL, SWL(SV) $\nsucceq$ SPD-WL;
- SWL(VS) $\nsucceq$ HTD-WL, SWL(SV) $\nsucceq$ HTD-WL;
- SWL(VS) $\nsucceq$ RD-WL, SWL(SV) $\nsucceq$ RD-WL.

Besides, Zhang et al. (2023) have shown that SWL(VS) cannot identify the cut vertices of a graph. Therefore, incorporating extra aggregation operations into the vanilla SWL does essentially improve its practical expressiveness in computing basic graph properties like distance and biconnectivity.

Further discussion of Zhang et al. (2023). In Zhang et al.
(2023), the authors showed that most prior GNN models are not expressive for biconnectivity metrics, with the exception of ESAN (Bevilacqua et al., 2022), which corresponds to GSWL in our framework. Here, we unify, justify, and extend their results and findings in the following aspects:

- We show that ESAN can identify cut vertices mainly because it encodes the generalized distance. This provides deeper insight into ESAN and complements the finding that ESAN can encode SPD. From this perspective, we obtain an alternative and unified proof that ESAN distinguishes vertex-biconnectivity. Moreover, we also prove that ESAN can distinguish the block cut-vertex tree, a new result that was not originally proved in Zhang et al. (2023).
- We strongly justify the introduced generalized distance and GD-WL as a fundamental class of color refinement algorithms, since the reason why ESAN and other SWL variants can encode biconnectivity metrics simply lies in the fact that they are more powerful than GD-WL.
- In contrast, we prove that the weaker SWL(VS) (or SWL(SV)) is not more powerful than either SPD-WL or RD-WL. This explains and complements the finding in Zhang et al. (2023) on why DS-WL cannot identify cut vertices. We also partially answer the question raised in Zhang et al. (2023) for OS-WL (Qian et al., 2022).
- We show that adding the global aggregation to the vanilla SWL (as in ESAN) is not the only way to make it expressive for biconnectivity metrics. In particular, simply adding a single-point aggregation (PSWL) already suffices.

Remark A.6. We suspect that PSWL can also encode the resistance distance, but currently we can only prove that the strongest SSWL can encode RD (Appendix J). We leave this as an open problem for future work.

# B. Related Work

Since Xu et al. (2019); Morris et al. (2019) discovered the limited expressiveness of vanilla MPNNs, a large body of work has been devoted to developing GNNs with better expressive power.
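As a concrete illustration of that limitation, the following minimal sketch (our own illustration, not code from any cited work) runs vanilla 1-WL color refinement on two non-isomorphic 2-regular graphs, a 6-cycle versus two disjoint triangles, and confirms that both receive identical color histograms:

```python
from collections import Counter

def wl_refine(adj_list, rounds=None):
    """Vanilla 1-WL color refinement; returns the histogram of node colors."""
    n = len(adj_list)
    colors = [0] * n  # uniform initial colors
    for _ in range(rounds if rounds is not None else n):
        # new color = (own color, sorted multiset of neighbor colors)
        sigs = [(colors[v], tuple(sorted(colors[w] for w in adj_list[v])))
                for v in range(n)]
        relabel = {s: i for i, s in enumerate(sorted(set(sigs)))}
        colors = [relabel[s] for s in sigs]
    return Counter(colors)

# two non-isomorphic 2-regular graphs on 6 nodes
cycle6 = [[(i - 1) % 6, (i + 1) % 6] for i in range(6)]           # C6
two_triangles = [[1, 2], [0, 2], [0, 1], [4, 5], [3, 5], [3, 4]]  # C3 + C3

# 1-WL keeps every node in a single color class in both graphs,
# so it cannot tell the two graphs apart
assert wl_refine(cycle6) == wl_refine(two_triangles)
```

Since every node in a regular graph sees the same neighbor-color multiset at every round, refinement never splits the color classes; this is exactly the failure mode that the stronger algorithms surveyed below address.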
Here, we briefly review the literature on expressive GNNs that is most relevant to this paper.

Higher-order GNNs. Maron et al. (2019a;c); Azizian & Lelarge (2021); Geerts & Reutter (2022) theoretically studied the question of designing provably expressive equivariant GNNs that match the power of the $k$-FWL test for $k > 1$. In this way, they built a hierarchy of GNNs with strictly growing expressivity (similar to this paper). A representative higher-order GNN architecture is the $k$-IGN (Maron et al., 2019c): it stores a feature representation for each node $k$-tuple and updates these features using the higher-order equivariant layers developed in Maron et al. (2019b). Recently, Frasca et al. (2022) proved that all node-based subgraph GNNs can be implemented by 3-IGN, which then implies that the expressive power of subgraph GNNs is intrinsically bounded by 2-FWL (Geerts & Reutter, 2022).

Sparsity-aware GNNs. One major drawback of higher-order GNNs is that their architectural design does not exploit the graph structural information well, since the graph adjacency is only encoded in the initial node features. In light of this, subsequent works like Morris et al. (2020; 2022); Zhao et al. (2022b) incorporated this inductive bias directly into the network layers and designed local versions of higher-order GNNs. For example, Morris et al. (2020) developed the so-called $\delta$-$k$-LWL, which can be seen as a localized version of $k$-WL. Morris et al. (2022) proposed the $(k,s)$-SpeqNets by considering only $k$-tuples whose vertices can be grouped into no more than $s$ connected components. Zhao et al. (2022b) concurrently proposed the $(k,s)$-SETGNN, which is similar to $(k,s)$-SpeqNets. In this paper, we propose a class of localized $k$-FWL, which shares interesting similarities with $\delta$-$k$-LWL. Our major contribution is to establish complete relations between localized 2-FWL, $\delta$-2-LWL, and subgraph GNNs.

Subgraph GNNs.
Subgraph GNNs are an emerging class of higher-order GNNs that compute a feature representation for each subgraph-node pair. The earliest idea of subgraph GNNs may be traced back to Cotta et al. (2021); Papp et al. (2021), which proposed to use node-deleted subgraphs and performed message passing on each subgraph separately without cross-graph interaction. Papp & Wattenhofer (2022) argued for node marking instead of node deletion for better expressive power. Zhang & Li (2021) proposed the Nested GNN (NGNN), a variant of subgraph GNNs that uses $k$-hop ego nets with distance encoding. It further added the global aggregation $\mathrm{agg}_{\mathrm{u}}^{\mathrm{G}}$ to merge the information of all nodes in a subgraph when computing the feature of the subgraph's root node. You et al. (2021) designed the ID-GNN, which is similar to NGNN and also uses $k$-hop ego nets as subgraphs. Bevilacqua et al. (2022) developed a principled class of subgraph GNNs, called ESAN, which first introduced the cross-graph global aggregation into the network design. Zhao et al. (2022a) concurrently proposed GNN-AK and its extension GNN-AK-ctx, which also include the cross-graph global aggregation. Recently, Frasca et al. (2022); Qian et al. (2022) first provided theoretical analyses of various node-based subgraph GNNs by proving that they are intrinsically bounded by 2-FWL. We note that besides node-based subgraph GNNs, one can also develop edge-based subgraph GNNs, which have been explored in Bevilacqua et al. (2022); Huang et al. (2022). Both works showed that the expressive power of edge-based subgraph GNNs can go beyond 2-FWL. Finally, we note that Vignac et al. (2020) proposed a GNN architecture that is somewhat similar to the vanilla subgraph GNN, and the $\delta$-2-LWL proposed in Morris et al. (2020) can also be seen as a subgraph GNN according to Section 8.

Practical expressivity of GNNs. Another line of works sought to develop expressive GNNs from practical considerations.
For example, Fürer (2017); Chen et al. (2020); Arvind et al. (2020) studied the power of WL algorithms in counting graph substructures and pointed out that vanilla MPNNs cannot count/detect cycles, which may severely limit their practical performance in real-world tasks (e.g., in bio-chemistry). In light of this, Bouritsas et al. (2022); Barceló et al. (2021) proposed to incorporate substructure counting (or homomorphism counting) into the initial node features to boost the expressiveness. Bodnar et al. (2021b;a) further proposed a message-passing framework that enables interaction between nodes, edges, and higher-order substructures. Huang et al. (2022) studied the cycle counting power of subgraph GNNs and proposed the $\mathrm{I}^2$-GNN to count cycles of length no more than 6. Recently, Puny et al. (2023) studied the expressive power of GNNs in expressing/approximating equivariant graph polynomials. They showed that computing equivariant polynomials generalizes the problem of counting substructures.

Besides cycle counting, several works explored other aspects of encoding basic graph properties. You et al. (2019); Li et al. (2020); Ying et al. (2021) proposed to use distance encoding to boost the expressiveness of MPNNs or Graph Transformers. In particular, Li et al. (2020) proposed to use a generalized distance called the page-rank distance. Balcilar et al. (2021); Kreuzer et al. (2021); Lim et al. (2022) studied the expressive power of GNNs from the perspective of graph spectra (Cvetkovic et al., 1997). Recently, Zhang et al. (2023) discovered that most prior GNN architectures are not expressive for graph biconnectivity and built an interesting relation between biconnectivity and generalized distance. Here, we extend Zhang et al. (2023) by giving a comprehensive characterization of which SWL equivalence classes can encode distance and biconnectivity.

# C. Open directions

We highlight several open directions for future work as follows.
+ +Regarding higher-order subgraph GNNs. From a theoretical perspective, it is an interesting direction to generalize the results of this paper to higher-order subgraph GNNs (which compute a feature representation for each node $k$ -tuple). We note that such an idea has appeared in Cotta et al. (2021); Qian et al. (2022); Papp & Wattenhofer (2022). However, none of these works explored the possible design space of cross-graph aggregations. Since our results imply that these cross-graph aggregations do essentially improve the expressive power, it may be worthwhile to establish a complete hierarchy of higher-order subgraph GNNs. This may include the following questions: (i) How many expressivity equivalence classes are there? (ii) What are the expressivity inclusion relations between different equivalence classes? (iii) What design principle achieves the maximal expressive power with a minimal number of atomic aggregations? We conjecture that, by symmetrically incorporating $k$ local aggregations, the resulting $k$ -order subgraph GNN achieves the maximal expressiveness and is as expressive as $\mathsf{RelGN}(k)$ (by extending Frasca et al. (2022)). + +Regarding edge-based subgraph GNNs. Another different perspective is to study edge-based subgraph GNNs, which compute a feature representation for each edge-node pair. Importantly, edge-based subgraph GNNs and the corresponding SWL are also a fundamental class of computation models with $O(nm)$ memory complexity and $O(m^2)$ computation complexity. For sparse graphs (i.e. $m = O(n)$ ), such a complexity is quite desirable and is close to that of node-based subgraph GNNs. Yet, it results in enhanced expressiveness: as shown in Bevilacqua et al. (2022); Huang et al. (2022), their proposed edge-based subgraph GNNs are not less powerful than 2-FWL. Therefore, we believe that characterizing the expressiveness hierarchy of edge-based subgraph GNNs is of both theoretical and practical interest. 
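To make the memory comparison above concrete, here is a small illustrative sketch (ours; both helper names are hypothetical) that materializes the two state spaces, one feature slot per (subgraph, node) pair, and counts them on a sparse graph:

```python
def node_based_states(n):
    """Node-based subgraph GNNs: one subgraph per node, one state per
    (subgraph, node) pair -> O(n^2) states in total."""
    return [(u, v) for u in range(n) for v in range(n)]

def edge_based_states(n, edges):
    """Edge-based subgraph GNNs: one subgraph per edge -> O(n*m) states,
    which stays close to O(n^2) on sparse graphs where m = O(n)."""
    return [(e, v) for e in edges for v in range(n)]

# a sparse graph: the cycle C_n has exactly m = n edges
n = 100
edges = [(i, (i + 1) % n) for i in range(n)]

assert len(node_based_states(n)) == n * n
assert len(edge_based_states(n, edges)) == n * len(edges)  # here m = n, so n*m = n^2
```

On dense graphs with $m = \Theta(n^2)$ the edge-based state space grows to $O(n^3)$, which is where the extra expressive power beyond 2-FWL is paid for.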
Another interesting topic is to build expressivity relations between node-based and edge-based subgraph GNNs.

Regarding localized Folklore WL tests. This paper proposed a novel class of color refinement algorithms called localized Folklore WL. Importantly, we show that SLFWL(2) is strictly more powerful than all node-based subgraph GNNs despite having the same complexity. Therefore, an interesting question is whether we can design practical GNN architectures based on SLFWL(2) for both efficiency and better expressiveness. On the theoretical side, it may also be an interesting direction to study higher-order localized Folklore WL tests, in particular SLFWL$(k)$, due to its fundamental nature. We conjecture that SLFWL$(k)$ is strictly more powerful than $\delta$-$k$-LWL (Morris et al., 2020) and strictly less powerful than the standard $k$-FWL. Furthermore, does SLFWL$(k)$ achieve the maximal expressive power among the algorithm class within $O(n^{k-1}m)$ computation cost?

Regarding practical expressiveness of GSWL and SSWL. This paper discusses the practical expressiveness of subgraph GNNs by showing an inherent gap between SWL and PSWL in terms of their ability to encode the distance and biconnectivity of a graph. Yet, it remains an open problem how PSWL, GSWL, and SSWL differ in terms of their practical expressiveness for computing graph properties. This question is particularly important since recently proposed subgraph GNNs are typically bounded by GSWL. Answering this question will thus highlight the power and limitations of prior architectures.

# D. The Equivalence between SWL and Subgraph GNNs

This section aims to prove Proposition 3.2. We restate the proposition below:

Proposition 3.2. The expressive power of any subgraph GNN defined in Section 2 is bounded by a corresponding SWL by matching the policy $\pi$, the aggregation scheme between Definitions 2.1 and 3.1, and the pooling paradigm.
Moreover, when considering bounded-size graphs, for any SWL algorithm there exists a matching subgraph GNN with the same expressive power.

Proof. We first prove that any subgraph GNN defined in Section 2 is bounded by a corresponding SWL. To do this, we will prove the following result: given a pair of graphs $G = (\mathcal{V}_G,\mathcal{E}_G)$ and $H = (\mathcal{V}_H,\mathcal{E}_H)$, for any $t\in \mathbb{N}$ and any vertices $u,v\in \mathcal{V}_G$ and $x,y\in \mathcal{V}_H$, $\chi_G^{(t)}(u,v) = \chi_H^{(t)}(x,y)\Rightarrow h_G^{(t)}(u,v) = h_H^{(t)}(x,y)$, where $h$ and $\chi$ are defined in Definitions 2.1 and 3.1, respectively.

We prove the result by induction over $t$. For the base case of $t = 0$, the result clearly holds since the graph generation policy is matched between the SWL algorithm and the subgraph GNN. Now assume the result holds for all $t \leq T$; we want to prove that it also holds for $t = T + 1$. By Definition 3.1, $\chi_G^{(T + 1)}(u,v) = \chi_H^{(T + 1)}(x,y)$ is equivalent to

$$
\mathsf{agg}_i(u, v, G, \chi_G^{(T)}) = \mathsf{agg}_i(x, y, H, \chi_H^{(T)}), \quad \forall i \in [r].
$$

We separately consider each type of aggregation operation:

- Single-point aggregation. Take $\mathsf{agg}_{\mathsf{vu}}^{\mathsf{P}}$ for example: $\mathsf{agg}_{\mathsf{vu}}^{\mathsf{P}}(u,v,G,\chi_G^{(T)}) = \mathsf{agg}_{\mathsf{vu}}^{\mathsf{P}}(x,y,H,\chi_H^{(T)})$ implies $\chi_G^{(T)}(v,u) = \chi_H^{(T)}(y,x)$. By induction, we have $h_G^{(T)}(v,u) = h_H^{(T)}(y,x)$.
- Local aggregation. Take $\mathsf{agg}_{\mathsf{u}}^{\mathsf{L}}$ for example: $\mathsf{agg}_{\mathsf{u}}^{\mathsf{L}}(u,v,G,\chi_G^{(T)}) = \mathsf{agg}_{\mathsf{u}}^{\mathsf{L}}(x,y,H,\chi_H^{(T)})$ implies

$$
\{\{\chi_G^{(T)}(u, w): w \in \mathcal{N}_{G^u}(v)\}\} = \{\{\chi_H^{(T)}(x, z): z \in \mathcal{N}_{H^x}(y)\}\}.
$$

By induction, it is straightforward to see that

$$
\left\{\left\{h_G^{(T)}(u, w): w \in \mathcal{N}_{G^u}(v)\right\}\right\} = \left\{\left\{h_H^{(T)}(x, z): z \in \mathcal{N}_{H^x}(y)\right\}\right\}.
$$

Therefore,

$$
\sum_{w \in \mathcal{N}_{G^u}(v)} h_G^{(T)}(u, w) = \sum_{z \in \mathcal{N}_{H^x}(y)} h_H^{(T)}(x, z).
$$

- Global aggregation. This case is similar to the above one, and we omit it for clarity.

Combining all these cases, we have

$$
\mathsf{op}_i(u, v, G, h_G^{(T)}) = \mathsf{op}_i(x, y, H, h_H^{(T)}) \quad \forall i \in [r],
$$

and thus $h_G^{(T + 1)}(u,v) = h_H^{(T + 1)}(x,y)$. This completes the induction step.

Let $L$ be the number of layers in the subgraph GNN. Then $\chi_G^{(L)}(u,v) = \chi_H^{(L)}(x,y)$ implies $h_G^{(L)}(u,v) = h_H^{(L)}(x,y)$. Since $\mathsf{agg}_{\mathsf{uv}}^{\mathsf{P}}$ is always present, the stable color mapping $\chi$ satisfies $\chi_G(u,v) = \chi_H(x,y) \Rightarrow \chi_G^{(L)}(u,v) = \chi_H^{(L)}(x,y)$. Therefore, $\chi_G(u,v) = \chi_H(x,y) \Rightarrow h_G^{(L)}(u,v) = h_H^{(L)}(x,y)$.

Finally, consider the pooling paradigm. As in the analysis of the global aggregation, it can be concluded that $c(G) = c(H)$ implies $f(G) = f(H)$, where $c(G)$ and $f(G)$ denote the graph representations computed by SWL and the subgraph GNN, respectively. This finishes the first part of the proof.

It remains to prove that for any SWL algorithm, there exists a matching subgraph GNN with the same expressive power. The key idea is to ensure that whenever $\chi_G^{(t)}(u,v)\neq \chi_H^{(t)}(x,y)$, we have $h_G^{(t)}(u,v)\neq h_H^{(t)}(x,y)$. To achieve this, we rely on injective functions that take a multiset as input. When the size of the multiset is bounded, such injective functions can be easily constructed using the approach proposed in Maron et al.
(2019a), called the power-sum multi-symmetric polynomials (PMP). We note that while Maron et al. (2019a) only focused on the case where the input ranges over sets of a fixed size, the construction can be easily extended to our setting of sets of different but bounded sizes by padding with zero elements. The summation in PMP exactly coincides with the aggregation in Definition 2.1, and the powers can be extracted by the function $\sigma^{(t)}$ in the previous layer. For more details, please refer to Maron et al. (2019a).

Finally, note that when the input graph has bounded size $N$, the SWL iteration must stabilize within $N^2$ steps. Therefore, by using a sufficiently deep GNN (i.e., $L = N^2$), one can guarantee that $\chi_G(u,v) \neq \chi_H(x,y)$ implies $h_G^{(L)}(u,v) \neq h_H^{(L)}(x,y)$. This eventually yields that $c(G) \neq c(H)$ implies $f(G) \neq f(H)$, as desired.

# E. Proof of Theorems in Section 4

This section contains all the missing proofs in Section 4.

# E.1. Preliminary

We first introduce some basic terminology and facts, which will be frequently used in subsequent proofs.

Definition E.1. Let $\chi$ and $\tilde{\chi}$ be two color mappings, with $\chi_G(u,v)$ and $\tilde{\chi}_G(u,v)$ representing the colors of the vertex pair $(u,v)$ in graph $G$. We say:

- $\tilde{\chi}$ is finer than $\chi$, denoted as $\tilde{\chi} \preceq \chi$, if for any two graphs $G = (\mathcal{V}_G, \mathcal{E}_G)$, $H = (\mathcal{V}_H, \mathcal{E}_H)$ and any vertices $u, v \in \mathcal{V}_G$, $x, y \in \mathcal{V}_H$, we have $\tilde{\chi}_G(u, v) = \tilde{\chi}_H(x, y) \implies \chi_G(u, v) = \chi_H(x, y)$.
- $\tilde{\chi}$ and $\chi$ are equivalent, denoted as $\tilde{\chi} \simeq \chi$, if $\tilde{\chi} \preceq \chi$ and $\chi \preceq \tilde{\chi}$.
- $\tilde{\chi}$ is strictly finer than $\chi$, denoted as $\tilde{\chi} \prec \chi$, if $\tilde{\chi} \preceq \chi$ and $\tilde{\chi} \not\simeq \chi$.

Remark E.2.
Several simple facts regarding this definition are as follows. + +(a) For any color refinement algorithm, let $\{\chi^{(t)}\}_{t=0}^{\infty}$ be the sequence of color mappings generated at each iteration $t$ , then $\chi^{(t+1)} \preceq \chi^{(t)}$ for any $t$ . This is exactly why we call the algorithm "color refinement". As a result, the stable color mapping $\chi$ is finer than any intermediate color mapping $\chi^{(t)}$ . +(b) Definition E.1 is closely related to the power of WL algorithms. Indeed, let $\chi^{\mathsf{A}}$ and $\chi^{\mathsf{B}}$ be two stable color mappings generated by algorithms A and B, respectively. If both algorithms use the same pooling paradigm, then $\chi^{\mathsf{B}} \preceq \chi^{\mathsf{A}}$ implies that B is more powerful than A, i.e. A $\preceq$ B. +(c) Consider two SWL algorithms A and B with the same graph generation policy, but with different aggregation schemes $\mathcal{A}$ and $\mathcal{B}$ . Denote $\chi^{\mathrm{A}}$ and $\chi^{\mathrm{B}}$ as the corresponding stable color mappings. Define a new color mapping $\tilde{\chi} = \mathcal{T}(\mathcal{A},\chi^{\mathrm{B}})$ that “refines” $\chi^{\mathrm{B}}$ using aggregation scheme $\mathcal{A} \cup \{\mathrm{agg}_{uv}^{\mathrm{P}}\}$ : + +$$ +[ \mathcal {T} (\mathcal {A}, \chi) ] _ {G} (u, v) = \operatorname {h a s h} \left(\operatorname {a g g} _ {1} (u, v, G, \chi_ {G}), \dots , \operatorname {a g g} _ {r} (u, v, G, \chi_ {G})\right), \tag {3} +$$ + +where $\mathcal{A} \cup \{\mathrm{agg}_{\mathrm{uv}}^{\mathrm{P}}\} = \{\mathrm{agg}_i : i \in [r]\}$ . Then we can prove that $\chi^{\mathrm{B}} \preceq \tilde{\chi} \Rightarrow \chi^{\mathrm{B}} \preceq \chi^{\mathrm{A}}$ . If the two algorithms further share the same pooling paradigm, Remark E.2(b) yields $\mathsf{A} \preceq \mathsf{B}$ . This gives a simple way to compare the expressiveness of different algorithms. + +Proof of Remark E.2(c). 
Define a sequence of color mappings $\{\tilde{\chi}^{(t)}\}_{t = 0}^{\infty}$ recursively, such that $\tilde{\chi}^{(0)} = \chi^{\mathsf{B}}$ and

$$
\tilde{\chi}_G^{(t+1)}(u, v) = \mathsf{hash}\big(\mathsf{agg}_1(u, v, G, \tilde{\chi}_G^{(t)}), \dots, \mathsf{agg}_r(u, v, G, \tilde{\chi}_G^{(t)})\big)
$$

for any $u, v$ in graph $G$. Clearly, $\tilde{\chi}^{(1)}$ is just $\tilde{\chi}$ in Remark E.2(c). Since we have both $\tilde{\chi}^{(0)} \preceq \tilde{\chi}^{(1)}$ (by the assumption $\chi^{\mathsf{B}} \preceq \tilde{\chi}$) and $\tilde{\chi}^{(1)} \preceq \tilde{\chi}^{(0)}$ (by Remark E.2(a)), we have $\tilde{\chi}^{(1)} \simeq \tilde{\chi}^{(0)}$. Therefore, $\tilde{\chi}^{(t)} \simeq \chi^{\mathsf{B}}$ holds for all $t \in \mathbb{N}$. On the other hand, a simple induction over $t$ yields $\tilde{\chi}^{(t)} \preceq \chi^{\mathsf{A},(t)}$ (where $\chi^{\mathsf{A},(t)}$ is the color mapping at iteration $t$ of algorithm A), since both are refined by the same aggregation scheme $\mathcal{A}$ (for the base case, $\tilde{\chi}^{(0)} = \chi^{\mathsf{B}} \preceq \chi^{\mathsf{B},(0)} = \chi^{\mathsf{A},(0)}$). By taking $t \to \infty$, this finally yields $\tilde{\chi} \preceq \chi^{\mathsf{A}}$, namely, $\chi^{\mathsf{B}} \preceq \chi^{\mathsf{A}}$, as desired.

# E.2. Discussions on graph generation policies

In this subsection, we discuss graph generation policies in detail and prove that the canonical node marking policy already achieves the best expressiveness among all of these policies (Proposition 4.2).

Depending on the choice of $(G^u, h_G^u)$, there are a total of 7 non-trivial combinations.
For ease of presentation, we first give a symbol to each of them:

- NM: the node marking policy on the original graph;
- DE: the distance encoding policy on the original graph;
- $\mathsf{EGO}(k)$: the $k$-hop ego network policy with constant node features;
- $\mathsf{EGO}(k) + \mathsf{NM}$: the policy with both node marking and $k$-hop ego network;
- $\mathsf{EGO}(k) + \mathsf{DE}$: the policy with both distance encoding and $k$-hop ego network;
- ND: the node deletion policy with constant node features;
- NDM: the policy with both node deletion and marking.

Proposition E.3. Consider any fixed aggregation scheme $\mathcal{A}$ that contains the two basic aggregations $\mathrm{agg}_{\mathsf{uv}}^{\mathsf{P}}$ and $\mathrm{agg}_{\mathsf{u}}^{\mathsf{L}}$ in Definition 3.1, and consider any fixed pooling paradigm Pool defined in Section 3. We have the following results:

- NM is as powerful as DE;
- $\mathsf{EGO}(k) + \mathsf{NM}$ is as powerful as $\mathsf{EGO}(k) + \mathsf{DE}$;
- NM is more powerful than $\mathsf{EGO}(k) + \mathsf{NM}$;
- NM is more powerful than NDM;
- $\mathsf{EGO}(k) + \mathsf{NM}$ is more powerful than $\mathsf{EGO}(k)$;
- NDM is more powerful than ND.

Proof. Let $\chi^{\text{Policy},(t)}$ be the color mapping at iteration $t$ of the SWL algorithm with graph generation policy Policy, aggregation scheme $\mathcal{A}$, and pooling paradigm Pool, and let $\chi^{\text{Policy}}$ be the corresponding stable color mapping. Here, Policy $\in$ {NM, DE, EGO(k), EGO(k) + NM, EGO(k) + DE, ND, NDM}.

We first consider the case NM vs. DE. By definition, $\chi^{\mathrm{DE},(0)}$ is finer than $\chi^{\mathrm{NM},(0)}$. Since the subgraphs $\{G^u : u \in \mathcal{V}_G\}$ are the same for the two policies and the aggregation scheme $\mathcal{A}$ is fixed, a simple induction over $t$ then yields $\chi^{\mathrm{DE},(t)} \preceq \chi^{\mathrm{NM},(t)}$ for any $t \in \mathbb{N}$, namely, $\chi^{\mathrm{DE}} \preceq \chi^{\mathrm{NM}}$.
It follows that DE is more powerful than NM (by Remark E.2(b)). + +To prove the converse direction, we leverage Lemma E.4 (which will be proved later). Lemma E.4 implies that $\chi^{\mathsf{NM},(D)}\preceq$ $\chi^{\mathrm{DE},(0)}$ when the input graphs have bounded diameter $D$ . Then using the same analysis as above, we have $\chi^{\mathsf{NM},(D + t)}\preceq$ $\chi^{\mathrm{DE},(t)}$ for any $t\in \mathbb{N}$ . By taking $t\to \infty$ , this implies that $\chi^{\mathsf{NM}}\preceq \chi^{\mathsf{DE}}$ , namely, NM is more powerful than DE (by Remark E.2(b)). Combining the two directions concludes the proof of the first bullet. + +The proof for the case $\mathsf{EGO}(k) + \mathsf{NM}$ vs. $\mathsf{EGO}(k) + \mathsf{DE}$ is almost the same, so we omit it for clarity. + +We next turn to the case NM vs. $\mathsf{EGO}(k) + \mathsf{NM}$ . Initially, by definition we have $\chi^{\mathsf{NM},(0)} = \chi^{\mathsf{EGO}(k) + \mathsf{NM},(0)}$ . Therefore, $\chi^{\mathsf{NM},(D)} \preceq \chi^{\mathsf{EGO}(k) + \mathsf{NM},(0)}$ (by Remark E.2(a)), where we assume the input graphs have bounded diameter $D$ . Below, we aim to prove that $\chi^{\mathsf{NM},(t)} \preceq \chi^{\mathsf{EGO}(k) + \mathsf{NM},(t - D)}$ for any integer $t \geq D$ . We prove it by induction. + +The base case of $t = D$ already holds. Assume the above result holds for $t = T$ and consider $t = T + 1$ . Let $G = (\mathcal{V}_G, \mathcal{E}_G)$ and $H = (\mathcal{V}_H, \mathcal{E}_H)$ be two graphs with diameter no more than $D$ . Consider any vertices $u, v \in \mathcal{V}_G$ and $x, y \in \mathcal{V}_H$ satisfying $\chi_G^{\mathrm{NM},(T + 1)}(u, v) = \chi_H^{\mathrm{NM},(T + 1)}(x, y)$ , and we want to prove that $\chi_G^{\mathrm{EGO}(k) + \mathrm{NM},(T + 1 - D)}(u, v) = \chi_H^{\mathrm{EGO}(k) + \mathrm{NM},(T + 1 - D)}(x, y)$ . 
By Definition 3.1, + +$$ +\operatorname {a g g} _ {i} (u, v, G, \chi_ {G} ^ {\mathsf {N M}, (T)}) = \operatorname {a g g} _ {i} (x, y, H, \chi_ {H} ^ {\mathsf {N M}, (T)}) +$$ + +holds for all $i\in [r]$ . If $\mathsf{agg}_i$ is any single-point aggregation or global aggregation, by induction we clearly have $\mathsf{agg}_i(u,v,G,\chi_G^{\mathsf{EGO}(k) + \mathsf{NM},(T - D)}) = \mathsf{agg}_i(x,y,H,\chi_H^{\mathsf{EGO}(k) + \mathsf{NM},(T - D)})$ . If $\mathsf{agg}_i$ is any local aggregation, e.g., $\mathsf{agg}_{\mathfrak{u}}^{\mathsf{L}}$ , we have + +$$ +\{\{\chi_ {G} ^ {\mathsf {N M}, (T)} (u, w): w \in \mathcal {N} _ {G} (v) \} \} = \{\{\chi_ {H} ^ {\mathsf {N M}, (T)} (x, z): z \in \mathcal {N} _ {H} (y) \} \} \tag {4} +$$ + +for policy NM. We additionally need to prove that + +$$ +\{\left\{\chi_ {G} ^ {\mathsf {N M}, (T)} (u, w): w \in \mathcal {N} _ {G ^ {u}} (v) \right\} \} = \left\{\left\{\chi_ {H} ^ {\mathsf {N M}, (T)} (x, z): z \in \mathcal {N} _ {H ^ {x}} (y) \right\} \right\}, \tag {5} +$$ + +where $G^{u}$ and $H^{x}$ are generated by policy $\mathsf{EGO}(k) + \mathsf{NM}$ . This is due to the following observations: if $w \in \mathcal{N}_G^k(u)$ and $z \notin \mathcal{N}_H^k(x)$ , then $\mathrm{dis}_G(u, w) \neq \mathrm{dis}_H(x, z)$ . Therefore, by Lemma E.4 we have $\chi_G^{\mathsf{NM},(T)}(u, w) \neq \chi_H^{\mathsf{NM},(T)}(x, z)$ . Similarly, + +if $w \notin \mathcal{N}_G^k(u)$ and $z \in \mathcal{N}_H^k(x)$ , then $\chi_G^{\mathsf{NM},(T)}(u, w) \neq \chi_H^{\mathsf{NM},(T)}(x, z)$ . This yields (5). By induction, + +$$ +\{\{\chi_ {G} ^ {\mathsf {E G O} (k) + \mathsf {N M}, (T - D)} (u, w): w \in \mathcal {N} _ {G ^ {u}} (v) \} \} = \{\{\chi_ {H} ^ {\mathsf {E G O} (k) + \mathsf {N M}, (T - D)} (x, z): z \in \mathcal {N} _ {H ^ {x}} (y) \} \}. 
+$$ + +Therefore, in all cases we have + +$$ +\mathsf {a g g} _ {i} (u, v, G, \chi_ {G} ^ {\mathsf {E G O} (k) + \mathsf {N M}, (T - D)}) = \mathsf {a g g} _ {i} (x, y, H, \chi_ {H} ^ {\mathsf {E G O} (k) + \mathsf {N M}, (T - D)}). +$$ + +This concludes the induction step. We finally obtain that the stable mappings satisfy $\chi^{\mathsf{NM}}\preceq \chi^{\mathsf{EGO}(k) + \mathsf{NM}}$ and thus NM is more powerful than $\mathsf{EGO}(k) + \mathsf{NM}$ (by Remark E.2(b)). + +We next turn to the case NM vs. NDM. This case is similar to the above one. Initially, by definition we have $\chi^{\mathsf{NM},(0)} = \chi^{\mathsf{NDM},(0)}$ . Therefore, $\chi^{\mathsf{NM},(D)} \preceq \chi^{\mathsf{NDM},(0)}$ where we assume the input graphs have bounded diameter $D$ . We aim to prove that $\chi^{\mathsf{NM},(t)} \preceq \chi^{\mathsf{NDM},(t - D)}$ for any integer $t \geq D$ . We prove it by induction. The base case of $t = D$ already holds. + +Assume the above result holds for $t = T$ and consider $t = T + 1$ . Let $\chi_G^{\mathsf{NM},(T + 1)}(u,v) = \chi_H^{\mathsf{NM},(T + 1)}(x,y)$ . Then by Definition 3.1, + +$$ +\mathsf {a g g} _ {i} (u, v, G, \chi_ {G} ^ {\mathsf {N M}, (T)}) = \mathsf {a g g} _ {i} (x, y, H, \chi_ {H} ^ {\mathsf {N M}, (T)}) +$$ + +holds for all $i \in [r]$ . If $\mathsf{agg}_i$ is any single-point aggregation or global aggregation, by induction we have $\mathsf{agg}_i(u,v,G,\chi_G^{\mathsf{NDM},(T - D)}) = \mathsf{agg}_i(x,y,H,\chi_H^{\mathsf{NDM},(T - D)})$ . If $\mathsf{agg}_i$ is any local aggregation, e.g., $\mathsf{agg}_u^{\mathsf{L}}$ , we have (4) for policy NM, and we additionally need to prove (5), where $G^{u}$ and $H^{x}$ are generated by policy NDM. Note that we have $\mathrm{dis}_G(u,v) = \mathrm{dis}_H(x,y)$ due to the assumption $\chi_G^{\mathsf{NM},(T + 1)}(u,v) = \chi_H^{\mathsf{NM},(T + 1)}(x,y)$ and Lemma E.4. 
Consider the following three cases:

- If $\mathrm{dis}_G(u,v) = \mathrm{dis}_H(x,y)\geq 2$, then $\mathcal{N}_{G^u}(v) = \mathcal{N}_G(v)$ and $\mathcal{N}_{H^x}(y) = \mathcal{N}_H(y)$.
- If $\mathrm{dis}_G(u,v) = \mathrm{dis}_H(x,y) = 1$, then $\mathcal{N}_{G^u}(v) = \mathcal{N}_G(v)\backslash \{u\}$ and $\mathcal{N}_{H^x}(y) = \mathcal{N}_H(y)\backslash \{x\}$. We also have $\chi_G^{\mathrm{NM},(T)}(u,u) = \chi_H^{\mathrm{NM},(T)}(x,x)$, because by (4) there exists a vertex $z\in \mathcal{V}_H$ such that $\chi_G^{\mathrm{NM},(T)}(u,u) = \chi_H^{\mathrm{NM},(T)}(x,z)$, implying $0 = \mathrm{dis}_G(u,u) = \mathrm{dis}_H(x,z)$ by Lemma E.4.
- If $\mathrm{dis}_G(u,v) = \mathrm{dis}_H(x,y) = 0$, then $\mathcal{N}_{G^u}(v) = \emptyset$ and $\mathcal{N}_{H^x}(y) = \emptyset$.

In all cases (5) holds, which concludes the induction step. We finally obtain that the stable mappings satisfy $\chi^{\mathrm{NM}}\preceq \chi^{\mathrm{NDM}}$, and thus NM is more powerful than NDM (by Remark E.2(b)).

We next turn to the case $\mathsf{EGO}(k) + \mathsf{NM}$ vs. $\mathsf{EGO}(k)$. This case follows by a simple induction showing that $\chi^{\mathsf{EGO}(k) + \mathsf{NM},(t)} \preceq \chi^{\mathsf{EGO}(k),(t)}$ for all $t \in \mathbb{N}$.

We finally turn to the case NDM vs. ND. This case also follows by a simple induction showing that $\chi^{\mathsf{NDM},(t)}\preceq \chi^{\mathsf{ND},(t)}$ for all $t\in \mathbb{N}$.

It remains to prove the following key lemma:

Lemma E.4. Consider an SWL algorithm A such that the aggregation scheme $\mathcal{A}$ contains the two basic aggregations $\mathsf{agg}_{\mathsf{uv}}^{\mathsf{P}}$ and $\mathsf{agg}_{\mathsf{u}}^{\mathsf{L}}$ in Definition 3.1, and the node marking policy is used (possibly along with an ego network policy). Denote $\chi^{(t)}$ as the color mapping of A at iteration $t$.
For any graphs $G = (\mathcal{V}_G,\mathcal{E}_G)$, $H = (\mathcal{V}_H,\mathcal{E}_H)$ and vertices $u,v\in \mathcal{V}_G$, $x,y\in \mathcal{V}_H$, when $t\geq \min(D_G,D_H)$, we have $\chi_{G}^{(t)}(u,v) = \chi_{H}^{(t)}(x,y)\Rightarrow \mathrm{dis}_{G^u}(u,v) = \mathrm{dis}_{H^x}(x,y)$. Here, $D_G$ and $D_H$ are the diameters of graphs $G$ and $H$, respectively.

Proof. It suffices to prove the following result: if $\mathrm{dis}_{G^u}(u,v)\neq \mathrm{dis}_{H^x}(x,y)$, then $\chi_G^{(t)}(u,v)\neq \chi_H^{(t)}(x,y)$ for $t = \min(\mathrm{dis}_{G^u}(u,v),\mathrm{dis}_{H^x}(x,y))$. We prove it by induction over $t$.

For the base case of $t = 0$, without loss of generality we can assume $\mathrm{dis}_{G^u}(u,v) = 0$ and $\mathrm{dis}_{H^x}(x,y) > 0$. For the node marking policy, we clearly have $\chi_G^{(0)}(u,v) \neq \chi_H^{(0)}(x,y)$ since $v = u$ but $y \neq x$.

Assume the result holds for $t \leq T$ and consider $t = T + 1$. Without loss of generality, we can assume $\mathrm{dis}_{G^u}(u,v) = T + 1$ and $\mathrm{dis}_{H^x}(x,y) > T + 1$ (remark: $\mathrm{dis}_{H^x}(x,y)$ can be $\infty$ for the ego network policy). If $\chi_G^{(T + 1)}(u,v) = \chi_H^{(T + 1)}(x,y)$, by the definition of $\mathsf{agg}_{\mathsf{u}}^{\mathsf{L}}$ we have

$$
\left\{\left\{\chi_G^{(T)}(u, w): w \in \mathcal{N}_{G^u}(v)\right\}\right\} = \left\{\left\{\chi_H^{(T)}(x, z): z \in \mathcal{N}_{H^x}(y)\right\}\right\}.
$$

Pick any vertex $w \in \mathcal{N}_{G^u}(v)$ satisfying $\mathrm{dis}_{G^u}(u, w) + 1 = \mathrm{dis}_{G^u}(u, v)$. Then, there is a vertex $z \in \mathcal{N}_{H^x}(y)$ such that $\chi_G^{(T)}(u, w) = \chi_H^{(T)}(x, z)$. By induction, $\mathrm{dis}_{G^u}(u, w) = \mathrm{dis}_{H^x}(x, z)$. This yields a contradiction, since

$$
\mathrm{dis}_{H^x}(x, y) \leq \mathrm{dis}_{H^x}(x, z) + 1 = \mathrm{dis}_{G^u}(u, w) + 1 = \mathrm{dis}_{G^u}(u, v) = T + 1.
+$$ + +This concludes the induction step. + +The above lemma directly leads to the following corollary, which is useful in subsequent analysis. + +Corollary E.5. Let $\chi$ be the stable color mapping of SWL algorithm $\mathsf{A}(\mathcal{A},\mathsf{Pool})$ defined in Definition 4.3, satisfying $\mathsf{agg}_{\mathsf{u}}^{\mathsf{L}}\in \mathcal{A}$ . For any graphs $G = (\mathcal{V}_G,\mathcal{E}_G)$ , $H = (\mathcal{V}_H,\mathcal{E}_H)$ and vertices $u,v\in \mathcal{V}_G$ , $x,y\in \mathcal{V}_H$ , if $\chi_G(u,v) = \chi_H(x,y)$ , then + +- $\mathrm{dis}_G(u,v) = \mathrm{dis}_H(x,y)$ ; +- $\chi_G(u,u) = \chi_H(x,x)$ . + +Proof. The first bullet directly follows from Lemma E.4. The second bullet can be proved by induction over the distance $\mathrm{dis}_G(u,v)$ . For the base case of $\mathrm{dis}_G(u,v) = \mathrm{dis}_H(x,y) = 0$ , $u = v$ , $x = y$ , and the result already holds. For the induction step, the proof is similar to the above proof of Lemma E.4, in that we can find $w \in \mathcal{N}_G(v)$ and $z \in \mathcal{N}_H(y)$ such that $\mathrm{dis}_G(u,w) + 1 = \mathrm{dis}_G(u,v)$ and $\mathrm{dis}_H(x,z) + 1 = \mathrm{dis}_H(x,y)$ , and $\chi_G(u,w) = \chi_H(x,z)$ (we omit the detail here). This finishes the induction step and concludes the proof. + +# E.3. Hierarchy of different aggregation schemes + +This subsection gives a complete analysis of different aggregation schemes in SWL, which is related to the proofs of Theorem 4.4. Note that we focus on the canonical node marking policy with a fixed pooling paradigm Pool throughout this subsection. In all proofs, we denote $G = (\mathcal{V}_G, \mathcal{E}_G)$ and $H = (\mathcal{V}_H, \mathcal{E}_H)$ as any connected graphs. + +Lemma E.6. Let $\chi$ be the stable color mapping of SWL algorithm A(A, Pool) defined in Definition 4.3, satisfying $\mathsf{agg}_{\mathbf{u}}^{\mathsf{L}} \in \mathcal{A}$ . 
For any vertices $u \in \mathcal{V}_G$ and $x \in \mathcal{V}_H$ , if $\chi_G(u, u) = \chi_H(x, x)$ , then $\{\{\chi_G(u, v) : v \in \mathcal{V}_G\}\} = \{\{\chi_H(x, y) : y \in \mathcal{V}_H\}\}$ . + +Proof. Actually, we can prove a stronger result: for any $k \in \mathbb{N}$ , + +$$ +\{\{\chi_{G}(u, v): v \in \mathcal{N}_{G}^{k}(u)\}\} = \{\{\chi_{H}(x, y): y \in \mathcal{N}_{H}^{k}(x)\}\}. \tag{6} +$$ + +This implies Lemma E.6 because $G$ and $H$ are connected graphs. + +We prove it by induction over $k$ . The base case of $k = 0$ is trivial. Now assume (6) holds for all $k \leq K$ , and we want to prove that (6) holds for $k = K + 1$ . Using the condition $\mathsf{agg}_{\mathsf{u}}^{\mathsf{L}} \in \mathcal{A}$ , for any vertices $v \in \mathcal{V}_G$ and $y \in \mathcal{V}_H$ satisfying $\chi_G(u, v) = \chi_H(x, y)$ , we have + +$$ +\{\{\chi_{G}(u, w): w \in \mathcal{N}_{G}(v)\}\} = \{\{\chi_{H}(x, z): z \in \mathcal{N}_{H}(y)\}\}. \tag{7} +$$ + +Combining (7) with (6), we obtain + +$$ +\bigcup_{v \in \mathcal{D}_{G}^{K}(u)} \{\{\chi_{G}(u, w): w \in \mathcal{N}_{G}(v)\}\} = \bigcup_{y \in \mathcal{D}_{H}^{K}(x)} \{\{\chi_{H}(x, z): z \in \mathcal{N}_{H}(y)\}\}, \tag{8} +$$ + +where we define + +$$ +\mathcal{D}_{G}^{K}(u) := \mathcal{N}_{G}^{K}(u) \backslash \mathcal{N}_{G}^{K - 1}(u) = \{v \in \mathcal{V}_{G}: \mathrm{dis}_{G}(u, v) = K\}. +$$ + +Here, each vertex $w$ in (8) satisfies $K - 1 \leq \mathrm{dis}_G(u,w) \leq K + 1$ , and each vertex $z$ in (8) satisfies $K - 1 \leq \mathrm{dis}_H(x,z) \leq K + 1$ . By Corollary E.5, for any vertices $w \in \mathcal{V}_G$ and $z \in \mathcal{V}_H$ , $\mathrm{dis}_G(u,w) \neq \mathrm{dis}_H(x,z)$ implies $\chi_G(u,w) \neq \chi_H(x,z)$ .
Therefore, + +$$ +\bigcup_ {v \in \mathcal {D} _ {G} ^ {K} (u)} \{\{\chi_ {G} (u, w): w \in \mathcal {N} _ {G} (v) \cap \mathcal {D} _ {G} ^ {K + 1} (u) \} \} = \bigcup_ {y \in \mathcal {D} _ {H} ^ {K} (x)} \{\{\chi_ {H} (x, z): z \in \mathcal {N} _ {H} (y) \cap \mathcal {D} _ {H} ^ {K + 1} (x) \} \}. \tag {9} +$$ + +Rearranging the terms in (9) yields an equivalent formula: + +$$ +\bigcup_ {w \in \mathcal {D} _ {G} ^ {K + 1} (u)} \left\{\left\{\chi_ {G} (u, w) \right\} \right\} \times \left| \mathcal {N} _ {G} (w) \cap \mathcal {D} _ {G} ^ {K} (u) \right| = \bigcup_ {z \in \mathcal {D} _ {H} ^ {K + 1} (x)} \left\{\left\{\chi_ {H} (x, z) \right\} \right\} \times \left| \mathcal {N} _ {H} (z) \cap \mathcal {D} _ {H} ^ {K} (x) \right|, \tag {10} +$$ + +where we denote $\{\{c\}\} \times M$ as a multiset containing $M$ repeated elements $c$ . Next, note that if $\chi_G(u,w) = \chi_H(x,z)$ for some $w \in \mathcal{V}_G$ and $z \in \mathcal{V}_H$ , then by (7) and Corollary E.5, we have $|\mathcal{N}_G(w) \cap \mathcal{D}_G^K(u)| = |\mathcal{N}_H(z) \cap \mathcal{D}_H^K(x)|$ . This proves + +$$ +\left\{\left\{\chi_ {G} (u, w): w \in \mathcal {N} _ {G} ^ {K + 1} (u) \right\} \right\} = \left\{\left\{\chi_ {H} (x, z): z \in \mathcal {N} _ {H} ^ {K + 1} (x) \right\} \right\} +$$ + +and finishes the induction step. + +Corollary E.7. Let $\chi^{\mathsf{L}}$ , $\chi^{\mathsf{G}}$ , and $\chi^{\mathsf{LG}}$ be the stable color mappings of SWL algorithms $\mathsf{A}(\mathcal{A} \cup \{\mathsf{agg}_{\mathsf{u}}^{\mathsf{L}}\}, \mathsf{Pool})$ , $\mathsf{A}(\mathcal{A} \cup \{\mathsf{agg}_{\mathsf{u}}^{\mathsf{G}}\}, \mathsf{Pool})$ and $\mathsf{A}(\mathcal{A} \cup \{\mathsf{agg}_{\mathsf{u}}^{\mathsf{L}}, \mathsf{agg}_{\mathsf{u}}^{\mathsf{G}}\}, \mathsf{Pool})$ , respectively. Then, $\chi^{\mathsf{LG}} \simeq \chi^{\mathsf{L}} \preceq \chi^{\mathsf{G}}$ . + +Proof. The proof is based on Remark E.2(c). We first prove that $\chi^{\mathsf{L}}\preceq \chi^{\mathsf{G}}$ . 
Define an auxiliary color mapping $\tilde{\chi} = \mathcal{T}(\mathcal{A}\cup \{\mathrm{agg}_{\mathsf{u}}^{\mathsf{G}}\} ,\chi^{\mathsf{L}})$ where $\mathcal{T}$ is defined in (3). It suffices to prove that $\chi^{\mathsf{L}}\preceq \tilde{\chi}$ . + +Consider any vertices $u, v \in \mathcal{V}_G$ and $x, y \in \mathcal{V}_H$ satisfying $\chi_G^{\mathsf{L}}(u,v) = \chi_H^{\mathsf{L}}(x,y)$ . Since the mapping $\chi^{\mathsf{L}}$ is already stable, for any $\mathrm{agg} \in \mathcal{A}$ , we have + +$$ +\mathsf{agg}(u, v, G, \chi_{G}^{\mathsf{L}}) = \mathsf{agg}(x, y, H, \chi_{H}^{\mathsf{L}}). +$$ + +Moreover, due to the use of $\mathsf{agg}_{\mathsf{u}}^{\mathsf{L}}$ , by Corollary E.5 we have $\chi_G^{\mathsf{L}}(u,u) = \chi_H^{\mathsf{L}}(x,x)$ . Using Lemma E.6, we further obtain + +$$ +\{\{\chi_{G}^{\mathsf{L}}(u, w): w \in \mathcal{V}_{G}\}\} = \{\{\chi_{H}^{\mathsf{L}}(x, z): z \in \mathcal{V}_{H}\}\}. +$$ + +Namely, + +$$ +\mathsf{agg}_{\mathsf{u}}^{\mathsf{G}}(u, v, G, \chi_{G}^{\mathsf{L}}) = \mathsf{agg}_{\mathsf{u}}^{\mathsf{G}}(x, y, H, \chi_{H}^{\mathsf{L}}). +$$ + +Therefore, $\tilde{\chi} (u,v) = \tilde{\chi} (x,y)$ . We have proved $\chi^{\mathsf{L}}\preceq \tilde{\chi}$ . + +We next turn to $\chi^{\mathrm{L}}\simeq \chi^{\mathrm{LG}}$ , for which it suffices to prove $\chi^{\mathrm{L}}\preceq \chi^{\mathrm{LG}}$ (the converse direction $\chi^{\mathrm{LG}}\preceq \chi^{\mathrm{L}}$ trivially holds, since $\chi^{\mathrm{LG}}$ is computed with a superset of aggregations). The process is exactly the same as above. + +Lemma E.8. Let $\chi$ be the stable color mapping of SWL algorithm A(A, Pool) defined in Definition 4.3, satisfying $\mathsf{agg}_{\mathsf{u}}^{\mathsf{G}} \in \mathcal{A}$ . For any vertices $u, v \in \mathcal{V}_G$ and $x, y \in \mathcal{V}_H$ , if $\chi_G(u, v) = \chi_H(x, y)$ , then $\chi_G(u, u) = \chi_H(x, x)$ . + +Proof. Since $\mathsf{agg}_{\mathsf{u}}^{\mathsf{G}} \in \mathcal{A}$ , we have + +$$ +\{\{\chi_{G}(u, w): w \in \mathcal{V}_{G}\}\} = \{\{\chi_{H}(x, z): z \in \mathcal{V}_{H}\}\}.
+$$ + +Therefore, there is a vertex $z \in \mathcal{V}_H$ such that $\chi_G(u,u) = \chi_H(x,z)$ . By definition of node marking policy, we must have $x = z$ (otherwise, the initial color satisfies $\chi_G^{(0)}(u,u) \neq \chi_H^{(0)}(x,z)$ , a contradiction). This already proves that $\chi_G(u,u) = \chi_H(x,x)$ . + +Corollary E.9. Let $\chi^{\mathsf{G}}$ , $\chi^{\mathsf{P}}$ , and $\chi^{\mathsf{GP}}$ be the stable color mappings of SWL algorithms $\mathsf{A}(\mathcal{A} \cup \{\mathsf{agg}_{\mathsf{u}}^{\mathsf{G}}\}, \mathsf{Pool})$ , $\mathsf{A}(\mathcal{A} \cup \{\mathsf{agg}_{\mathsf{uu}}^{\mathsf{P}}\}, \mathsf{Pool})$ , and $\mathsf{A}(\mathcal{A} \cup \{\mathsf{agg}_{\mathsf{u}}^{\mathsf{G}}, \mathsf{agg}_{\mathsf{uu}}^{\mathsf{P}}\}, \mathsf{Pool})$ , respectively. Then, $\chi^{\mathsf{GP}} \simeq \chi^{\mathsf{G}} \preceq \chi^{\mathsf{P}}$ . + +Proof. Similar to Corollary E.7, the proof is based on Remark E.2(c). We only prove $\chi^{\mathsf{G}}\preceq \chi^{\mathsf{P}}$ , and the proof of $\chi^{\mathsf{G}}\preceq \chi^{\mathsf{GP}}$ is exactly the same. Define an auxiliary color mapping $\tilde{\chi} = \mathcal{T}(\mathcal{A}\cup \{\mathrm{agg}_{\mathrm{uu}}^{\mathsf{P}}\} ,\chi^{\mathsf{G}})$ where $\mathcal{T}$ is defined in (3). It suffices to prove that $\chi^{\mathsf{G}}\preceq \tilde{\chi}$ . + +Consider any vertices $u, v \in \mathcal{V}_G$ and $x, y \in \mathcal{V}_H$ satisfying $\chi_G^{\mathrm{G}}(u, v) = \chi_H^{\mathrm{G}}(x, y)$ . Due to the presence of $\mathrm{agg}_{\mathrm{u}}^{\mathrm{G}}$ , by Lemma E.8 we have $\chi_G^{\mathrm{G}}(u, u) = \chi_H^{\mathrm{G}}(x, x)$ . This already implies + +$$ +\mathsf {a g g} _ {\mathsf {u u}} ^ {\mathsf {P}} (u, v, G, \chi_ {G} ^ {\mathsf {G}}) = \mathsf {a g g} _ {\mathsf {u u}} ^ {\mathsf {P}} (x, y, H, \chi_ {H} ^ {\mathsf {G}}). 
+$$ + +For any $\mathbf{agg} \in \mathcal{A}$ , we also have + +$$ +\mathsf {a g g} (u, v, G, \chi_ {G} ^ {\mathsf {G}}) = \mathsf {a g g} (x, y, H, \chi_ {H} ^ {\mathsf {G}}). +$$ + +Therefore, $\tilde{\chi}(u,v) = \tilde{\chi}(x,y)$ , namely, $\chi^{\mathsf{G}} \preceq \tilde{\chi}$ . + +Lemma E.10. Let $\chi$ be the stable color mapping of SWL algorithm A( $\{\mathsf{agg}_{\mathbf{u}}^{\mathsf{L}}, \mathsf{agg}_{\mathbf{v}}^{\mathsf{L}}\}$ , Pool). Then for any vertices $u, v \in \mathcal{V}_G$ and $x, y \in \mathcal{V}_H$ , if $\chi_G(u, v) = \chi_H(x, y)$ , then $\chi_G(v, u) = \chi_H(y, x)$ . + +Proof. We will prove a stronger result: let $\chi^{(t)}$ be the color mapping at iteration $t$ , then for any $t \in \mathbb{N}$ , $\chi_G^{(t)}(u, v) = \chi_H^{(t)}(x, y) \iff \chi_G^{(t)}(v, u) = \chi_H^{(t)}(y, x)$ . We prove it by induction over $t$ . + +The base case of $t = 0$ trivially holds by definition of the node marking. Assume the above result holds for $t = T$ , and consider $t = T + 1$ . Let $\chi_G^{(T + 1)}(u,v) = \chi_H^{(T + 1)}(x,y)$ . By definition of the aggregation scheme $\{\mathsf{agg}_{\mathbf{u}}^{\mathsf{L}},\mathsf{agg}_{\mathbf{v}}^{\mathsf{L}}\}$ , we have + +$$ +\begin{array}{l} \{\{\chi_ {G} ^ {(T)} (u, w): w \in \mathcal {N} _ {G} (v) \} \} = \{\{\chi_ {H} ^ {(T)} (x, z): z \in \mathcal {N} _ {H} (y) \} \}, \\ \{\{\chi_ {G} ^ {(T)} (w, v): w \in \mathcal {N} _ {G} (u) \} \} = \{\{\chi_ {H} ^ {(T)} (z, y): z \in \mathcal {N} _ {H} (x) \} \}. \\ \end{array} +$$ + +Using induction we obtain + +$$ +\begin{array}{l} \{\{\chi_ {G} ^ {(T)} (w, u): w \in \mathcal {N} _ {G} (v) \} \} = \{\{\chi_ {H} ^ {(T)} (z, x): z \in \mathcal {N} _ {H} (y) \} \}, \\ \{\{\chi_ {G} ^ {(T)} (v, w): w \in \mathcal {N} _ {G} (u) \} \} = \{\{\chi_ {H} ^ {(T)} (y, z): z \in \mathcal {N} _ {H} (x) \} \}. \\ \end{array} +$$ + +Therefore, $\chi_G^{(T + 1)}(v,u) = \chi_H^{(T + 1)}(y,x)$ , which finishes the induction step. + +Corollary E.11. 
Let $\chi^{\mathsf{LL}}$ , $\chi^{\mathsf{LP}}$ , and $\chi^{\mathsf{LLP}}$ be the stable color mappings of SWL algorithms $\mathsf{A}(\{\mathsf{agg}_{\mathsf{u}}^{\mathsf{L}}, \mathsf{agg}_{\mathsf{v}}^{\mathsf{L}}\}, \mathsf{Pool})$ , $\mathsf{A}(\{\mathsf{agg}_{\mathsf{u}}^{\mathsf{L}}, \mathsf{agg}_{\mathsf{vu}}^{\mathsf{P}}\}, \mathsf{Pool})$ , and $\mathsf{A}(\{\mathsf{agg}_{\mathsf{u}}^{\mathsf{L}}, \mathsf{agg}_{\mathsf{v}}^{\mathsf{L}}, \mathsf{agg}_{\mathsf{vu}}^{\mathsf{P}}\}, \mathsf{Pool})$ , respectively. Then, $\chi^{\mathsf{LL}} \simeq \chi^{\mathsf{LP}} \simeq \chi^{\mathsf{LLP}}$ . + +Proof. Similar to Corollary E.7, the proof is based on Remark E.2(c). We only prove $\chi^{\mathsf{LL}}\simeq \chi^{\mathsf{LP}}$ , and the proof of $\chi^{\mathsf{LL}}\simeq \chi^{\mathsf{LLP}}$ is exactly the same. + +We first prove $\chi^{\mathrm{LP}}\preceq \chi^{\mathrm{LL}}$ . Define an auxiliary color mapping $\tilde{\chi} = \mathcal{T}(\{\mathsf{agg}_{\mathsf{u}}^{\mathsf{L}},\mathsf{agg}_{\mathsf{v}}^{\mathsf{L}}\},\chi^{\mathrm{LP}})$ where $\mathcal{T}$ is defined in (3). It suffices to prove that $\chi^{\mathrm{LP}}\preceq \tilde{\chi}$ . Consider any vertices $u,v\in \mathcal{V}_G$ and $x,y\in \mathcal{V}_H$ satisfying $\chi_G^{\mathrm{LP}}(u,v) = \chi_H^{\mathrm{LP}}(x,y)$ . Since the mapping $\chi^{\mathrm{LP}}$ is already stable, we have $\chi_G^{\mathrm{LP}}(v,u) = \chi_H^{\mathrm{LP}}(y,x)$ and + +$$ +\{\{\chi_{G}^{\mathrm{LP}}(u, w): w \in \mathcal{N}_{G}(v)\}\} = \{\{\chi_{H}^{\mathrm{LP}}(x, z): z \in \mathcal{N}_{H}(y)\}\}. \tag{11} +$$ + +Since $\chi_G^{\mathsf{LP}}(v,u) = \chi_H^{\mathsf{LP}}(y,x)$ , we also have + +$$ +\{\{\chi_{G}^{\mathsf{LP}}(v, w): w \in \mathcal{N}_{G}(u)\}\} = \{\{\chi_{H}^{\mathsf{LP}}(y, z): z \in \mathcal{N}_{H}(x)\}\}.
+$$ + +This further implies + +$$ +\{\{\chi_{G}^{\mathrm{LP}}(w, v): w \in \mathcal{N}_{G}(u)\}\} = \{\{\chi_{H}^{\mathrm{LP}}(z, y): z \in \mathcal{N}_{H}(x)\}\}. \tag{12} +$$ + +Combining (11) and (12) we obtain $\tilde{\chi}_G(u,v) = \tilde{\chi}_H(x,y)$ , as desired. + +We next prove $\chi^{\mathrm{LL}}\preceq \chi^{\mathrm{LP}}$ . Define an auxiliary color mapping $\tilde{\chi} = \mathcal{T}(\{\mathsf{agg}_{\mathsf{u}}^{\mathsf{L}},\mathsf{agg}_{\mathsf{vu}}^{\mathsf{P}}\},\chi^{\mathrm{LL}})$ where $\mathcal{T}$ is defined in (3). It suffices to prove that $\chi^{\mathrm{LL}}\preceq \tilde{\chi}$ . This simply follows by the fact that the stable color mapping $\chi^{\mathrm{LL}}$ cannot be refined using $\mathsf{agg}_{\mathsf{u}}^{\mathsf{L}}$ (by definition) or using $\mathsf{agg}_{\mathsf{vu}}^{\mathsf{P}}$ (by Lemma E.10). + +# E.4. Analyzing the pooling paradigm + +This subsection discusses how the pooling paradigm can influence the expressive power of the SWL algorithm, which is related to the proofs of Theorem 4.6. In all proofs, we denote $G = (\mathcal{V}_G, \mathcal{E}_G)$ and $H = (\mathcal{V}_H, \mathcal{E}_H)$ as any graphs. + +Lemma E.12. Let $\mathcal{A}$ be defined in Definition 4.3 with $\{\mathrm{agg}_{\mathbf{u}}^{\mathrm{G}},\mathrm{agg}_{\mathbf{u}}^{\mathrm{L}}\} \cap \mathcal{A}\neq \emptyset$ . Then, $\mathsf{A}(\mathcal{A},\mathsf{VS})\preceq \mathsf{A}(\mathcal{A},\mathsf{SV})$ . + +Proof. Let $c^{\mathrm{VS}}(G)$ and $c^{\mathrm{SV}}(G)$ be the graph representations computed by algorithms $\mathsf{A}(\mathcal{A},\mathsf{VS})$ and $\mathsf{A}(\mathcal{A},\mathsf{SV})$ , respectively. Since both algorithms use the same aggregation scheme, we denote the stable color mapping as $\chi$ . We aim to prove that if $c^{\mathrm{SV}}(G) = c^{\mathrm{SV}}(H)$ , then $c^{\mathrm{VS}}(G) = c^{\mathrm{VS}}(H)$ .
+ +Let $c^{\mathsf{SV}}(G) = c^{\mathsf{SV}}(H)$ , then by definition of $\mathsf{SV}$ pooling + +$$ +\{\{r_{G}^{\mathsf{V}}(v): v \in \mathcal{V}_{G}\}\} = \{\{r_{H}^{\mathsf{V}}(y): y \in \mathcal{V}_{H}\}\}, +$$ + +where we denote $r_G^{\mathsf{V}}(v) = \{\{\chi_G(u,v):u\in \mathcal{V}_G\}\}$ . Consider any vertices $v\in \mathcal{V}_G$ and $y\in \mathcal{V}_H$ satisfying $r_G^{\mathsf{V}}(v) = r_H^{\mathsf{V}}(y)$ . Then, there exists a vertex $w\in \mathcal{V}_H$ such that $\chi_G(v,v) = \chi_H(w,y)$ . Due to the definition of node marking, we must have $w = y$ . This implies that $\chi_G(v,v) = \chi_H(y,y)$ . Now separately consider two cases: + +- If $\mathsf{agg}_{\mathsf{u}}^{\mathsf{G}} \in \mathcal{A}$ , then by definition of the stable color mapping we have $\{\{\chi_G(v, u) : u \in \mathcal{V}_G\}\} = \{\{\chi_H(y, x) : x \in \mathcal{V}_H\}\}$ ; +- If $\mathsf{agg}_{\mathsf{u}}^{\mathsf{L}} \in \mathcal{A}$ , then by Lemma E.6 we have $\{\{\chi_G(v, u): u \in \mathcal{V}_G\}\} = \{\{\chi_H(y, x): x \in \mathcal{V}_H\}\}$ . + +In both cases, we have $r_G^{\mathsf{S}}(v) = r_H^{\mathsf{S}}(y)$ where we denote $r_G^{\mathsf{S}}(v) = \{\{\chi_G(v,u):u\in \mathcal{V}_G\}\}$ . + +Therefore, we have proved that $r_G^{\mathsf{V}}(v) = r_H^{\mathsf{V}}(y)\Rightarrow r_G^{\mathsf{S}}(v) = r_H^{\mathsf{S}}(y)$ . This finally yields + +$$ +\{\{r_{G}^{\mathsf{S}}(v): v \in \mathcal{V}_{G}\}\} = \{\{r_{H}^{\mathsf{S}}(y): y \in \mathcal{V}_{H}\}\}, +$$ + +namely, $c^{\mathsf{VS}}(G) = c^{\mathsf{VS}}(H)$ , as desired. + +# E.5. Proof of theorems in Section 4.2 + +We are now ready to prove all the main results in Section 4.2, which we restate below. + +Theorem 4.4.
Under the notation of Definition 4.3, the following hold: + +- $\mathsf{A}(\mathcal{A}\cup \{\mathsf{agg}_{\mathsf{u}}^{\mathsf{G}}\},\mathsf{Pool})\preceq \mathsf{A}(\mathcal{A}\cup \{\mathsf{agg}_{\mathsf{u}}^{\mathsf{L}}\},\mathsf{Pool})$ and $\mathsf{A}(\mathcal{A}\cup \{\mathsf{agg}_{\mathsf{u}}^{\mathsf{L}}\},\mathsf{Pool})\simeq \mathsf{A}(\mathcal{A}\cup \{\mathsf{agg}_{\mathsf{u}}^{\mathsf{L}},\mathsf{agg}_{\mathsf{u}}^{\mathsf{G}}\},\mathsf{Pool})$ ; +- $\mathsf{A}(\mathcal{A}\cup \{\mathsf{agg}_{\mathsf{uu}}^{\mathsf{P}}\} ,\mathsf{Pool})\preceq \mathsf{A}(\mathcal{A}\cup \{\mathsf{agg}_{\mathsf{u}}^{\mathsf{G}}\} ,\mathsf{Pool})$ and $\mathsf{A}(\mathcal{A}\cup \{\mathsf{agg}_{\mathsf{u}}^{\mathsf{G}}\} ,\mathsf{Pool})\simeq \mathsf{A}(\mathcal{A}\cup \{\mathsf{agg}_{\mathsf{u}}^{\mathsf{G}},\mathsf{agg}_{\mathsf{uu}}^{\mathsf{P}}\} ,\mathsf{Pool})$ ; +- $\mathsf{A}(\{\mathsf{agg}_{\mathsf{u}}^{\mathsf{L}}, \mathsf{agg}_{\mathsf{vu}}^{\mathsf{P}}\}, \mathsf{Pool}) \simeq \mathsf{A}(\{\mathsf{agg}_{\mathsf{u}}^{\mathsf{L}}, \mathsf{agg}_{\mathsf{v}}^{\mathsf{L}}\}, \mathsf{Pool}) \simeq \mathsf{A}(\{\mathsf{agg}_{\mathsf{u}}^{\mathsf{L}}, \mathsf{agg}_{\mathsf{v}}^{\mathsf{L}}, \mathsf{agg}_{\mathsf{vu}}^{\mathsf{P}}\}, \mathsf{Pool})$ . + +Proof. Based on Remark E.2(b), we only need to focus on the stable color mappings of these algorithms. The proof readily follows by using Corollaries E.7, E.9 and E.11. $\square$ + +Proposition 4.5. Let $\mathcal{A}$ be any aggregation scheme defined in Definition 4.3.
Denote $\mathcal{A}^{\mathrm{u}\leftrightarrow \mathrm{v}}$ as the aggregation scheme obtained from $\mathcal{A}$ by exchanging the element $\mathsf{agg}_{\mathsf{uu}}^{\mathsf{P}}$ with $\mathsf{agg}_{\mathsf{vv}}^{\mathsf{P}}$ , exchanging $\mathsf{agg}_{\mathsf{u}}^{\mathsf{L}}$ with $\mathsf{agg}_{\mathsf{v}}^{\mathsf{L}}$ , and exchanging $\mathsf{agg}_{\mathsf{u}}^{\mathsf{G}}$ with $\mathsf{agg}_{\mathsf{v}}^{\mathsf{G}}$ . Then, $\mathsf{A}(\mathcal{A},\mathsf{VS}) \simeq \mathsf{A}(\mathcal{A}^{\mathrm{u}\leftrightarrow \mathrm{v}},\mathsf{SV})$ . + +Proof. The proof is almost trivial by symmetry. It is easy to see that for any vertices $u, v \in \mathcal{V}_G$ and $x, y \in \mathcal{V}_H$ , $\chi_G(u, v) = \chi_H(x, y) \iff \chi_G^{\mathrm{u} \leftrightarrow \mathrm{v}}(v, u) = \chi_H^{\mathrm{u} \leftrightarrow \mathrm{v}}(y, x)$ , where $\chi$ and $\chi^{\mathrm{u} \leftrightarrow \mathrm{v}}$ are the stable color mappings of the SWL algorithms $\mathsf{A}(\mathcal{A}, \mathsf{VS})$ and $\mathsf{A}(\mathcal{A}^{\mathrm{u}\leftrightarrow \mathrm{v}}, \mathsf{SV})$ , respectively. + +Theorem 4.6. Let $\mathcal{A}$ be defined in Definition 4.3 with $\mathrm{agg}_{\mathrm{u}}^{\mathrm{L}} \in \mathcal{A}$ . Then the following hold: + +- $\mathrm{A}(\mathcal{A},\mathrm{VS}) \preceq \mathrm{A}(\mathcal{A},\mathrm{SV})$ ; +- If $\{\mathrm{agg}_{\mathsf{v}}^{\mathsf{G}}, \mathrm{agg}_{\mathsf{v}}^{\mathsf{L}}\} \cap \mathcal{A} \neq \emptyset$ , then $\mathrm{A}(\mathcal{A}, \mathrm{VS}) \simeq \mathrm{A}(\mathcal{A}, \mathrm{SV})$ . + +Proof. The first bullet is a direct consequence of Lemma E.12.
The second bullet is a direct consequence of the first bullet and Proposition 4.5, since we have both $\mathsf{A}(\mathcal{A},\mathsf{VS}) \preceq \mathsf{A}(\mathcal{A},\mathsf{SV})$ and $\mathsf{A}(\mathcal{A},\mathsf{SV}) \simeq \mathsf{A}(\mathcal{A}^{\mathsf{u}\leftrightarrow \mathsf{v}},\mathsf{VS}) \preceq \mathsf{A}(\mathcal{A}^{\mathsf{u}\leftrightarrow \mathsf{v}},\mathsf{SV}) \simeq \mathsf{A}(\mathcal{A},\mathsf{VS})$ . + +Corollary 4.7. Let $\mathsf{A}(\mathcal{A},\mathsf{Pool})$ be any SWL algorithm defined in Definition 4.3 with at least one local aggregation, i.e. $\{\mathsf{agg}_{\mathsf{u}}^{\mathsf{L}},\mathsf{agg}_{\mathsf{v}}^{\mathsf{L}}\} \cap \mathcal{A}\neq \emptyset$ . Then, $\mathsf{A}(\mathcal{A},\mathsf{Pool})$ must be as expressive as one of the 6 SWL algorithms defined below: + +- (Vanilla SWL) SWL(VS) := A( $\{agg_u^L\}$ , VS), SWL(SV) := A( $\{agg_u^L\}$ , SV); +- (SWL with additional single-point aggregation) PSWL(VS) := A( $\{agg_u^L, agg_{vv}^P\}$ , VS), PSWL(SV) := A( $\{agg_u^L, agg_{vv}^P\}$ , SV); +- (SWL with additional global aggregation) GSWL := A( $\{agg_u^L, agg_v^G\}$ , VS); +- (Symmetrized SWL) SSWL := A( $\{agg_{\mathbf{u}}^{\mathrm{L}}, agg_{\mathbf{v}}^{\mathrm{L}}\}$ , VS). + +Moreover, we have + +$$ +\mathrm{SWL}(\mathsf{VS}) \preceq \mathrm{SWL}(\mathsf{SV}) \quad \text{and} \quad \mathrm{PSWL}(\mathsf{VS}) \preceq \mathrm{PSWL}(\mathsf{SV}), +$$ + +$$ +\mathrm{SWL}(\mathsf{VS}) \preceq \mathrm{PSWL}(\mathsf{VS}) \quad \text{and} \quad \mathrm{SWL}(\mathsf{SV}) \preceq \mathrm{PSWL}(\mathsf{SV}), +$$ + +$$ +\mathrm{PSWL}(\mathsf{SV}) \preceq \mathrm{GSWL} \preceq \mathrm{SSWL}. +$$ + +Proof. Due to Proposition 4.5, we can assume $\mathsf{agg}_{\mathsf{u}}^{\mathsf{L}} \in \mathcal{A}$ without loss of generality.
We separately consider several cases: + +- Case 1: $\{\mathsf{agg}_{\mathsf{v}}^{\mathsf{L}},\mathsf{agg}_{\mathsf{v}}^{\mathsf{G}},\mathsf{agg}_{\mathsf{vv}}^{\mathsf{P}},\mathsf{agg}_{\mathsf{vu}}^{\mathsf{P}}\} \cap \mathcal{A} = \emptyset$ . In this case, we have + +$$ +\mathsf{A}(\mathcal{A},\mathsf{Pool}) \preceq \mathsf{A}(\{\mathsf{agg}_{\mathsf{u}}^{\mathsf{L}},\mathsf{agg}_{\mathsf{u}}^{\mathsf{G}},\mathsf{agg}_{\mathsf{uu}}^{\mathsf{P}}\},\mathsf{Pool}) \simeq \mathsf{A}(\{\mathsf{agg}_{\mathsf{u}}^{\mathsf{L}},\mathsf{agg}_{\mathsf{u}}^{\mathsf{G}}\},\mathsf{Pool}) \simeq \mathsf{A}(\{\mathsf{agg}_{\mathsf{u}}^{\mathsf{L}}\},\mathsf{Pool}) +$$ + +by Theorem 4.4. On the other hand, clearly $\mathsf{A}(\{\mathsf{agg}_{\mathsf{u}}^{\mathsf{L}}\}, \mathsf{Pool}) \preceq \mathsf{A}(\mathcal{A}, \mathsf{Pool})$ . We thus have + +$\mathsf{A}(\mathcal{A},\mathsf{Pool})\simeq \mathsf{A}(\{\mathsf{agg}_{\mathsf{u}}^{\mathsf{L}}\} ,\mathsf{Pool})$ , namely, $\mathsf{A}(\mathcal{A},\mathsf{Pool})\simeq \mathsf{SWL}(\mathsf{VS})$ or $\mathsf{A}(\mathcal{A},\mathsf{Pool})\simeq \mathsf{SWL}(\mathsf{SV})$ . + +- Case 2: $\mathsf{agg}_{\mathsf{vv}}^{\mathsf{P}} \in \mathcal{A}$ and $\{\mathsf{agg}_{\mathsf{v}}^{\mathsf{L}}, \mathsf{agg}_{\mathsf{v}}^{\mathsf{G}}, \mathsf{agg}_{\mathsf{vu}}^{\mathsf{P}}\} \cap \mathcal{A} = \emptyset$ .
In this case, we have + +$$ +\mathsf{A}(\mathcal{A},\mathsf{Pool}) \preceq \mathsf{A}(\{\mathsf{agg}_{\mathsf{u}}^{\mathsf{L}},\mathsf{agg}_{\mathsf{u}}^{\mathsf{G}},\mathsf{agg}_{\mathsf{uu}}^{\mathsf{P}},\mathsf{agg}_{\mathsf{vv}}^{\mathsf{P}}\},\mathsf{Pool}) \simeq \mathsf{A}(\{\mathsf{agg}_{\mathsf{u}}^{\mathsf{L}},\mathsf{agg}_{\mathsf{u}}^{\mathsf{G}},\mathsf{agg}_{\mathsf{vv}}^{\mathsf{P}}\},\mathsf{Pool}) \simeq \mathsf{A}(\{\mathsf{agg}_{\mathsf{u}}^{\mathsf{L}},\mathsf{agg}_{\mathsf{vv}}^{\mathsf{P}}\},\mathsf{Pool}) +$$ + +by Theorem 4.4. On the other hand, clearly $\mathsf{A}(\{\mathsf{agg}_{\mathsf{u}}^{\mathsf{L}},\mathsf{agg}_{\mathsf{vv}}^{\mathsf{P}}\}, \mathsf{Pool}) \preceq \mathsf{A}(\mathcal{A},\mathsf{Pool})$ . We thus have + +$\mathsf{A}(\mathcal{A},\mathsf{Pool})\simeq \mathsf{A}(\{\mathsf{agg}_{\mathsf{u}}^{\mathsf{L}},\mathsf{agg}_{\mathsf{vv}}^{\mathsf{P}}\} ,\mathsf{Pool})$ , namely, $\mathsf{A}(\mathcal{A},\mathsf{Pool})\simeq \mathsf{PSWL}(\mathsf{VS})$ or $\mathsf{A}(\mathcal{A},\mathsf{Pool})\simeq \mathsf{PSWL}(\mathsf{SV})$ . + +- Case 3: $\mathsf{agg}_{\mathsf{v}}^{\mathsf{G}} \in \mathcal{A}$ and $\{\mathsf{agg}_{\mathsf{v}}^{\mathsf{L}}, \mathsf{agg}_{\mathsf{vu}}^{\mathsf{P}}\} \cap \mathcal{A} = \emptyset$ .
In this case, we have + +$$ +\mathsf{A}(\mathcal{A},\mathsf{Pool}) \preceq \mathsf{A}(\{\mathsf{agg}_{\mathsf{u}}^{\mathsf{L}},\mathsf{agg}_{\mathsf{u}}^{\mathsf{G}},\mathsf{agg}_{\mathsf{uu}}^{\mathsf{P}},\mathsf{agg}_{\mathsf{v}}^{\mathsf{G}},\mathsf{agg}_{\mathsf{vv}}^{\mathsf{P}}\},\mathsf{Pool}) \simeq \mathsf{A}(\{\mathsf{agg}_{\mathsf{u}}^{\mathsf{L}},\mathsf{agg}_{\mathsf{u}}^{\mathsf{G}},\mathsf{agg}_{\mathsf{v}}^{\mathsf{G}}\},\mathsf{Pool}) \simeq \mathsf{A}(\{\mathsf{agg}_{\mathsf{u}}^{\mathsf{L}},\mathsf{agg}_{\mathsf{v}}^{\mathsf{G}}\},\mathsf{Pool}) +$$ + +by Theorem 4.4. On the other hand, clearly $\mathsf{A}(\{\mathsf{agg}_{\mathsf{u}}^{\mathsf{L}},\mathsf{agg}_{\mathsf{v}}^{\mathsf{G}}\},\mathsf{Pool}) \preceq \mathsf{A}(\mathcal{A},\mathsf{Pool})$ . We thus have + +$\mathsf{A}(\mathcal{A}, \mathsf{Pool}) \simeq \mathsf{A}(\{\mathsf{agg}_{\mathsf{u}}^{\mathsf{L}}, \mathsf{agg}_{\mathsf{v}}^{\mathsf{G}}\}, \mathsf{Pool})$ . Moreover, by Theorem 4.6 we have + +$\mathsf{A}(\{\mathsf{agg}_{\mathsf{u}}^{\mathsf{L}}, \mathsf{agg}_{\mathsf{v}}^{\mathsf{G}}\}, \mathsf{VS}) \simeq \mathsf{A}(\{\mathsf{agg}_{\mathsf{u}}^{\mathsf{L}}, \mathsf{agg}_{\mathsf{v}}^{\mathsf{G}}\}, \mathsf{SV})$ . Therefore, $\mathsf{A}(\mathcal{A}, \mathsf{Pool}) \simeq \mathsf{GSWL}$ . + +- Case 4: $\mathsf{agg}_{\mathsf{v}}^{\mathsf{L}} \in \mathcal{A}$ or $\mathsf{agg}_{\mathsf{vu}}^{\mathsf{P}} \in \mathcal{A}$ .
In this case, a similar analysis yields + +$$ +\mathsf{A}(\mathcal{A},\mathsf{Pool}) \preceq \mathsf{A}(\{\mathsf{agg}_{\mathsf{u}}^{\mathsf{L}},\mathsf{agg}_{\mathsf{v}}^{\mathsf{L}},\mathsf{agg}_{\mathsf{vu}}^{\mathsf{P}}\},\mathsf{Pool}) \simeq \mathsf{A}(\{\mathsf{agg}_{\mathsf{u}}^{\mathsf{L}},\mathsf{agg}_{\mathsf{v}}^{\mathsf{L}}\},\mathsf{Pool}) \simeq \mathsf{A}(\{\mathsf{agg}_{\mathsf{u}}^{\mathsf{L}},\mathsf{agg}_{\mathsf{vu}}^{\mathsf{P}}\},\mathsf{Pool}) +$$ + +by Theorem 4.4. On the other hand, $\mathsf{A}(\{\mathsf{agg}_{\mathsf{u}}^{\mathsf{L}},\mathsf{agg}_{\mathsf{v}}^{\mathsf{L}}\},\mathsf{Pool}) \simeq \mathsf{A}(\{\mathsf{agg}_{\mathsf{u}}^{\mathsf{L}},\mathsf{agg}_{\mathsf{vu}}^{\mathsf{P}}\},\mathsf{Pool}) \preceq \mathsf{A}(\mathcal{A},\mathsf{Pool})$ . We thus have $\mathsf{A}(\mathcal{A},\mathsf{Pool})\simeq \mathsf{A}(\{\mathsf{agg}_{\mathsf{u}}^{\mathsf{L}},\mathsf{agg}_{\mathsf{v}}^{\mathsf{L}}\},\mathsf{Pool})$ . Moreover, by Theorem 4.6 we have + +$\mathsf{A}(\{\mathsf{agg}_{\mathsf{u}}^{\mathsf{L}}, \mathsf{agg}_{\mathsf{v}}^{\mathsf{L}}\}, \mathsf{VS}) \simeq \mathsf{A}(\{\mathsf{agg}_{\mathsf{u}}^{\mathsf{L}}, \mathsf{agg}_{\mathsf{v}}^{\mathsf{L}}\}, \mathsf{SV})$ . Therefore, $\mathsf{A}(\mathcal{A}, \mathsf{Pool}) \simeq \mathsf{SSWL}$ . + +Combining the four cases concludes the proof. + +# F. Discussions on Subgraph GNNs beyond the Framework of Definition 2.1 + +There have been several prior works that design subgraph GNNs beyond the aggregation schemes of Definition 2.1. In this section, we will investigate them and compare their expressive power with our framework. We focus on the WL algorithm corresponding to each subgraph GNN because it has the same expressive power as the GNN model in distinguishing non-isomorphic graphs (which can be easily proved following Appendix D).
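For concreteness, the node marking policy and the local aggregation $\mathsf{agg}_{\mathsf{u}}^{\mathsf{L}}$ that recur in the WL formulas of this section can be sketched in a few lines of Python. This is an illustrative toy implementation under our own simplifications (an injective relabeling of signatures plays the role of $\mathsf{hash}$); it is not the implementation of any cited work.

```python
def node_marking_init(vertices, u):
    # Node marking policy: chi^(0)(u, v) distinguishes exactly the
    # marked root u of the subgraph G^u from all other vertices.
    return {v: int(v == u) for v in vertices}

def agg_u_L_step(adj, colors):
    # One refinement step with the local aggregation agg_u^L:
    # chi^(t+1)(u, v) = hash(chi^(t)(u, v), {{chi^(t)(u, w): w in N(v)}}).
    # An injective relabeling of signatures stands in for hash.
    sig = {v: (colors[v], tuple(sorted(colors[w] for w in adj[v])))
           for v in adj}
    relabel = {s: i for i, s in enumerate(sorted(set(sig.values())))}
    return {v: relabel[sig[v]] for v in adj}

# Path graph 0-1-2-3, viewed inside the subgraph rooted at u = 0.
adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
colors = agg_u_L_step(adj, node_marking_init(adj, 0))
print(colors)  # {0: 3, 1: 2, 2: 1, 3: 0}
```

On this small path, a single step already assigns the four vertices distinct colors ordered by their distance to the marked root, which is exactly the kind of distance information that Lemma E.4 shows node-marked SWL colors encode in general.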
Throughout this section, we assume that the node marking policy is used since it achieves the strongest expressive power according to Proposition 4.2. + +Below, we discuss the following works: + +- GNN-AK (Zhao et al., 2022a); +- GNN-AK-ctx (Zhao et al., 2022a); +- DSS-WL (Bevilacqua et al., 2022); +- SUN (Frasca et al., 2022); +- ReIGN(2) (Frasca et al., 2022). + +GNN-AK (Zhao et al., 2022a). The GNN aggregation scheme can be written as + +$$ +\chi_{G}^{(t+1)}(u,v) = \begin{cases} \mathsf{hash}\big(\chi_{G}^{(t)}(u,v),\ \chi_{G}^{(t)}(v,v),\ \{\{\chi_{G}^{(t)}(u,w): w \in \mathcal{N}_{G}(v)\}\}\big) & \text{if } u \neq v, \\ \mathsf{hash}\big(\chi_{G}^{(t)}(v,v),\ \{\{\chi_{G}^{(t)}(u,w): w \in \mathcal{N}_{G}(v)\}\},\ \{\{\chi_{G}^{(t)}(u,w): w \in \mathcal{V}_{G}\}\}\big) & \text{if } u = v. \end{cases} +$$ + +GNN-AK uses the vertex-subgraph pooling. As can be seen, there is an additional global aggregation $\mathsf{agg}_{\mathsf{u}}^{\mathsf{G}}$ when $u = v$ , which differs from the case of $u \neq v$ . Therefore, it goes beyond the framework of Definition 3.1. + +Proposition F.1. GNN-AK is as powerful as PSWL(VS). + +Proof. Consider the following two SWL algorithms defined in Definition 4.3: (i) $\mathsf{A}(\{\mathsf{agg}_{\mathsf{u}}^{\mathsf{L}}, \mathsf{agg}_{\mathsf{vv}}^{\mathsf{P}}\}, \mathsf{VS})$ , and (ii) $\mathsf{A}(\{\mathsf{agg}_{\mathsf{u}}^{\mathsf{L}}, \mathsf{agg}_{\mathsf{vv}}^{\mathsf{P}}, \mathsf{agg}_{\mathsf{u}}^{\mathsf{G}}\}, \mathsf{VS})$ .
It is clear that the stable color mapping of GNN-AK is finer than that of $\mathsf{A}(\{\mathsf{agg}_{\mathsf{u}}^{\mathsf{L}}, \mathsf{agg}_{\mathsf{vv}}^{\mathsf{P}}\}, \mathsf{VS})$ , while the stable color mapping of $\mathsf{A}(\{\mathsf{agg}_{\mathsf{u}}^{\mathsf{L}}, \mathsf{agg}_{\mathsf{vv}}^{\mathsf{P}}, \mathsf{agg}_{\mathsf{u}}^{\mathsf{G}}\}, \mathsf{VS})$ is finer than that of GNN-AK. However, both algorithms are equivalent to PSWL(VS) as shown in Corollary 4.7. Therefore, by Remark E.2(b) GNN-AK is as powerful as PSWL(VS). + +GNN-AK-ctx (Zhao et al., 2022a). The GNN aggregation scheme can be written as + +$$ +\chi_{G}^{(t+1)}(u,v) = \begin{cases} \mathsf{hash}\big(\chi_{G}^{(t)}(u,v),\ \chi_{G}^{(t)}(v,v),\ \{\{\chi_{G}^{(t)}(u,w): w \in \mathcal{N}_{G}(v)\}\}\big) & \text{if } u \neq v, \\ \mathsf{hash}\big(\chi_{G}^{(t)}(v,v),\ \{\{\chi_{G}^{(t)}(u,w): w \in \mathcal{N}_{G}(v)\}\},\ \{\{\chi_{G}^{(t)}(u,w): w \in \mathcal{V}_{G}\}\},\ \{\{\chi_{G}^{(t)}(w,v): w \in \mathcal{V}_{G}\}\}\big) & \text{if } u = v. \end{cases} +$$ + +GNN-AK-ctx also uses the vertex-subgraph pooling. Compared with GNN-AK, GNN-AK-ctx further introduces the cross-graph global aggregation $\mathsf{agg}_{\mathsf{v}}^{\mathsf{G}}$ when $u = v$ (which they called the contextual encoding). + +Proposition F.2. GNN-AK-ctx is as powerful as GSWL. + +Proof. Similar to the above proof, by using the result that GSWL is as powerful as $\mathsf{A}(\{\mathsf{agg}_{\mathsf{u}}^{\mathsf{L}}, \mathsf{agg}_{\mathsf{vv}}^{\mathsf{P}}, \mathsf{agg}_{\mathsf{v}}^{\mathsf{G}}\}$ , VS) (Corollary 4.7), it is clear that GSWL is more powerful than GNN-AK-ctx. It remains to prove that GNN-AK-ctx is more powerful than GSWL. + +The proof is based on Remark E.2(c). Let $\chi$ be the stable color mapping of GNN-AK-ctx.
Define an auxiliary color mapping $\tilde{\chi} = \mathcal{T}(\{\mathsf{agg}_{\mathsf{u}}^{\mathsf{L}},\mathsf{agg}_{\mathsf{v}}^{\mathsf{G}}\},\chi)$, where $\mathcal{T}$ is defined in (3). It suffices to prove that $\chi \preceq \tilde{\chi}$.

Consider any vertices $u, v \in \mathcal{V}_G$ and $x, y \in \mathcal{V}_H$ satisfying $\chi_G(u, v) = \chi_H(x, y)$. Since the mapping $\chi$ is already stable, by definition of GNN-AK-ctx we have

$$
\chi_G(v, v) = \chi_H(y, y), \tag{13}
$$

$$
\{\{\chi_G(u, w): w \in \mathcal{N}_G(v)\}\} = \{\{\chi_H(x, z): z \in \mathcal{N}_H(y)\}\}. \tag{14}
$$

Again by definition of the stable color mapping, (13) implies that

$$
\{\{\chi_G(w, v): w \in \mathcal{V}_G\}\} = \{\{\chi_H(z, y): z \in \mathcal{V}_H\}\}. \tag{15}
$$

Combining (14) and (15), we obtain $\tilde{\chi}_G(u,v) = \tilde{\chi}_H(x,y)$, concluding the proof.

DSS-WL (Bevilacqua et al., 2022). The aggregation scheme of DSS-WL can be written as

$$
\begin{array}{l} \chi_G^{(t+1)}(u, v) = \mathsf{hash}\big(\chi_G^{(t)}(u, v), \\ \quad \{\{\chi_G^{(t)}(u, w): w \in \mathcal{N}_G(v)\}\}, \\ \quad \{\{\chi_G^{(t)}(w, v): w \in \mathcal{V}_G\}\}, \\ \quad \{\{\chi_G^{(t)}(w, w'): w \in \mathcal{V}_G, w' \in \mathcal{N}_G(v)\}\}\big). \end{array}
$$

Here, the last aggregation does not belong to Definition 3.1. DSS-WL also uses the vertex-subgraph pooling.

Proposition F.3. DSS-WL is as powerful as GSWL.

Proof. Clearly, DSS-WL is more powerful than the SWL algorithm $\mathsf{A}(\{\mathsf{agg}_{\mathsf{u}}^{\mathsf{L}}, \mathsf{agg}_{\mathsf{v}}^{\mathsf{G}}\}, \mathsf{VS})$, which is precisely GSWL. It remains to prove that GSWL is more powerful than DSS-WL.

Similar to Proposition F.2, the proof is based on Remark E.2(c).
Let $\chi$ be the stable color mapping of GSWL. For any vertices $u, v \in \mathcal{V}_G$ and $x, y \in \mathcal{V}_H$, if $\chi_G(u, v) = \chi_H(x, y)$, then we have

$$
\{\{\chi_G(u, w): w \in \mathcal{N}_G(v)\}\} = \{\{\chi_H(x, z): z \in \mathcal{N}_H(y)\}\}, \tag{16}
$$

$$
\{\{\chi_G(w, v): w \in \mathcal{V}_G\}\} = \{\{\chi_H(z, y): z \in \mathcal{V}_H\}\}. \tag{17}
$$

Plugging (17) into (16) yields

$$
\{\{\{\{\chi_G(w, w'): w \in \mathcal{V}_G\}\}: w' \in \mathcal{N}_G(v)\}\} = \{\{\{\{\chi_H(z, z'): z \in \mathcal{V}_H\}\}: z' \in \mathcal{N}_H(y)\}\}.
$$

Therefore,

$$
\{\{\chi_G(w, w'): w \in \mathcal{V}_G, w' \in \mathcal{N}_G(v)\}\} = \{\{\chi_H(z, z'): z \in \mathcal{V}_H, z' \in \mathcal{N}_H(y)\}\}. \tag{18}
$$

Combining (16), (17), and (18), we conclude that DSS-WL cannot further refine the stable color mapping $\chi$, which concludes the proof.

SUN (Frasca et al., 2022). The WL aggregation scheme can be written as

$$
\begin{array}{l} \chi_G^{(t+1)}(u, v) = \mathsf{hash}\big(\chi_G^{(t)}(u, v), \chi_G^{(t)}(u, u), \chi_G^{(t)}(v, v), \\ \quad \{\{\chi_G^{(t)}(u, w): w \in \mathcal{N}_G(v)\}\}, \\ \quad \{\{\chi_G^{(t)}(u, w): w \in \mathcal{V}_G\}\}, \\ \quad \{\{\chi_G^{(t)}(w, v): w \in \mathcal{V}_G\}\}, \\ \quad \{\{\chi_G^{(t)}(w, w'): w \in \mathcal{V}_G, w' \in \mathcal{N}_G(v)\}\}\big). \end{array}
$$

We note that the formulation of Frasca et al. (2022) slightly differs from the above WL formula, in that SUN introduces separate model parameters for the cases of $u = v$ and $u \neq v$.
However, when the node marking policy is used, introducing two sets of parameters for the two cases does not theoretically increase the expressivity (although it may benefit practical performance in real-world tasks).

Proposition F.4. SUN is as powerful as GSWL.

Proof. The proof is almost the same as the above one, by using the result that GSWL is as powerful as $\mathsf{A}(\{\mathsf{agg}_{\mathsf{u}}^{\mathsf{L}},\mathsf{agg}_{\mathsf{u}}^{\mathsf{G}},\mathsf{agg}_{\mathsf{v}}^{\mathsf{G}},\mathsf{agg}_{\mathsf{uu}}^{\mathsf{P}},\mathsf{agg}_{\mathsf{vv}}^{\mathsf{P}}\}, \mathsf{VS})$ (Corollary 4.7). We omit the details here.

ReIGN(2) (Frasca et al., 2022). This GNN architecture is motivated by 2-IGN (Maron et al., 2019b;c) and extends each basic equivariant linear operator into various types of local/global aggregations. Each atomic aggregation operation in ReIGN(2) can be symbolized as $\mathsf{agg}^{\mathsf{op}_1,\mathsf{op}_2}$, where $\mathsf{op}_1$ and $\mathsf{op}_2$ can take one of the following symbols: Pu, Pv, G, Lu, Lv, and D. The semantics of $\mathsf{agg}^{\mathsf{op}_1,\mathsf{op}_2}$ is defined as follows:

$$
\mathsf{agg}^{\mathsf{op}_1,\mathsf{op}_2}(u, v, G, \chi) = \{\{\chi(w, w'): w \in \bigcirc, w' \in \bigcirc\}\},
$$

where the first/second $\bigcirc$ is filled by one of the following expressions depending on $\mathsf{op}_1$/$\mathsf{op}_2$, respectively:

- For symbol Pu: $\bigcirc$ is filled by $\{u\}$;
- For symbol Pv: $\bigcirc$ is filled by $\{v\}$;
- For symbol Lu: $\bigcirc$ is filled by $\mathcal{N}_G(u)$;
- For symbol Lv: $\bigcirc$ is filled by $\mathcal{N}_G(v)$;
- For symbol G: $\bigcirc$ is filled by $\mathcal{V}_G$;
- For symbol D: $\bigcirc$ is filled by $\{w\}$. This symbol corresponds to diagonal aggregation and can only be used by $\mathsf{op}_2$.
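To make the semantics above concrete, the following sketch evaluates one atomic aggregation $\mathsf{agg}^{\mathsf{op}_1,\mathsf{op}_2}$ on a color mapping stored as a dictionary over vertex pairs. It is an illustration of our own (the helper names `agg` and `index_set` are not from the paper), returning the aggregated multiset as a counter.

```python
from collections import Counter

def agg(op1, op2, u, v, G, chi):
    """Sketch of the atomic ReIGN(2)-style aggregation agg^{op1,op2}:
    the multiset of colors chi(w, w') with w, w' ranging over the index
    sets selected by op1 and op2.  G = (vertices, adjacency dict)."""
    vertices, adj = G

    def index_set(op, w=None):
        # D ties the second index to the first (w' = w); per the text,
        # it may only appear as op2.
        return {"Pu": [u], "Pv": [v], "Lu": adj[u], "Lv": adj[v],
                "G": vertices, "D": [w]}[op]

    out = Counter()
    for w in index_set(op1):
        for w2 in index_set(op2, w=w):
            out[chi[(w, w2)]] += 1
    return out

# Example on a triangle with node-marking colors chi(a, b) = (a == b):
G = ([0, 1, 2], {0: [1, 2], 1: [0, 2], 2: [0, 1]})
chi = {(a, b): a == b for a in G[0] for b in G[0]}
print(agg("G", "D", 0, 1, G, chi))  # diagonal colors: Counter({True: 3})
```

For instance, `agg("Pu", "Lv", 0, 1, G, chi)` reproduces the familiar local aggregation $\{\{\chi(u, w'): w' \in \mathcal{N}_G(v)\}\}$ as a special case.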
Based on the choice of $\mathsf{op}_1$ and $\mathsf{op}_2$, there are a total of $5 \times 6 - 2 = 28$ nonequivalent aggregation operations. Note that $\mathsf{agg}^{\mathsf{Pu},\mathsf{D}}$ is equivalent to $\mathsf{agg}^{\mathsf{Pu},\mathsf{Pu}}$, and $\mathsf{agg}^{\mathsf{Pv},\mathsf{D}}$ is equivalent to $\mathsf{agg}^{\mathsf{Pv},\mathsf{Pv}}$. As a result, ReIGN(2) incorporates all these 28 aggregation operations into the WL iteration. Similar to SUN, ReIGN(2) also introduces separate model parameters for the cases of $u = v$ and $u \neq v$. It can be calculated that the total number of linear equivariant transformations is $28 + 11 = 39$.

Proposition F.5. ReIGN(2) is as powerful as SSWL.

Proof. First, it is obvious that ReIGN(2) is more powerful than SSWL. Therefore, it remains to prove that SSWL is more powerful than ReIGN(2). Similar to the previous propositions, the proof is based on Remark E.2(c). Let $\chi$ be the stable color mapping of SSWL. Consider any vertices $u, v \in \mathcal{V}_G$ and $x, y \in \mathcal{V}_H$ satisfying $\chi_G(u, v) = \chi_H(x, y)$.
Since SSWL is as powerful as $\mathsf{A}(\{\mathsf{agg}_{\mathsf{u}}^{\mathsf{L}}, \mathsf{agg}_{\mathsf{v}}^{\mathsf{L}}, \mathsf{agg}_{\mathsf{u}}^{\mathsf{G}}, \mathsf{agg}_{\mathsf{v}}^{\mathsf{G}}, \mathsf{agg}_{\mathsf{uu}}^{\mathsf{P}}, \mathsf{agg}_{\mathsf{vv}}^{\mathsf{P}}, \mathsf{agg}_{\mathsf{vu}}^{\mathsf{P}}\}, \mathsf{VS})$, we have

$$
\chi_G(u, u) = \chi_H(x, x),
$$

$$
\chi_G(v, v) = \chi_H(y, y),
$$

$$
\chi_G(v, u) = \chi_H(y, x),
$$

$$
\{\{\chi_G(u, w): w \in \mathcal{V}_G\}\} = \{\{\chi_H(x, z): z \in \mathcal{V}_H\}\},
$$

$$
\{\{\chi_G(w, v): w \in \mathcal{V}_G\}\} = \{\{\chi_H(z, y): z \in \mathcal{V}_H\}\},
$$

$$
\{\{\chi_G(u, w): w \in \mathcal{N}_G(v)\}\} = \{\{\chi_H(x, z): z \in \mathcal{N}_H(y)\}\},
$$

$$
\{\{\chi_G(w, v): w \in \mathcal{N}_G(u)\}\} = \{\{\chi_H(z, y): z \in \mathcal{N}_H(x)\}\}.
$$

Using a similar proof technique as in the previous propositions, we can show that $\chi$ cannot be refined by any $\mathsf{agg}^{\mathsf{op}_1,\mathsf{op}_2}$. We list one representative example below.

The diagonal aggregation $\mathsf{agg}^{\mathsf{G},\mathsf{D}}$. Combining the fourth equation above with the stability of $\chi$ under the point aggregation $\mathsf{agg}_{\mathsf{vv}}^{\mathsf{P}}$, we obtain

$$
\{\{\chi_G(w, w): w \in \mathcal{V}_G\}\} = \{\{\chi_H(z, z): z \in \mathcal{V}_H\}\},
$$

as desired.

# G. Proof of Theorems in Section 5

This section aims to prove Theorem 5.2. Throughout this section, let $G = (\mathcal{V}_G,\mathcal{E}_G)$ and $H = (\mathcal{V}_H,\mathcal{E}_H)$ be arbitrary graphs. Denote by $\chi^{\mathrm{PSWL}},\chi^{\mathrm{SSWL}},\chi^{\mathrm{FWL}},\chi^{\mathrm{LFWL}}$, and $\chi^{\mathrm{SLFWL}}$ the stable color mappings of PSWL(VS), SSWL, FWL(2), LFWL(2), and SLFWL(2), respectively.
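As a concrete illustration of the stable color mappings that these proofs manipulate, the sketch below iterates a PSWL(VS)-style refinement (local aggregation $\mathsf{agg}_{\mathsf{u}}^{\mathsf{L}}$ plus point aggregation $\mathsf{agg}_{\mathsf{vv}}^{\mathsf{P}}$, with node-marking initial colors) until the partition of pair colors stops refining. This is a minimal sketch under our own naming (`stable_color_mapping`), not the paper's implementation.

```python
from itertools import product

def stable_color_mapping(V, adj):
    """Refine colors of vertex pairs (u, v) with agg_u^L and agg_vv^P
    until the number of color classes stops growing (a fixed point)."""
    # Node marking: the initial color of (u, v) records whether u == v.
    color = {(u, v): (u == v) for u, v in product(V, V)}
    while True:
        new = {}
        for u, v in product(V, V):
            # agg_u^L: multiset of colors chi(u, w) over neighbors w of v.
            local = tuple(sorted(color[(u, w)] for w in adj[v]))
            # agg_vv^P: the single color chi(v, v).
            new[(u, v)] = (color[(u, v)], local, color[(v, v)])
        # Re-index colors so they stay compact and hashable.
        palette = {c: i for i, c in enumerate(sorted(set(new.values()), key=repr))}
        new = {p: palette[c] for p, c in new.items()}
        if len(set(new.values())) == len(set(color.values())):
            return new  # stable: the partition did not refine further
        color = new

# 4-cycle: the stable pair colors are determined by graph distance here,
# so three classes appear (distance 0, 1, and 2).
V = [0, 1, 2, 3]
adj = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [0, 2]}
chi = stable_color_mapping(V, adj)
```

The standard argument that equal class counts imply a stable partition (refinement never merges classes) justifies the stopping rule.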
We begin with the following simple fact, which holds by definition of the isomorphism type.

Fact G.1. Let $\chi \in \{\chi^{\mathrm{FWL}},\chi^{\mathrm{LFWL}},\chi^{\mathrm{SLFWL}}\}$. For any vertices $u,v\in \mathcal{V}_G$ and $x,y\in \mathcal{V}_H$, if $\chi_G(u,v) = \chi_H(x,y)$, then:

- $u = v \iff x = y$;
- $\{u, v\} \in \mathcal{E}_G \iff \{x, y\} \in \mathcal{E}_H$.

Lemma G.2. The following relations hold:

- $\chi^{\mathrm{LFWL}} \preceq \chi^{\mathrm{PSWL}}$;
- $\chi^{\mathrm{SLFWL}} \preceq \chi^{\mathrm{SSWL}}$;
- $\chi^{\mathrm{SLFWL}} \preceq \chi^{\mathrm{LFWL}}$;
- $\chi^{\mathrm{FWL}} \preceq \chi^{\mathrm{SLFWL}}$.

Proof. Note that all the FWL-type algorithms considered in Lemma G.2 use the isomorphism type as initial colors, which is finer than node marking in SWL algorithms. In this case, it is straightforward to see that Remark E.2(c) still applies. Namely, it suffices to prove that the stable color mapping of each stronger algorithm cannot be refined using the aggregation scheme of the weaker algorithm.

We first prove $\chi^{\mathrm{LFWL}}\preceq \mathcal{T}(\{\mathsf{agg}_{\mathsf{u}}^{\mathsf{L}},\mathsf{agg}_{\mathsf{vv}}^{\mathsf{P}}\},\chi^{\mathrm{LFWL}})\coloneqq \tilde{\chi}$, where $\mathcal{T}$ is defined in (3). Consider any vertices $u,v\in \mathcal{V}_G$ and $x,y\in \mathcal{V}_H$ satisfying $\chi_G^{\mathrm{LFWL}}(u,v) = \chi_H^{\mathrm{LFWL}}(x,y)$. Then by definition,

$$
\{\{(\chi_G^{\mathrm{LFWL}}(u, w), \chi_G^{\mathrm{LFWL}}(w, v)): w \in \mathcal{N}_G^1(v)\}\} = \{\{(\chi_H^{\mathrm{LFWL}}(x, z), \chi_H^{\mathrm{LFWL}}(z, y)): z \in \mathcal{N}_H^1(y)\}\}.
$$

It must be the case that

$$
\left(\chi_G^{\mathrm{LFWL}}(u, v), \chi_G^{\mathrm{LFWL}}(v, v)\right) = \left(\chi_H^{\mathrm{LFWL}}(x, y), \chi_H^{\mathrm{LFWL}}(y, y)\right),
$$

$$
\{\{(\chi_G^{\mathrm{LFWL}}(u, w), \chi_G^{\mathrm{LFWL}}(w, v)): w \in \mathcal{N}_G(v)\}\} = \{\{(\chi_H^{\mathrm{LFWL}}(x, z), \chi_H^{\mathrm{LFWL}}(z, y)): z \in \mathcal{N}_H(y)\}\},
$$

due to Fact G.1. Therefore,

$$
\chi_G^{\mathrm{LFWL}}(v, v) = \chi_H^{\mathrm{LFWL}}(y, y)
$$

and

$$
\{\{\chi_G^{\mathrm{LFWL}}(u, w): w \in \mathcal{N}_G(v)\}\} = \{\{\chi_H^{\mathrm{LFWL}}(x, z): z \in \mathcal{N}_H(y)\}\}.
$$

Namely, $\tilde{\chi}_G(u,v) = \tilde{\chi}_H(x,y)$. This proves $\chi^{\mathrm{LFWL}}\preceq \chi^{\mathrm{PSWL}}$.

We next prove $\chi^{\mathrm{SLFWL}} \preceq \mathcal{T}(\{\mathsf{agg}_{\mathsf{u}}^{\mathsf{L}}, \mathsf{agg}_{\mathsf{v}}^{\mathsf{L}}\}, \chi^{\mathrm{SLFWL}}) \coloneqq \tilde{\chi}$. Consider any vertices $u, v \in \mathcal{V}_G$ and $x, y \in \mathcal{V}_H$ satisfying $\chi_G^{\mathrm{SLFWL}}(u, v) = \chi_H^{\mathrm{SLFWL}}(x, y)$. Then by definition,

$$
\begin{array}{l} \{\{(\chi_G^{\mathrm{SLFWL}}(u, w), \chi_G^{\mathrm{SLFWL}}(w, v)): w \in \mathcal{N}_G^1(u) \cup \mathcal{N}_G^1(v)\}\} \\ = \{\{(\chi_H^{\mathrm{SLFWL}}(x, z), \chi_H^{\mathrm{SLFWL}}(z, y)): z \in \mathcal{N}_H^1(x) \cup \mathcal{N}_H^1(y)\}\}.
\\ \end{array}
$$

Using Fact G.1, we have

$$
\begin{array}{l} \{\{(\chi_G^{\mathrm{SLFWL}}(u, w), \chi_G^{\mathrm{SLFWL}}(w, v)): w \in \mathcal{N}_G(u)\}\} = \{\{(\chi_H^{\mathrm{SLFWL}}(x, z), \chi_H^{\mathrm{SLFWL}}(z, y)): z \in \mathcal{N}_H(x)\}\}, \\ \{\{(\chi_G^{\mathrm{SLFWL}}(u, w), \chi_G^{\mathrm{SLFWL}}(w, v)): w \in \mathcal{N}_G(v)\}\} = \{\{(\chi_H^{\mathrm{SLFWL}}(x, z), \chi_H^{\mathrm{SLFWL}}(z, y)): z \in \mathcal{N}_H(y)\}\}. \end{array}
$$

Therefore,

$$
\begin{array}{l} \{\{\chi_G^{\mathrm{SLFWL}}(u, w): w \in \mathcal{N}_G(v)\}\} = \{\{\chi_H^{\mathrm{SLFWL}}(x, z): z \in \mathcal{N}_H(y)\}\}, \\ \{\{\chi_G^{\mathrm{SLFWL}}(w, v): w \in \mathcal{N}_G(u)\}\} = \{\{\chi_H^{\mathrm{SLFWL}}(z, y): z \in \mathcal{N}_H(x)\}\}. \end{array}
$$

Namely, $\tilde{\chi}_G(u,v) = \tilde{\chi}_H(x,y)$. This proves $\chi^{\mathrm{SLFWL}}\preceq \chi^{\mathrm{SSWL}}$.

The third and fourth bullets follow exactly the same procedure, so we omit the proofs for brevity.

Lemma G.3. Let $\chi \in \{\chi^{\mathrm{FWL}},\chi^{\mathrm{LFWL}},\chi^{\mathrm{SLFWL}}\}$. If

$$
\{\{\chi_G(u, v): u, v \in \mathcal{V}_G\}\} = \{\{\chi_H(x, y): x, y \in \mathcal{V}_H\}\},
$$

then

$$
\{\{\{\{\chi_G(u, v): v \in \mathcal{V}_G\}\}: u \in \mathcal{V}_G\}\} = \{\{\{\{\chi_H(x, y): y \in \mathcal{V}_H\}\}: x \in \mathcal{V}_H\}\}.
$$

Proof. Based on the assumption of Lemma G.3 and Fact G.1, we have

$$
\{\{\chi_G(u, u): u \in \mathcal{V}_G\}\} = \{\{\chi_H(x, x): x \in \mathcal{V}_H\}\}.
$$

Therefore, it suffices to prove that for any vertices $u \in \mathcal{V}_G$ and $x \in \mathcal{V}_H$, if $\chi_G(u, u) = \chi_H(x, x)$, then

$$
\{\{\chi_G(u, v): v \in \mathcal{V}_G\}\} = \{\{\chi_H(x, y): y \in \mathcal{V}_H\}\}. \tag{19}
$$

Based on Lemma G.2, we have $\chi \preceq \chi^{\mathrm{PSWL}}$. Note that $\chi^{\mathrm{PSWL}}$ incorporates the aggregation $\mathsf{agg}_{\mathsf{u}}^{\mathsf{L}}$; therefore, Lemma E.6 applies, and we can follow the same proof technique as in Lemma E.6 to obtain (19).

We are now ready to prove Theorem 5.2, which we restate below:

Theorem 5.2. The following relations hold:

- LFWL(2) $\preceq$ SLFWL(2) $\preceq$ FWL(2);
- PSWL(VS) $\preceq$ LFWL(2);
- SSWL $\preceq$ SLFWL(2).

Proof. The first bullet readily follows from Lemma G.2 and Remark E.2(b). For the other two bullets, although these algorithms have different pooling paradigms, we have proved that the pooling paradigm of FWL-type algorithms is as powerful as the pooling paradigm VS (Lemma G.3). Therefore, the results hold by Lemma G.2. $\square$

# H. Proof of Theorems in Section 6

This section proves the equivalence between SWL/FWL-type algorithms and pebbling games. For ease of presentation, we first define several notations.

Let $G = (\mathcal{V}_G, \mathcal{E}_G)$ and $H = (\mathcal{V}_H, \mathcal{E}_H)$ be two graphs, and let $u, v$ be two types of pebbles. For each pebble type $u$, the placement information can be represented by a vertex pair $(u_G, u_H)$, where $u_G \in \mathcal{V}_G$ and $u_H \in \mathcal{V}_H$ are the vertices holding a copy of pebble $u$ in the respective graphs. With a slight abuse of notation, we also use the symbol $u$ to represent the placement information of pebble $u$, i.e., $u = (u_G, u_H)$.

We next define a game modified from Section 6, called the $L$-round $(u,v)$-pebbling game.

Definition H.1.
Given an aggregation scheme $\mathcal{A}$ and an integer $L\in \mathbb{N}$, define the $L$-round $(u,v)$-pebbling game $\mathsf{G}^{\mathcal{A},L}(u;v)$ as follows. Initially, pebbles $u$ and $v$ are already placed on graphs $G$ and $H$ according to the specified locations $u = (u_G,u_H)$, $v = (v_G,v_H)$. The game has $L$ rounds. In each round, Spoiler and Duplicator can change the positions of $u$ and $v$ according to the game rules of $\mathcal{A}$ defined in Section 6. Spoiler wins if, after some round $0\leq l\leq L$, the isomorphism type of the vertex pair $(u_G,v_G)$ in graph $G$ differs from the isomorphism type of the vertex pair $(u_H,v_H)$ in graph $H$. Duplicator wins the game if Spoiler does not win after playing $L$ rounds.

We are ready to establish the connection between SWL and the $(u,v)$-pebbling game. Below, denote by $\chi^{\mathcal{A},(t)}$ the color mapping of the SWL algorithm $\mathsf{A}(\mathcal{A},\mathsf{Pool})$ at iteration $t$.

Lemma H.2. Let $l \in \mathbb{N}$ be any integer. For any vertices $u_G, v_G \in \mathcal{V}_G$ and $u_H, v_H \in \mathcal{V}_H$, if $\chi_G^{\mathcal{A},(l)}(u_G, v_G) \neq \chi_H^{\mathcal{A},(l)}(u_H, v_H)$, then Spoiler can win the $l$-round $(u, v)$-pebbling game $\mathsf{G}^{\mathcal{A},l}(u; v)$ with $u = (u_G, u_H)$, $v = (v_G, v_H)$.

Proof. The proof is based on induction over $l$. First consider the base case of $l = 0$. If $\chi_G^{\mathcal{A},(0)}(u_G, v_G) \neq \chi_H^{\mathcal{A},(0)}(u_H, v_H)$, by definition of the node marking policy we have either ($u_G = v_G$ and $u_H \neq v_H$) or ($u_G \neq v_G$ and $u_H = v_H$). Clearly, $(u_G, v_G)$ and $(u_H, v_H)$ have different isomorphism types, and thus Spoiler wins.

Now assume that Lemma H.2 holds for $l \leq L$, and consider $l = L + 1$. Let $\chi_G^{\mathcal{A},(L+1)}(u_G, v_G) \neq \chi_H^{\mathcal{A},(L+1)}(u_H, v_H)$.
If $\chi_G^{\mathcal{A},(L)}(u_G, v_G) \neq \chi_H^{\mathcal{A},(L)}(u_H, v_H)$, then by induction Spoiler wins. Otherwise, there exists an aggregation operation $\mathsf{agg} \in \mathcal{A}$ such that

$$
\mathsf{agg}(u_G, v_G, G, \chi_G^{\mathcal{A},(L)}) \neq \mathsf{agg}(u_H, v_H, H, \chi_H^{\mathcal{A},(L)}).
$$

We separately consider each type of atomic aggregation operation $\mathsf{agg}$:

- Single-point aggregation $\mathsf{agg}_{\mathsf{vu}}^{\mathsf{P}}$. In this case, we have $\chi_G^{\mathcal{A},(L)}(v_G,u_G)\neq \chi_H^{\mathcal{A},(L)}(v_H,u_H)$. In the first round, Spoiler can choose to swap pebbles $u$ and $v$. The remaining game will then be equivalent to $\mathsf{G}^{\mathcal{A},L}(u,v)$ with $u = (v_G,v_H)$, $v = (u_G,u_H)$. By induction, Spoiler wins the game.
- Single-point aggregation $\mathsf{agg}_{\mathsf{uu}}^{\mathsf{P}}$. In this case, we have $\chi_G^{\mathcal{A},(L)}(u_G,u_G)\neq \chi_H^{\mathcal{A},(L)}(u_H,u_H)$. In the first round, Spoiler can choose to move pebble $v$ to the position of $u$. The remaining game will then be equivalent to $\mathsf{G}^{\mathcal{A},L}(u,v)$ with $u = (u_G,u_H)$, $v = (u_G,u_H)$. By induction, Spoiler wins the game.
- Global aggregation $\mathsf{agg}_{\mathsf{u}}^{\mathsf{G}}$. In this case, we have

$$
\{\{\chi_G^{\mathcal{A},(L)}(u_G, w_G): w_G \in \mathcal{V}_G\}\} \neq \{\{\chi_H^{\mathcal{A},(L)}(u_H, w_H): w_H \in \mathcal{V}_H\}\}.
$$

Therefore, there exists a color $c$ such that $|\mathcal{C}_G(u_G,c)|\neq |\mathcal{C}_H(u_H,c)|$, where we denote

$$
\mathcal{C}_G(u_G, c) = \{w_G \in \mathcal{V}_G: \chi_G^{\mathcal{A},(L)}(u_G, w_G) = c\}.
$$

If $|\mathcal{C}_G(u_G, c)| > |\mathcal{C}_H(u_H, c)|$, Spoiler can select the vertex subset $\mathcal{S}^{\mathsf{S}} = \mathcal{C}_G(u_G, c) \subset \mathcal{V}_G$.
It can be seen that no matter how Duplicator responds with $\mathcal{S}^{\mathsf{D}} \subset \mathcal{V}_H$, there exists $w_H \in \mathcal{S}^{\mathsf{D}}$ such that $\chi_H^{\mathcal{A},(L)}(u_H, w_H) \neq c$. Spoiler thus selects this vertex $x^{\mathsf{S}} = w_H$, and no matter how Duplicator responds with $x^{\mathsf{D}} = w_G \in \mathcal{S}^{\mathsf{S}}$, we have $\chi_G^{\mathcal{A},(L)}(u_G, w_G) \neq \chi_H^{\mathcal{A},(L)}(u_H, w_H)$. The remaining game will then be equivalent to $\mathsf{G}^{\mathcal{A},L}(u, v)$ with $u = (u_G, u_H)$, $v = (w_G, w_H)$. By induction, Spoiler wins the game.

If $|\mathcal{C}_G(u_G, c)| < |\mathcal{C}_H(u_H, c)|$, Spoiler can select the vertex subset $\mathcal{S}^{\mathsf{S}} = \mathcal{C}_H(u_H, c) \subset \mathcal{V}_H$, and the conclusion is the same.

- Local aggregation $\mathsf{agg}_{\mathsf{u}}^{\mathsf{L}}$. In this case, we have

$$
\{\{\chi_G^{\mathcal{A},(L)}(u_G, w_G): w_G \in \mathcal{N}_G(v_G)\}\} \neq \{\{\chi_H^{\mathcal{A},(L)}(u_H, w_H): w_H \in \mathcal{N}_H(v_H)\}\}.
$$

Therefore, there exists a color $c$ such that $|\mathcal{C}_G(u_G,v_G,c)|\neq |\mathcal{C}_H(u_H,v_H,c)|$, where we denote

$$
\mathcal{C}_G(u_G, v_G, c) = \{w_G \in \mathcal{N}_G(v_G): \chi_G^{\mathcal{A},(L)}(u_G, w_G) = c\}.
$$

If $|\mathcal{C}_G(u_G, v_G, c)| > |\mathcal{C}_H(u_H, v_H, c)|$, Spoiler can select the vertex subset $\mathcal{S}^{\mathsf{S}} = \mathcal{C}_G(u_G, v_G, c) \subset \mathcal{N}_G(v_G)$. If $|\mathcal{C}_G(u_G, v_G, c)| < |\mathcal{C}_H(u_H, v_H, c)|$, Spoiler can select the vertex subset $\mathcal{S}^{\mathsf{S}} = \mathcal{C}_H(u_H, v_H, c) \subset \mathcal{N}_H(v_H)$. Using a similar analysis as in the above case, we can conclude that Spoiler wins the game.
The cases of $\mathsf{agg}_{\mathsf{v}}^{\mathsf{L}}, \mathsf{agg}_{\mathsf{v}}^{\mathsf{G}}, \mathsf{agg}_{\mathsf{vv}}^{\mathsf{P}}$ are symmetric to those of $\mathsf{agg}_{\mathsf{u}}^{\mathsf{L}}, \mathsf{agg}_{\mathsf{u}}^{\mathsf{G}}, \mathsf{agg}_{\mathsf{uu}}^{\mathsf{P}}$, so we omit them for clarity. This concludes the induction step.

Lemma H.3. Let $l \in \mathbb{N}$ be any integer, and assume $\{\mathsf{agg}_{\mathsf{u}}^{\mathsf{L}}, \mathsf{agg}_{\mathsf{v}}^{\mathsf{L}}\} \cap \mathcal{A} \neq \emptyset$. For any vertices $u_G, v_G \in \mathcal{V}_G$ and $u_H, v_H \in \mathcal{V}_H$, if $\chi_G^{\mathcal{A},(l+1)}(u_G, v_G) = \chi_H^{\mathcal{A},(l+1)}(u_H, v_H)$, then Duplicator can win the $l$-round $(u, v)$-pebbling game $\mathsf{G}^{\mathcal{A},l}(u; v)$ with $u = (u_G, u_H)$, $v = (v_G, v_H)$.

Proof. The proof is based on induction over $l$. First, consider the base case of $l = 0$. Let $\chi_G^{\mathcal{A},(1)}(u_G,v_G) = \chi_H^{\mathcal{A},(1)}(u_H,v_H)$. If $u_G = v_G$, then $u_H = v_H$ (due to the node marking policy). If $\{u_G,v_G\} \in \mathcal{E}_G$, then $\{u_H,v_H\} \in \mathcal{E}_H$ (which follows by applying the local aggregation, similar to the proof of Lemma E.4). Therefore, $(u_G,v_G)$ and $(u_H,v_H)$ have the same isomorphism type.

Now assume that Lemma H.3 holds for $l \leq L$, and consider $l = L + 1$. Let $\chi_G^{\mathcal{A},(L+2)}(u_G, v_G) = \chi_H^{\mathcal{A},(L+2)}(u_H, v_H)$. Then,

$$
\mathsf{agg}(u_G, v_G, G, \chi_G^{\mathcal{A},(L+1)}) = \mathsf{agg}(u_H, v_H, H, \chi_H^{\mathcal{A},(L+1)}) \tag{20}
$$

holds for all $\mathsf{agg} \in \mathcal{A}$. We separately consider the possible strategies for Spoiler:

- If $\mathsf{agg}_{\mathsf{vu}}^{\mathsf{P}} \in \mathcal{A}$ and Spoiler chooses to swap pebbles $u$ and $v$.
Setting $\mathsf{agg} = \mathsf{agg}_{\mathsf{vu}}^{\mathsf{P}}$ in (20) yields $\chi_G^{\mathcal{A},(L+1)}(v_G, u_G) = \chi_H^{\mathcal{A},(L+1)}(v_H, u_H)$. The remaining game is equivalent to $\mathsf{G}^{\mathcal{A},L}(u, v)$ with $u = (v_G, v_H)$, $v = (u_G, u_H)$. By induction, Duplicator wins the game.
- If $\mathsf{agg}_{\mathsf{uu}}^{\mathsf{P}} \in \mathcal{A}$ and Spoiler chooses to move pebble $v$ to the position of pebble $u$. This case is similar to the above one, and we have $\chi_G^{\mathcal{A},(L+1)}(u_G, u_G) = \chi_H^{\mathcal{A},(L+1)}(u_H, u_H)$. The remaining game is equivalent to $\mathsf{G}^{\mathcal{A},L}(u, v)$ with $u = (u_G, u_H)$, $v = (u_G, u_H)$. By induction, Duplicator wins the game.
- If $\mathsf{agg}_{\mathsf{u}}^{\mathsf{G}} \in \mathcal{A}$ and Spoiler chooses a subset $\mathcal{S}^{\mathsf{S}}$. Setting $\mathsf{agg} = \mathsf{agg}_{\mathsf{u}}^{\mathsf{G}}$ in (20) yields

$$
\{\{\chi_G^{\mathcal{A},(L+1)}(u_G, w_G): w_G \in \mathcal{V}_G\}\} = \{\{\chi_H^{\mathcal{A},(L+1)}(u_H, w_H): w_H \in \mathcal{V}_H\}\}.
$$

If $\mathcal{S}^{\mathsf{S}}\subset \mathcal{V}_G$, then Duplicator can respond with a subset $\mathcal{S}^{\mathsf{D}}\subset \mathcal{V}_H$ such that

$$
\{\{\chi_G^{\mathcal{A},(L+1)}(u_G, w_G): w_G \in \mathcal{S}^{\mathsf{S}}\}\} = \{\{\chi_H^{\mathcal{A},(L+1)}(u_H, w_H): w_H \in \mathcal{S}^{\mathsf{D}}\}\}.
$$

If $\mathcal{S}^{\mathsf{S}}\subset \mathcal{V}_H$, then Duplicator can respond with a subset $\mathcal{S}^{\mathsf{D}}\subset \mathcal{V}_G$ such that

$$
\{\{\chi_G^{\mathcal{A},(L+1)}(u_G, w_G): w_G \in \mathcal{S}^{\mathsf{D}}\}\} = \{\{\chi_H^{\mathcal{A},(L+1)}(u_H, w_H): w_H \in \mathcal{S}^{\mathsf{S}}\}\}.
$$

In both cases, we clearly have $|\mathcal{S}^{\mathsf{S}}| = |\mathcal{S}^{\mathsf{D}}|$. Next, no matter how Spoiler moves pebble $v$ to a vertex $x^{\mathsf{S}} \in \mathcal{S}^{\mathsf{D}}$, Duplicator can always respond by moving the copy of pebble $v$ in the other graph to a vertex $x^{\mathsf{D}} \in \mathcal{S}^{\mathsf{S}}$, such that $\chi_G^{\mathcal{A},(L+1)}(u_G, \tilde{v}_G) = \chi_H^{\mathcal{A},(L+1)}(u_H, \tilde{v}_H)$, where $(\tilde{v}_G, \tilde{v}_H)$ is the new position of pebble $v$. The remaining game is equivalent to $\mathsf{G}^{\mathcal{A},L}(u, v)$ with $u = (u_G, u_H)$, $v = (\tilde{v}_G, \tilde{v}_H)$. By induction, Duplicator wins the game.

- If $\mathsf{agg}_{\mathsf{u}}^{\mathsf{L}}\in \mathcal{A}$, then the whole procedure is similar to the above one, except that the subsets $\mathcal{S}^{\mathsf{S}}$ and $\mathcal{S}^{\mathsf{D}}$ contain only vertices adjacent to pebble $v$.

The cases of $\mathsf{agg}_{\mathsf{v}}^{\mathsf{L}}, \mathsf{agg}_{\mathsf{v}}^{\mathsf{G}}, \mathsf{agg}_{\mathsf{vv}}^{\mathsf{P}}$ are symmetric to those of $\mathsf{agg}_{\mathsf{u}}^{\mathsf{L}}, \mathsf{agg}_{\mathsf{u}}^{\mathsf{G}}, \mathsf{agg}_{\mathsf{uu}}^{\mathsf{P}}$, so we omit them for clarity. This concludes the induction step.

Combining Lemmas H.2 and H.3 immediately yields the following theorem:

Theorem H.4. Let $\chi^{\mathcal{A}}$ be the stable color mapping of the SWL algorithm $\mathsf{A}(\mathcal{A},\mathsf{Pool})$, satisfying $\{\mathsf{agg}_{\mathsf{u}}^{\mathsf{L}},\mathsf{agg}_{\mathsf{v}}^{\mathsf{L}}\} \cap \mathcal{A}\neq \emptyset$. For any vertices $u_G,v_G\in \mathcal{V}_G$ and $u_H,v_H\in \mathcal{V}_H$, $\chi_G^{\mathcal{A}}(u_G,v_G) = \chi_H^{\mathcal{A}}(u_H,v_H)$ if and only if Duplicator can win the $l$-round $(u,v)$-pebbling game $\mathsf{G}^{\mathcal{A},l}(u;v)$ for every $l\in \mathbb{N}$ with $u = (u_G,u_H)$, $v = (v_G,v_H)$.

We next turn to FWL-type algorithms.
We can similarly define the $L$-round $(u, v)$-pebbling game $\mathsf{G}^L$ for FWL(2), LFWL(2), and SLFWL(2). We have the following theorem parallel to Theorem H.4.

Theorem H.5. Let $\chi$ be the stable color mapping of any FWL-type algorithm, i.e., FWL(2), LFWL(2), or SLFWL(2). For any vertices $u_G, v_G \in \mathcal{V}_G$ and $u_H, v_H \in \mathcal{V}_H$, $\chi_G(u_G, v_G) = \chi_H(u_H, v_H)$ if and only if Duplicator can win the corresponding $l$-round $(u, v)$-pebbling game $\mathsf{G}^l(u; v)$ for every $l \in \mathbb{N}$ with $u = (u_G, u_H)$, $v = (v_G, v_H)$.

Proof. The proof is highly similar to the proofs of Lemmas H.2 and H.3. For clarity, we only take LFWL(2) as an example. We use induction over $l$ to prove that for any $l \in \mathbb{N}$, $\chi_G^{(l)}(u_G, v_G) = \chi_H^{(l)}(u_H, v_H)$ if and only if Duplicator can win the $l$-round $(u, v)$-pebbling game $\mathsf{G}^l(u; v)$ with $u = (u_G, u_H)$, $v = (v_G, v_H)$. The base case of $l = 0$ is trivial.

For the induction step, suppose the result holds for $l \leq L$ and consider $l = L + 1$. Let $\chi_G^{(L+1)}(u_G,v_G) \neq \chi_H^{(L+1)}(u_H,v_H)$. If $\chi_G^{(L)}(u_G,v_G) \neq \chi_H^{(L)}(u_H,v_H)$, Spoiler wins by induction. Otherwise, by definition of LFWL(2) we have

$$
\{\{(\chi_G^{(L)}(u_G, w_G), \chi_G^{(L)}(w_G, v_G)): w_G \in \mathcal{N}_G^1(v_G)\}\} \neq \{\{(\chi_H^{(L)}(u_H, w_H), \chi_H^{(L)}(w_H, v_H)): w_H \in \mathcal{N}_H^1(v_H)\}\}.
$$

Therefore, there exists a color $c$ such that $|\mathcal{C}_G(u_G,v_G,c)|\neq |\mathcal{C}_H(u_H,v_H,c)|$, where we denote

$$
\mathcal{C}_G(u, v, c) = \{w \in \mathcal{N}_G^1(v): (\chi_G^{(L)}(u, w), \chi_G^{(L)}(w, v)) = c\}.
$$

Assume without loss of generality that $|\mathcal{C}_G(u_G,v_G,c)| > |\mathcal{C}_H(u_H,v_H,c)|$; then Spoiler can select the vertex subset $\mathcal{S}^{\mathsf{S}} = \mathcal{C}_G(u_G,v_G,c) \subset \mathcal{N}_G^1(v_G)$. No matter how Duplicator responds with $\mathcal{S}^{\mathsf{D}} \subset \mathcal{N}_H^1(v_H)$, there exists $w_H \in \mathcal{S}^{\mathsf{D}}$ such that $(\chi_H^{(L)}(u_H,w_H),\chi_H^{(L)}(w_H,v_H)) \neq c$. Spoiler thus selects this vertex $x^{\mathsf{S}} = w_H$, and no matter how Duplicator responds with $x^{\mathsf{D}} = w_G \in \mathcal{S}^{\mathsf{S}}$, we have either $\chi_G^{(L)}(u_G,w_G) \neq \chi_H^{(L)}(u_H,w_H)$ or $\chi_G^{(L)}(w_G,v_G) \neq \chi_H^{(L)}(w_H,v_H)$. Spoiler chooses to move pebble $v$ or pebble $u$, depending on which equality fails. The remaining game will then be equivalent to $\mathsf{G}^L(u,v)$ with $u = (\tilde{u}_G,\tilde{u}_H)$, $v = (\tilde{v}_G,\tilde{v}_H)$ such that $\chi_G^{(L)}(\tilde{u}_G,\tilde{v}_G) \neq \chi_H^{(L)}(\tilde{u}_H,\tilde{v}_H)$. By induction, Spoiler wins the game.

For the converse direction, the proof is similar and we omit it for clarity.

Finally, we complete the analysis by incorporating the different pooling paradigms into pebbling games. We will prove the following general result:

Lemma H.6. Let $\chi$ be the stable color mapping of any WL algorithm and let $\mathsf{G}$ be the corresponding pebbling game, such that $\chi_G(u_G,v_G) = \chi_H(u_H,v_H)$ if and only if Duplicator can win the $l$-round $(u,v)$-pebbling game $\mathsf{G}^l(u;v)$ for all $l\in \mathbb{N}$ with $u = (u_G,u_H)$, $v = (v_G,v_H)$.
Then,

- $\{\{\chi_G(u_G, v_G): u_G, v_G \in \mathcal{V}_G\}\} = \{\{\chi_H(u_H, v_H): u_H, v_H \in \mathcal{V}_H\}\}$ if and only if Duplicator can win the pebbling game when $u = (u_G, u_H)$, $v = (v_G, v_H)$ are selected according to the game rule of FWL-type algorithms defined in Section 6;
- $\{\{\{\{\chi_G(u_G, v_G): v_G \in \mathcal{V}_G\}\}: u_G \in \mathcal{V}_G\}\} = \{\{\{\{\chi_H(u_H, v_H): v_H \in \mathcal{V}_H\}\}: u_H \in \mathcal{V}_H\}\}$ if and only if Duplicator can win the pebbling game when $u = (u_G, u_H)$, $v = (v_G, v_H)$ are selected according to the game rule of VS pooling defined in Section 6;
- $\{\{\{\{\chi_G(u_G, v_G): u_G \in \mathcal{V}_G\}\}: v_G \in \mathcal{V}_G\}\} = \{\{\{\{\chi_H(u_H, v_H): u_H \in \mathcal{V}_H\}\}: v_H \in \mathcal{V}_H\}\}$ if and only if Duplicator can win the pebbling game when $u = (u_G, u_H)$, $v = (v_G, v_H)$ are selected according to the game rule of SV pooling defined in Section 6.

Proof. We only prove the second bullet; the other cases are similar. First assume $\{\{\{\{\chi_G(u_G,v_G): v_G\in \mathcal{V}_G\}\}: u_G\in \mathcal{V}_G\}\} = \{\{\{\{\chi_H(u_H,v_H): v_H\in \mathcal{V}_H\}\}: u_H\in \mathcal{V}_H\}\}$. According to the game rule, both players first place pebble $u$ based on a vertex selection procedure. Without loss of generality, suppose Spoiler chooses a subset $\mathcal{S}^{\mathsf{S}}\subset \mathcal{V}_G$. Then Duplicator can respond with a subset $\mathcal{S}^{\mathsf{D}}\subset \mathcal{V}_H$ such that

$$
\{\{\{\{\chi_G(u_G, v_G): v_G \in \mathcal{V}_G\}\}: u_G \in \mathcal{S}^{\mathsf{S}}\}\} = \{\{\{\{\chi_H(u_H, v_H): v_H \in \mathcal{V}_H\}\}: u_H \in \mathcal{S}^{\mathsf{D}}\}\}.
$$

Then no matter how Spoiler selects a vertex $x^{\mathrm{S}} = u_{H} \in \mathcal{S}^{\mathrm{D}}$, Duplicator can always select $x^{\mathrm{D}} = u_{G} \in \mathcal{S}^{\mathrm{S}}$ such that

$$
\{\{\chi_G(u_G, v_G): v_G \in \mathcal{V}_G\}\} = \{\{\chi_H(u_H, v_H): v_H \in \mathcal{V}_H\}\}.
$$

Similarly, after selecting the position of pebbles $v$, Duplicator always has a strategy to ensure that $\chi_G(u_G, v_G) = \chi_H(u_H, v_H)$. For the remaining game, Duplicator can win due to the assumption of Lemma H.6.

For the converse direction, assume $\{\{\{\{\chi_G(u_G,v_G):v_G\in \mathcal{V}_G\}\} :u_G\in \mathcal{V}_G\}\} \neq \{\{\{\{\chi_H(u_H,v_H):v_H\in \mathcal{V}_H\}\} :u_H\in \mathcal{V}_H\}\}$. Similar to the proof of global aggregation in Lemma H.2, Spoiler has a strategy to ensure that

$$
\{\{\chi_G(u_G, v_G): v_G \in \mathcal{V}_G\}\} \neq \{\{\chi_H(u_H, v_H): v_H \in \mathcal{V}_H\}\}
$$

after placing pebble $u$ at position $(u_{G}, u_{H})$. Again, after placing pebble $v$ at position $(v_{G}, v_{H})$, Spoiler has a strategy to ensure that $\chi_{G}(u_{G}, v_{G}) \neq \chi_{H}(u_{H}, v_{H})$. For the remaining game, Spoiler can win due to the assumption of Lemma H.6.

Consequently, Theorems 6.2 and 6.3 hold by Theorems H.4 and H.5 and Lemma H.6.

# I. Proof of Separation Results (Theorem 7.1)

This section contains the proof of the main result in this paper (Theorem 7.1). The proof is quite involved and is divided into three parts. First, we introduce a novel construction of counterexample graphs that are based on (and greatly extend) the work of Fürer (2001). We provide an in-depth analysis of the isomorphism properties of these counterexample graphs through a set of key theorems.
Then, in light of the special properties, we simplify the pebbling game developed in Section 6 for each type of SWL/FWL algorithm, specifically targeting these counterexample graphs. Finally, we prove all separation results in Section 7 using the pebbling game viewpoint and give concrete counterexample graphs for each pair of algorithms.

# I.1. Generalized Fürer graphs and their properties

We first introduce a class of graphs which we call Fürer graphs (Fürer, 2001).

Definition I.1 (Fürer graphs). Given any connected graph $F = (\mathcal{V}_F, \mathcal{E}_F)$, the Fürer graph $G(F) = (\mathcal{V}_G, \mathcal{E}_G)$ is constructed as follows:

$$
\begin{array}{l} \mathcal{V}_G = \left\{(x, \mathcal{X}): x \in \mathcal{V}_F, \mathcal{X} \subset \mathcal{N}_F(x), |\mathcal{X}| \bmod 2 = 0 \right\}, \\ \mathcal{E}_G = \{\{(x, \mathcal{X}), (y, \mathcal{Y})\} \subset \mathcal{V}_G: \{x, y\} \in \mathcal{E}_F, (x \in \mathcal{Y} \leftrightarrow y \in \mathcal{X})\}. \\ \end{array}
$$

Here, $x \in \mathcal{Y} \leftrightarrow y \in \mathcal{X}$ means that either ($x \in \mathcal{Y}$ and $y \in \mathcal{X}$) or ($x \notin \mathcal{Y}$ and $y \notin \mathcal{X}$). For each vertex $x \in \mathcal{V}_F$, denote the set

$$
\operatorname{Meta}_F(x) := \{(x, \mathcal{X}): \mathcal{X} \subset \mathcal{N}_F(x), |\mathcal{X}| \bmod 2 = 0\}, \tag{21}
$$

which is called the set of meta vertices of $G(F)$ associated to vertex $x$. Clearly, $\mathcal{V}_G = \bigcup_{x\in \mathcal{V}_F}\operatorname{Meta}_F(x)$.

![](images/1ddea466a0b5ed5378862c35ccf62c3bd8f005b5a4a1794c4fb5f01dffd8750a.jpg)

![](images/6bbd080895977b4963044fc2415a7d2d8376fa1a06fe0f663172f424fcb6daa6.jpg)
(a) Base graph $F$
(b) Fürer graph $G(F)$
(c) Twisted Fürer graph $H(F)$ for edge $\{2,4\}$
Figure 2. Illustration of the construction of the Fürer graph and the twisted Fürer graph.
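Definition I.1 is purely combinatorial, so the construction can be checked mechanically. Below is a minimal Python sketch (the function name `furer_graph` and the edge encoding are ours, not from the paper). For the base graph $F = K_4$, every base vertex has degree 3, giving $2^{3-1} = 4$ meta vertices per base vertex (hence 16 in total), and each base edge contributes $2\cdot 2 + 2\cdot 2 = 8$ edges, hence $6 \cdot 8 = 48$ edges in $G(F)$:

```python
from itertools import combinations

def furer_graph(V, E):
    """Fürer graph G(F) of a base graph F = (V, E), per Definition I.1.

    V: iterable of hashable vertices; E: collection of 2-element edges.
    Meta vertices are pairs (x, X) with X an even-size subset of N_F(x);
    {(x, X), (y, Y)} is an edge iff {x, y} is a base edge and
    (x in Y) <-> (y in X).
    """
    V = list(V)
    E = {frozenset(e) for e in E}
    # Neighborhoods N_F(x) in the base graph.
    N = {x: {y for e in E if x in e for y in e if y != x} for x in V}
    # Meta vertices: even-size subsets of each neighborhood.
    VG = [(x, frozenset(X))
          for x in V
          for k in range(0, len(N[x]) + 1, 2)
          for X in combinations(sorted(N[x]), k)]
    # Edges with the parity condition x in Y <-> y in X.
    EG = {frozenset({p, q})
          for p, q in combinations(VG, 2)
          if frozenset({p[0], q[0]}) in E and (p[0] in q[1]) == (q[0] in p[1])}
    return VG, EG

# Sanity check on F = K4: 16 meta vertices, 48 edges (handshake: 16*6/2).
VG, EG = furer_graph(range(4), combinations(range(4), 2))
assert len(VG) == 16 and len(EG) == 48
```

The counts match the degree formula used later in Proposition I.13: each meta vertex $(u,\mathcal{U})$ has degree $\sum_{v\in\mathcal{N}_F(u)} 2^{|\mathcal{N}_F(v)|-2}$, which is $3\cdot 2 = 6$ for $K_4$.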

We next define an operation called "twist":

Definition I.2 (Twist). Let $G(F) = (\mathcal{V}_G, \mathcal{E}_G)$ be the Fürer graph of $F = (\mathcal{V}_F, \mathcal{E}_F)$, and let $\{x, y\} \in \mathcal{E}_F$ be an edge of $F$. The twisted Fürer graph for edge $\{x, y\}$ is constructed as follows: $\operatorname{twist}(G(F), \{x, y\}) := (\mathcal{V}_G, \mathcal{E}_H)$, where

$$
\mathcal{E}_H := \mathcal{E}_G \triangle \{\{\xi, \eta\}: \xi \in \operatorname{Meta}_F(x), \eta \in \operatorname{Meta}_F(y)\}.
$$

Here, $\triangle$ is the symmetric difference operation, i.e., $\mathcal{A}\triangle \mathcal{B} = (\mathcal{A}\backslash \mathcal{B})\cup (\mathcal{B}\backslash \mathcal{A})$.

In other words, the twisted Fürer graph $\operatorname{twist}(G(F),\{x,y\})$ is obtained from $G(F)$ by deleting all edges of the form $\{(x,\mathcal{X}),(y,\mathcal{Y})\} \in \mathcal{E}_G$ and adding the following set of edges:

$$
\{\{(x, \mathcal{X}), (y, \mathcal{Y})\} \subset \mathcal{V}_G: (x \in \mathcal{Y} \leftrightarrow y \notin \mathcal{X})\}.
$$

We give an illustration of the construction of the Fürer graph and the twisted Fürer graph for a simple graph $F$ in Figure 2.

The twist operation can be further generalized to twisting a set of edges. We adopt the following notation:

$$
\operatorname{twist}(G(F), \mathcal{E}) := \operatorname{twist}(\dots \operatorname{twist}(G(F), e_1) \dots, e_k) \tag{22}
$$

given an edge set $\mathcal{E} = \{e_1, \dots, e_k\} \subset \mathcal{E}_F$. Note that the resulting graph $\operatorname{twist}(G(F), \mathcal{E})$ does not depend on the order of the twisted edges $e_1, \dots, e_k$, so (22) is well-defined.

The first key result below shows that if we twist any two edges of a Fürer graph, the resulting graph is isomorphic to the original graph.

Lemma I.3.
Let $G(F) = (\mathcal{V}_G, \mathcal{E}_G)$ be the Fürer graph of $F = (\mathcal{V}_F, \mathcal{E}_F)$. Then, for any two different edges $\{x, y\}$, $\{u, v\} \in \mathcal{E}_F$,

$$
\operatorname{twist}(G(F), \{\{x, y\}, \{u, v\}\}) \simeq G(F).
$$

Moreover, there exists an isomorphism $f: \mathcal{V}_G \to \mathcal{V}_G$ from $G(F)$ to $\operatorname{twist}(G(F), \{\{x,y\}, \{u,v\}\})$ that maps each meta vertex set $\operatorname{Meta}_F(x)$ to itself for all $x \in \mathcal{V}_F$.

Proof. Denote $\widehat{G}(F) \coloneqq \mathrm{twist}(G(F), \{\{x, y\}, \{u, v\}\})$. Since $F$ is connected, one can always find a simple path $(w_0, w_1, \dots, w_k)$, $k \geq 1$, with $\{w_0, w_1\} = \{x, y\}$ and $\{w_{k-1}, w_k\} = \{u, v\}$. Denote $\mathcal{P} = \{w_1, \dots, w_{k-1}\}$. Construct a mapping $f: \mathcal{V}_G \to \mathcal{V}_G$ as follows:

$$
f(z, \mathcal{Z}) = \left\{ \begin{array}{ll} \left(z, \mathcal{Z} \triangle \{w_{i-1}, w_{i+1}\}\right) & \text{if } z = w_i, i \in [k-1], \\ \left(z, \mathcal{Z}\right) & \text{if } z \notin \mathcal{P}. \end{array} \right. \tag{23}
$$

We will prove that $f$ is an isomorphism from $G(F)$ to $\widehat{G}(F)$. First, since $|\mathcal{Z}| \bmod 2 = 0$ implies that $|\mathcal{Z} \triangle \{w_{i-1}, w_{i+1}\}| \bmod 2 = 0$, $f$ is indeed a valid mapping from $\mathcal{V}_G$ to $\mathcal{V}_G$. Also, it is straightforward to see that $f$ is bijective. It remains to verify that for any edge $\{(z, \mathcal{Z}), (z', \mathcal{Z}')\} \in \mathcal{E}_G$, $\{f(z, \mathcal{Z}), f(z', \mathcal{Z}')\}$ is an edge of $\widehat{G}(F)$. Separately consider the following cases:

- If $z, z' \notin \mathcal{P}$, then $\{f(z, \mathcal{Z}), f(z', \mathcal{Z}')\} = \{(z, \mathcal{Z}), (z', \mathcal{Z}')\}$ is clearly an edge of $\widehat{G}(F)$.
- If $z, z' \in \mathcal{P}$, denote $z = w_i$ and $z' = w_j$.
Then it is straightforward to see that $w_i \in \mathcal{Z}' \leftrightarrow w_j \in \mathcal{Z}$ if and only if $w_i \in \mathcal{Z}'\triangle \{w_{j-1}, w_{j+1}\} \leftrightarrow w_j \in \mathcal{Z}\triangle \{w_{i-1}, w_{i+1}\}$. Therefore, $\{f(z, \mathcal{Z}), f(z', \mathcal{Z}')\} = \{(z, \mathcal{Z}\triangle \{w_{i-1}, w_{i+1}\}), (z', \mathcal{Z}'\triangle \{w_{j-1}, w_{j+1}\})\}$ is an edge of $\widehat{G}(F)$.
- If $z = w_{i} \in \mathcal{P}$, $z' \notin \mathcal{P}$, $\{z, z'\} \neq \{x, y\}$, and $\{z, z'\} \neq \{u, v\}$, then $z' \neq w_{i-1}$ and $z' \neq w_{i+1}$. Therefore, $z \in \mathcal{Z}' \leftrightarrow z' \in \mathcal{Z}$ if and only if $z \in \mathcal{Z}' \leftrightarrow z' \in \mathcal{Z} \triangle \{w_{i-1}, w_{i+1}\}$. This implies that $\{f(z, \mathcal{Z}), f(z', \mathcal{Z}')\} = \{(z, \mathcal{Z} \triangle \{w_{i-1}, w_{i+1}\}), (z', \mathcal{Z}')\}$ is an edge of $\widehat{G}(F)$.
- If $\{z, z'\} = \{x, y\}$, we can denote $z = w_0$ and $z' = w_1$. We have $z \in \mathcal{Z}' \leftrightarrow z' \in \mathcal{Z}$ if and only if $z \notin \mathcal{Z}' \triangle \{w_0, w_2\} \leftrightarrow z' \in \mathcal{Z}$. Note that there is a twist in $\widehat{G}(F)$ for edge $\{x, y\}$, so we still obtain that $\{f(z, \mathcal{Z}), f(z', \mathcal{Z}')\} = \{(z, \mathcal{Z}), (z', \mathcal{Z}' \triangle \{w_0, w_2\})\}$ is an edge of $\widehat{G}(F)$.
- Finally, if $\{z,z'\} = \{u,v\}$, the analysis is the same as the above one, and $\{f(z,\mathcal{Z}),f(z',\mathcal{Z}')\}$ is an edge of $\widehat{G}(F)$.

In all cases, $\{f(z,\mathcal{Z}),f(z',\mathcal{Z}')\}$ is an edge of $\widehat{G}(F)$. Moreover, it is clear that $f$ maps each meta vertex set $\operatorname{Meta}_F(x)$ to itself for all $x\in \mathcal{V}_F$, which concludes the proof.

Based on Lemma I.3, it is convenient to define a notion called proper isomorphism:

Definition I.4 (Proper isomorphism).
Let $G(F) = (\mathcal{V}_G, \mathcal{E}_G)$ be the Fürer graph of $F = (\mathcal{V}_F, \mathcal{E}_F)$ and $\widehat{G}(F) = \operatorname{twist}(G(F), \mathcal{E})$ for some $\mathcal{E} \subset \mathcal{E}_F$. We say $f$ is a proper isomorphism from $G(F)$ to $\widehat{G}(F)$ if $f$ is an isomorphism from $G(F)$ to $\widehat{G}(F)$ that maps each meta vertex set $\operatorname{Meta}_F(x)$ to itself for all $x \in \mathcal{V}_F$.

Lemma I.3 can be generalized into the following corollary:

Corollary I.5. Let $G(F) = (\mathcal{V}_G, \mathcal{E}_G)$ be the Fürer graph of $F = (\mathcal{V}_F, \mathcal{E}_F)$. Then, for any edge set $\mathcal{E} \subset \mathcal{E}_F$ and any two different edges $\{x, y\}, \{u, v\} \in \mathcal{E}_F$,

$$
\operatorname{twist}(G(F), \mathcal{E} \triangle \{\{x, y\}, \{u, v\}\}) \simeq \operatorname{twist}(G(F), \mathcal{E}).
$$

Moreover, any proper isomorphism $f: \mathcal{V}_G \to \mathcal{V}_G$ from $G(F)$ to $\operatorname{twist}(G(F), \{\{x, y\}, \{u, v\}\})$ is also a proper isomorphism from $\operatorname{twist}(G(F), \mathcal{E})$ to $\operatorname{twist}(G(F), \mathcal{E} \triangle \{\{x, y\}, \{u, v\}\})$.

Proof. Denote $\widehat{G}(F) \coloneqq \mathrm{twist}(G(F), \{\{u, v\}, \{x, y\}\})$, $H(F) \coloneqq \mathrm{twist}(G(F), \mathcal{E})$, and $\widehat{H}(F) \coloneqq \mathrm{twist}(\widehat{G}(F), \mathcal{E})$. Note that by definition of the twist operation, we equivalently have $\widehat{H}(F) = \mathrm{twist}(G(F), \mathcal{E} \triangle \{\{x, y\}, \{u, v\}\})$. Due to Lemma I.3, we have $\widehat{G}(F) \simeq G(F)$. Let $f$ be a proper isomorphism from $G(F)$ to $\widehat{G}(F)$ (according to Lemma I.3). It suffices to prove that $f$ is also an isomorphism from $H(F)$ to $\widehat{H}(F)$.

For any edge $\{(w,\mathcal{W}),(z,\mathcal{Z})\}$ in $H(F)$:

- If $\{w, z\} \in \mathcal{E}$, then $\{(w, \mathcal{W}), (z, \mathcal{Z})\}$ is not an edge in $G(F)$.
Therefore, $\{f(w, \mathcal{W}), f(z, \mathcal{Z})\}$ is not an edge in $\widehat{G}(F)$. Since $f$ maps $\operatorname{Meta}_F(w)$ to $\operatorname{Meta}_F(w)$ and maps $\operatorname{Meta}_F(z)$ to $\operatorname{Meta}_F(z)$, we obtain that $\{f(w, \mathcal{W}), f(z, \mathcal{Z})\}$ is an edge in $\widehat{H}(F)$.
- If $\{w, z\} \notin \mathcal{E}$, then $\{(w, \mathcal{W}), (z, \mathcal{Z})\}$ is an edge in $G(F)$. Therefore, $\{f(w, \mathcal{W}), f(z, \mathcal{Z})\}$ is an edge in $\widehat{G}(F)$. Since $f$ maps $\operatorname{Meta}_F(w)$ to $\operatorname{Meta}_F(w)$ and maps $\operatorname{Meta}_F(z)$ to $\operatorname{Meta}_F(z)$, we also obtain that $\{f(w, \mathcal{W}), f(z, \mathcal{Z})\}$ is an edge in $\widehat{H}(F)$.

In both cases, $\{f(w,\mathcal{W}),f(z,\mathcal{Z})\}$ is an edge in $\widehat{H}(F)$. Since Lemma I.3 has proved that $f$ is bijective, $f$ is an isomorphism from $H(F)$ to $\widehat{H}(F)$ and thus $H(F)\simeq \widehat{H}(F)$.

As a special case, Corollary I.5 leads to the following important fact:

Corollary I.6. Let $G(F)$ be the Fürer graph of $F = (\mathcal{V}_F, \mathcal{E}_F)$. Then, for any two edges $\{x, y\}$, $\{u, v\} \in \mathcal{E}_F$,

$$
\operatorname{twist}(G(F), \{x, y\}) \simeq \operatorname{twist}(G(F), \{u, v\}).
$$

Proof. Setting $\mathcal{E} = \{\{u, v\}\}$ in Corollary I.5 readily concludes the proof.

Corollary I.6 shows that the structure of a twisted Fürer graph does not depend on which edge is twisted. Therefore, we can simply denote $H(F)$ as the twisted Fürer graph of $F$ without specifying the twisted edge $\{x,y\}$. Moreover, recursively applying Corollary I.5 shows that, if we twist $k$ edges of a Fürer graph $G(F)$, the resulting graph is isomorphic to either $G(F)$ or $H(F)$, depending on whether $k$ is even or odd. To complete the result, we show that $G(F)$ and $H(F)$ are never related by a proper isomorphism:

Lemma I.7.
Let $G(F) = (\mathcal{V}_G, \mathcal{E}_G)$ be the Fürer graph of $F = (\mathcal{V}_F, \mathcal{E}_F)$ , and let $H(F) = (\mathcal{V}_G, \mathcal{E}_H)$ be the twisted Fürer graph. Then there does not exist a proper isomorphism $f: \mathcal{V}_G \to \mathcal{V}_G$ from $G(F)$ to $H(F)$ . + +Proof. Assume $H(F) = \mathrm{twist}(G(F),\{u,v\})$ for some $\{u,v\} \in \mathcal{E}_F$ and $f:\mathcal{V}_G\to \mathcal{V}_G$ is a proper isomorphism. Then we can write $f(x,\emptyset) = (x,\mathcal{T}_x)$ for all $x\in \mathcal{V}_F$ . Note that for any $\{x,y\} \in \mathcal{E}_F$ , by definition of the Fürer graph there is an edge between vertices $(x,\emptyset)$ and $(y,\emptyset)$ in $G(F)$ . However, we will prove that this is not the case for $H(F)$ : there must exist an odd number of edges $\{x,y\} \in \mathcal{E}_F$ such that $\{(x,\mathcal{T}_x),(y,\mathcal{T}_y)\} \notin \mathcal{E}_H$ . This will lead to a contradiction and finish the proof. + +Formally, let $\mathcal{E} = \{\{(x,\mathcal{T}_x),(y,\mathcal{T}_y)\} : \{x,y\} \in \mathcal{E}_F\}$ , and our goal is to prove that $|\mathcal{E}\backslash \mathcal{E}_H| \mod 2 = 1$ . The proof is based on induction. First, consider the base case when $\mathcal{T}_x = \emptyset$ for all $x \in \mathcal{V}_F$ . Clearly, there is exactly one element $\{(u,\mathcal{T}_u),(v,\mathcal{T}_v)\} \notin \mathcal{E}_H$ since $H(F)$ is obtained from $G(F)$ by twisting edge $\{u,v\}$ . 
Next, for the induction step, we show that if $|\mathcal{E}\backslash \mathcal{E}_H| \bmod 2 = 1$, then $|\tilde{\mathcal{E}}\backslash \mathcal{E}_H| \bmod 2 = 1$ holds for any $\tilde{\mathcal{E}}$ that is modified from $\mathcal{E}$ by changing a given $\mathcal{T}_z$ to another feasible $\tilde{\mathcal{T}}_z$ for some $z \in \mathcal{V}_F$, namely,

$$
\tilde{\mathcal{E}} = \{\{(x, \mathcal{T}_x), (y, \mathcal{T}_y)\}: \{x, y\} \in \mathcal{E}_F, x, y \neq z\} \cup \{\{(x, \mathcal{T}_x), (z, \tilde{\mathcal{T}}_z)\}: \{x, z\} \in \mathcal{E}_F\}.
$$

This is because

$$
\begin{array}{l} |\tilde{\mathcal{E}} \backslash \mathcal{E}_H| - |\mathcal{E} \backslash \mathcal{E}_H| \equiv |(\tilde{\mathcal{E}} \triangle \mathcal{E}) \backslash \mathcal{E}_H| \\ \equiv |\{x \in \mathcal{N}_F(z): x \in \mathcal{T}_z \leftrightarrow x \notin \tilde{\mathcal{T}}_z\}| \\ \equiv \left|\mathcal{T}_z \triangle \tilde{\mathcal{T}}_z\right| \equiv 0 \pmod{2}, \\ \end{array}
$$

where the last congruence holds because $|\mathcal{T}_z| \equiv |\tilde{\mathcal{T}}_z| \equiv 0 \pmod{2}$. This concludes the induction step.

Since any set $\mathcal{E}$ can be obtained from the initial set $\{\{(x,\emptyset),(y,\emptyset)\} : \{x,y\} \in \mathcal{E}_F\}$ by modifying $\emptyset$ to $\mathcal{T}_z$ for each $z \in \mathcal{V}_F$, and the parity of $|\mathcal{E}\backslash \mathcal{E}_H|$ does not change throughout the process, we have concluded the proof.

Below, we proceed to perform an in-depth analysis of the properties regarding the isomorphisms of (twisted) Fürer graphs. We need several definitions.

Definition I.8 (Connected components). Let $F = (\mathcal{V}_F, \mathcal{E}_F)$ be a connected graph and let $S \subset \mathcal{V}_F$ be a vertex set, called separation vertices.
We say two edges $\{u, v\}, \{x, y\} \in \mathcal{E}_F$ are in the same connected component if there is a path $(y_0, y_1, \dots, y_k)$ satisfying that $\{y_0, y_1\} = \{u, v\}$, $\{y_{k-1}, y_k\} = \{x, y\}$, and $y_i \notin S$ for all $i \in [k-1]$. It is easy to see that the above relationship between edges forms an equivalence relation. Therefore, we can define a partition over the edge set: $\mathbb{C}\mathbb{C}_S(F) = \{\mathcal{P}_i : i \in [M]\}$, where each $\mathcal{P}_i \subset \mathcal{E}_F$ is called a connected component.

We are ready to state the central theorem:

Theorem I.9. Let $G(F) = (\mathcal{V}_G, \mathcal{E}_G)$ be either the original or twisted Fürer graph of $F = (\mathcal{V}_F, \mathcal{E}_F)$, and let $S \subset \mathcal{V}_F$ be any set. For each $u \in S$, let $(u, \mathcal{T}_u), (u, \mathcal{U}_u) \in \operatorname{Meta}_F(u)$ be any given meta vertices. Then, there exists a proper isomorphism $f$ from graph $G(F)$ to graph $\operatorname{twist}(G(F), \mathcal{E})$ for some $\mathcal{E} \subset \mathcal{E}_F$ with $|\mathcal{E}| \bmod 2 = 0$, such that $f(u, \mathcal{T}_u) = (u, \mathcal{U}_u)$ for all $u \in S$. Moreover, for any $\tilde{\mathcal{E}} \subset \mathcal{E}_F$, there exists a proper isomorphism $\tilde{f}$ from $G(F)$ to $\operatorname{twist}(G(F), \tilde{\mathcal{E}})$ such that $\tilde{f}(u, \mathcal{T}_u) = (u, \mathcal{U}_u)$ for all $u \in S$ if and only if $|\mathcal{P} \cap \tilde{\mathcal{E}}| \equiv |\mathcal{P} \cap \mathcal{E}| \pmod{2}$ for all $\mathcal{P} \in \mathbb{C}\mathbb{C}_S(F)$.

Proof. We only consider the case when $G(F)$ is a Fürer graph; the case of the twisted Fürer graph is similar. We prove the theorem by induction over the size of $S$. For the base case of $|S| = 0$, there is clearly a trivial isomorphism (the identity map) from $G(F)$ to $\operatorname{twist}(G(F), \emptyset) = G(F)$.

Now assume that the result holds for set $S$, i.e., there exists a proper isomorphism $f$ from $G(F)$ to $\mathrm{twist}(G(F),\mathcal{E})$ for some $\mathcal{E}$ with $|\mathcal{E}| \bmod 2 = 0$, such that $f(u,\mathcal{T}_u) = (u,\mathcal{U}_u)$ for all $u \in S$. Consider adding a vertex $v \in \mathcal{V}_F \backslash S$ and two given meta vertices $(v,\mathcal{T}_v),(v,\mathcal{U}_v) \in \operatorname{Meta}_F(v)$. We will construct a new proper isomorphism $f^{\mathrm{new}}$ from $G(F)$ to $\mathrm{twist}(G(F),\mathcal{E}^{\mathrm{new}})$ for some $\mathcal{E}^{\mathrm{new}} \subset \mathcal{E}_F$ with $|\mathcal{E}^{\mathrm{new}}| \bmod 2 = 0$, such that $f^{\mathrm{new}}(u, \mathcal{T}_u) = (u, \mathcal{U}_u)$ for all $u \in S$ and $f^{\mathrm{new}}(v, \mathcal{T}_v) = (v, \mathcal{U}_v)$. Write $f(v, \mathcal{T}_v) = (v, \tilde{\mathcal{U}}_v)$ and denote $\mathcal{D}_v = \mathcal{U}_v \triangle \tilde{\mathcal{U}}_v$. Note that the size of $\mathcal{D}_v$ is even.

Denote $\mathcal{D}_v = \{x_1,\dots ,x_{2k}\}$. We define $k$ mappings $f_{i}:\mathcal{V}_{G}\to \mathcal{V}_{G}$, $i\in [k]$, as follows:

$$
f_i(w, \mathcal{W}) = \left\{ \begin{array}{ll} (w, \mathcal{W} \triangle \{x_{2i-1}, x_{2i}\}) & \text{if } w = v, \\ (w, \mathcal{W}) & \text{otherwise.} \end{array} \right. \tag{24}
$$

We set $f^{\mathrm{new}}$ to be the composition of a series of mappings: $f^{\mathrm{new}} \coloneqq f_k \circ \dots \circ f_1 \circ f$. By definition, we have

$$
f^{\mathrm{new}}(v, \mathcal{T}_v) = (f_k \circ \dots \circ f_1)(v, \tilde{\mathcal{U}}_v) = (v, \tilde{\mathcal{U}}_v \triangle \mathcal{D}_v) = (v, \mathcal{U}_v),
$$

and for all $u\in S$,

$$
f^{\mathrm{new}}(u, \mathcal{T}_u) = \left(f_k \circ \dots \circ f_1\right)(u, \mathcal{U}_u) = (u, \mathcal{U}_u).
$$

It remains to verify that $f^{\mathrm{new}}$ is an isomorphism from $G(F)$ to $\mathrm{twist}(G(F),\mathcal{E}^{\mathrm{new}})$ for some $\mathcal{E}^{\mathrm{new}}$ with $|\mathcal{E}^{\mathrm{new}}| \bmod 2 = 0$. Denote $\mathcal{E}^{(i)} = \bigcup_{j=1}^{2i}\{\{v,x_j\}\}$ for $i \in \{0,1,\dots,k\}$. Based on the proof of Lemma I.3 (i.e., the construction in (23)), $f_i$ is an isomorphism from $G(F)$ to $\mathrm{twist}(G(F),\{\{v,x_{2i-1}\},\{v,x_{2i}\}\})$. Further using Corollary I.5, $f_i$ is also an isomorphism from $\mathrm{twist}(G(F),\mathcal{E}\triangle \mathcal{E}^{(i-1)})$ to $\mathrm{twist}(G(F),\mathcal{E}\triangle \mathcal{E}^{(i)})$. Thus, by composition, $f^{\mathrm{new}}$ is an isomorphism from $G(F)$ to $\mathrm{twist}(G(F),\mathcal{E}\triangle \mathcal{E}^{(k)})$; namely, $\mathcal{E}^{\mathrm{new}} = \mathcal{E}\triangle \mathcal{E}^{(k)}$. Also, since $|\mathcal{E}^{(k)}| = 2k$ and $|\mathcal{E}| \bmod 2 = 0$, we have $|\mathcal{E}^{\mathrm{new}}| \bmod 2 = 0$, as desired. This finishes the induction step and concludes the proof of the first part.

For the second part, let us first consider how to find a proper isomorphism $\tilde{f}$ from $G(F)$ to $\mathrm{twist}(G(F),\tilde{\mathcal{E}})$ for other $\tilde{\mathcal{E}}$ satisfying $|\mathcal{P}\cap \tilde{\mathcal{E}}|\equiv |\mathcal{P}\cap \mathcal{E}| \pmod{2}$ for all $\mathcal{P}\in \mathbb{C}\mathbb{C}_S(F)$, such that $\tilde{f}(u,\mathcal{T}_u) = (u,\mathcal{U}_u)$ for all $u\in S$. For each $\mathcal{P}\in \mathbb{C}\mathbb{C}_S(F)$, denote $\mathcal{D}_{\mathcal{P}}\coloneqq (\mathcal{E}\triangle \tilde{\mathcal{E}})\cap \mathcal{P}$. Clearly, $\bigcup_{\mathcal{P}\in \mathbb{C}\mathbb{C}_S(F)}\mathcal{D}_{\mathcal{P}} = \mathcal{E}\triangle \tilde{\mathcal{E}}$.
By assumption, we have

$$
\begin{array}{l} |\mathcal{D}_{\mathcal{P}}| = |(\mathcal{E} \triangle \tilde{\mathcal{E}}) \cap \mathcal{P}| = |(\mathcal{E} \cap \mathcal{P}) \triangle (\tilde{\mathcal{E}} \cap \mathcal{P})| \\ \equiv |\mathcal{E} \cap \mathcal{P}| + |\tilde{\mathcal{E}} \cap \mathcal{P}| \equiv 0 \pmod{2}. \\ \end{array}
$$

Therefore, there is a proper isomorphism $f^{\mathcal{P}}$ from $G(F)$ to $\mathrm{twist}(G(F),\mathcal{D}_{\mathcal{P}})$ based on Lemma I.3 and Corollary I.5. Concretely, denote $\mathcal{D}_{\mathcal{P}} = \{e_1,\dots ,e_{2k}\}$; then $f^{\mathcal{P}}$ can be constructed as a composition $f^{\mathcal{P}} = f_k^{\mathcal{P}}\circ \dots \circ f_1^{\mathcal{P}}$, where each $f_{i}^{\mathcal{P}}$ is a proper isomorphism from $\mathrm{twist}(G(F),\{e_1,\dots,e_{2(i-1)}\})$ to $\mathrm{twist}(G(F),\{e_1,\dots,e_{2i}\})$. Since all edges $e_j\in \mathcal{D}_{\mathcal{P}}$ are in the same connected component, there exists a path containing edges $e_{2i-1}$ and $e_{2i}$ whose internal vertices avoid $S$. Therefore, by the construction of (23) in Lemma I.3, none of the mappings $f_{i}^{\mathcal{P}}$ changes the value on the inputs $\operatorname{Meta}_F(u)$ for any $u\in S$. Namely, $f^{\mathcal{P}}(u,\mathcal{U}_u) = (u,\mathcal{U}_u)$ for all $u\in S$. Finally, we set $\tilde{f} = (\circ_{\mathcal{P}\in \mathbb{C}\mathbb{C}_{S}(F)}f^{\mathcal{P}})\circ f$ to be the composition of $f$ and all $f^{\mathcal{P}}$. We have $\tilde{f}(u,\mathcal{T}_u) = (\circ_{\mathcal{P}\in \mathbb{C}\mathbb{C}_S(F)}f^{\mathcal{P}})(u,\mathcal{U}_u) = (u,\mathcal{U}_u)$, as desired. Moreover, $\tilde{f}$ is indeed a proper isomorphism from $G(F)$ to $\mathrm{twist}(G(F),\mathcal{E}\triangle (\bigcup_{\mathcal{P}\in \mathbb{C}\mathbb{C}_{S}(F)}\mathcal{D}_{\mathcal{P}})) = \mathrm{twist}(G(F),\tilde{\mathcal{E}})$.

Conversely, suppose $\tilde{\mathcal{E}}$ satisfies that $|\mathcal{P} \cap \tilde{\mathcal{E}}| \not\equiv |\mathcal{P} \cap \mathcal{E}| \pmod{2}$ for some $\mathcal{P} \in \mathbb{C}\mathbb{C}_S(F)$. We will prove that any proper isomorphism $\hat{f}$ from $G(F)$ to $\mathrm{twist}(G(F),\tilde{\mathcal{E}})$ cannot satisfy $\hat{f}(u,\mathcal{T}_u) = (u,\mathcal{U}_u)$ for all $u \in S$. To prove the result, it suffices to prove that any proper isomorphism $\hat{f}$ from $\mathrm{twist}(G(F),\mathcal{E})$ to $\mathrm{twist}(G(F),\tilde{\mathcal{E}})$ cannot satisfy $\hat{f}(u,\mathcal{U}_u) = (u,\mathcal{U}_u)$ for all $u \in S$. Let $\mathcal{V}_F^{\mathcal{P}} := \bigcup_{\{x,y\} \in \mathcal{P}}\{x,y\} \subset \mathcal{V}_F$ be the set of vertices associated to the connected component $\mathcal{P}$, and let $\mathcal{V}_G^{\mathcal{P}} := \bigcup_{x \in \mathcal{V}_F^{\mathcal{P}}} \operatorname{Meta}_F(x)$. It thus suffices to prove that any proper isomorphism $f^{\mathcal{P}}: \mathcal{V}_G^{\mathcal{P}} \to \mathcal{V}_G^{\mathcal{P}}$ from the induced subgraph $G^{\mathcal{P}} := \mathrm{twist}(G(F),\mathcal{E})[\mathcal{V}_G^{\mathcal{P}}]$ to the induced subgraph $H^{\mathcal{P}} := \mathrm{twist}(G(F),\tilde{\mathcal{E}})[\mathcal{V}_G^{\mathcal{P}}]$ cannot satisfy $f^{\mathcal{P}}(u,\mathcal{U}_u) = (u,\mathcal{U}_u)$ for all $u \in S \cap \mathcal{V}_F^{\mathcal{P}}$.

The proof follows the same technique as Lemma I.7. For each $x \in \mathcal{V}_F^{\mathcal{P}} \backslash S$, pick an arbitrary meta vertex $(x, \mathcal{U}_x) \in \operatorname{Meta}_F(x)$. Combined with all $(u, \mathcal{U}_u)$ for $u \in S \cap \mathcal{V}_F^{\mathcal{P}}$, now each vertex $x \in \mathcal{V}_F^{\mathcal{P}}$ is associated with a set $\mathcal{U}_x$. First consider the base case when $f^{\mathcal{P}}(x, \mathcal{U}_x) = (x, \mathcal{U}_x)$ for all $x \in \mathcal{V}_F^{\mathcal{P}}$.
It can be proved that the set $\{\{(x, \mathcal{U}_x), (y, \mathcal{U}_y)\} : \{x, y\} \in \mathcal{P}\}$ contains an odd/even number of edges in $G^{\mathcal{P}}$ but an even/odd number of edges in $H^{\mathcal{P}}$ , i.e., their parity differs. This is because $H^{\mathcal{P}}$ can be obtained from $G^{\mathcal{P}}$ by twisting the edge set $(\mathcal{E} \triangle \tilde{\mathcal{E}}) \cap \mathcal{P}$ , which contains an odd number of edges. Therefore, $f^{\mathcal{P}}$ is not a proper isomorphism from $G^{\mathcal{P}}$ to $H^{\mathcal{P}}$ in this case. For the induction step, consider gradually changing the output $f^{\mathcal{P}}(w, \mathcal{U}_w) = (w, \mathcal{U}_w)$ to $f^{\mathcal{P}}(w, \mathcal{U}_w) = (w, \tilde{\mathcal{U}}_w)$ for each $w \in \mathcal{V}_F^{\mathcal{P}} \backslash S$ where $(w, \tilde{\mathcal{U}}_w) \in \operatorname{Meta}_F(w)$ can be an arbitrary meta vertex. It can be proved that the parity defined above does not change throughout the process (following the proof of Lemma I.7). We have thus proved the induction step, and in all cases there does not exist a proper isomorphism $f^{\mathcal{P}}: \mathcal{V}_G^{\mathcal{P}} \to \mathcal{V}_G^{\mathcal{P}}$ from $G^{\mathcal{P}}$ to $H^{\mathcal{P}}$ satisfying $f^{\mathcal{P}}(u, \mathcal{U}_u) = (u, \mathcal{U}_u)$ for all $u \in S \cap \mathcal{V}_F^{\mathcal{P}}$ . + +Theorem I.9 partially answers the question of how to construct a proper isomorphism $f$ from a Fürer graph $G(F)$ to another Fürer graph twist $(G(F), \mathcal{E})$ when the mapped outputs $f(\xi)$ are specified for several given inputs $\xi \in \mathcal{V}_G$ . However, it does not fully address the problem, because it does not consider the case when two or more inputs $\xi$ are from the same set $\operatorname{Meta}_F(u)$ for some $u \in \mathcal{V}_F$ . In the following, we will focus on this general setting. 
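The parity bookkeeping in these proofs is small enough to check by machine. The sketch below (illustrative code, not from the paper) rebuilds $G(F)$ for $F = K_4$ per Definition I.1, applies the twist of Definition I.2 to the two adjacent edges $\{0,1\}$ and $\{1,2\}$, and verifies that the explicit map (23) along the path $(0,1,2)$ (so $\mathcal{P} = \{1\}$) is a proper isomorphism, as Lemma I.3 asserts:

```python
from itertools import combinations

# Base graph F = K4 and its Fürer graph G(F) (Definition I.1).
V = range(4)
E = {frozenset(e) for e in combinations(V, 2)}
N = {x: {y for e in E if x in e for y in e if y != x} for x in V}
VG = [(x, frozenset(X)) for x in V
      for k in range(0, len(N[x]) + 1, 2)
      for X in combinations(sorted(N[x]), k)]
EG = {frozenset({p, q}) for p, q in combinations(VG, 2)
      if frozenset({p[0], q[0]}) in E and (p[0] in q[1]) == (q[0] in p[1])}

def twist(edges, x, y):
    """Definition I.2: flip every pair between Meta(x) and Meta(y)."""
    flip = {frozenset({a, b}) for a in VG if a[0] == x
            for b in VG if b[0] == y}
    return edges ^ flip  # symmetric difference of edge sets

# Twist the two edges {0,1} and {1,2}.
EH = twist(twist(EG, 0, 1), 1, 2)

def f(vertex):
    """The map (23) for the path (0, 1, 2): only Meta(1) moves,
    via Z -> Z △ {0, 2}; all other meta vertices stay fixed."""
    z, Z = vertex
    return (z, Z ^ frozenset({0, 2})) if z == 1 else vertex

# f fixes each Meta_F(x) setwise and carries E_G exactly onto E_H:
# a proper isomorphism, confirming Lemma I.3 on this example.
assert {frozenset({f(a), f(b)}) for a, b in map(tuple, EG)} == EH
```

Twisting a single edge instead (an odd number of twists) produces an edge set on which the same check fails, in line with Lemma I.7.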
We first consider the special case when all $\xi$ are from the same meta vertex set $\operatorname{Meta}_F(u)$. We have the following result:

Lemma I.10. Let $G(F) = (\mathcal{V}_G, \mathcal{E}_G)$ be either the original or twisted Fürer graph of $F = (\mathcal{V}_F, \mathcal{E}_F)$. Let $u \in \mathcal{V}_F$ and $(u, \mathcal{T}^1), (u, \mathcal{U}^1), \dots, (u, \mathcal{T}^k), (u, \mathcal{U}^k) \in \operatorname{Meta}_F(u)$ be vertices in $\mathcal{V}_G$. Then, there exists a set $\mathcal{D} \subset \mathcal{N}_F(u)$ such that $\mathcal{T}^i \triangle \mathcal{U}^i = \mathcal{D}$ for all $i \in [k]$, if and only if there exists a proper isomorphism $f$ from graph $G(F)$ to graph $\operatorname{twist}(G(F), \mathcal{E})$ for some $\mathcal{E} \subset \mathcal{E}_F$ with $|\mathcal{E}| \bmod 2 = 0$, such that $f(u, \mathcal{T}^i) = (u, \mathcal{U}^i)$ for all $i \in [k]$.

Proof. “$\Rightarrow$”. Let $\mathcal{D} \subset \mathcal{N}_F(u)$ satisfy $\mathcal{T}^i \triangle \mathcal{U}^i = \mathcal{D}$ for all $i \in [k]$. Clearly, $|\mathcal{D}| \bmod 2 = 0$. Denote $\mathcal{D} = \{v_1, \dots, v_{2l}\}$. Similar to the construction in (24), construct a mapping $f: \mathcal{V}_G \to \mathcal{V}_G$ as $f = f_l \circ \dots \circ f_1$, where

$$
f_j(w, \mathcal{W}) = \left\{ \begin{array}{ll} (w, \mathcal{W} \triangle \{v_{2j-1}, v_{2j}\}) & \text{if } w = u, \\ (w, \mathcal{W}) & \text{otherwise}. \end{array} \right.
$$

Using a similar analysis, we obtain that $f$ is a proper isomorphism from $G(F)$ to $\mathrm{twist}(G(F), \bigcup_{j=1}^{2l} \{\{u, v_j\}\})$ and $f(u, \mathcal{T}^i) = (u, \mathcal{T}^i \triangle \mathcal{D}) = (u, \mathcal{U}^i)$ for all $i \in [k]$.

“$\Leftarrow$”. Assume there does not exist $\mathcal{D} \subset \mathcal{N}_F(u)$ satisfying $\mathcal{T}^i \triangle \mathcal{U}^i = \mathcal{D}$ for all $i \in [k]$.
Then, there must exist two indices $i, j$ and a vertex $v \in \mathcal{N}_F(u)$ such that $v \in \mathcal{T}^i \triangle \mathcal{U}^i$ but $v \notin \mathcal{T}^j \triangle \mathcal{U}^j$. We show any proper isomorphism $f$ from $G(F)$ to $\operatorname{twist}(G(F), \mathcal{E})$ cannot satisfy both $f(u, \mathcal{T}^i) = (u, \mathcal{U}^i)$ and $f(u, \mathcal{T}^j) = (u, \mathcal{U}^j)$. Let $f(v, \emptyset) = (v, \mathcal{V})$. This is simply due to the following fact:

- If $f(u, \mathcal{T}^i) = (u, \mathcal{U}^i)$, then by definition of isomorphism we have $u \in \emptyset \leftrightarrow v \in \mathcal{T}^i \iff u \in \mathcal{V} \leftrightarrow v \in \mathcal{U}^i$. Since $v \in \mathcal{T}^i \triangle \mathcal{U}^i$, we obtain $u \in \mathcal{V}$;
- If $f(u, \mathcal{T}^j) = (u, \mathcal{U}^j)$, then by definition of isomorphism we have $u \in \emptyset \leftrightarrow v \in \mathcal{T}^j \iff u \in \mathcal{V} \leftrightarrow v \in \mathcal{U}^j$. Since $v \notin \mathcal{T}^j \triangle \mathcal{U}^j$, we obtain $u \notin \mathcal{V}$.

This yields a contradiction and concludes the proof.

We finally consider the most general setting. Our result is presented as follows:

Corollary I.11. Let $G(F) = (\mathcal{V}_G, \mathcal{E}_G)$ be either the original or twisted Fürer graph of $F = (\mathcal{V}_F, \mathcal{E}_F)$. Let $\{(u_i, \mathcal{T}_i)\}_{i=1}^k \subset \mathcal{V}_G$ and $\{(u_i, \mathcal{U}_i)\}_{i=1}^k \subset \mathcal{V}_G$ be two vertex sets of $G(F)$. Define $\mathcal{R}(u) := \{(\mathcal{T}_i, \mathcal{U}_i) : u_i = u\}$ and let $S = \{u_i : i \in [k]\}$.
The following two items are equivalent:

- There exists a proper isomorphism $f$ from graph $G(F)$ to graph $\operatorname{twist}(G(F), \mathcal{E})$ for some $\mathcal{E} \subset \mathcal{E}_F$ with $|\mathcal{E}| \bmod 2 = 0$, such that $f(u_i, \mathcal{T}_i) = (u_i, \mathcal{U}_i)$ for all $i \in [k]$.
- For all $v \in \mathcal{V}_F$, there exists $\mathcal{D}_v \subset \mathcal{N}_F(v)$ such that $\mathcal{T}_i \triangle \mathcal{U}_i = \mathcal{D}_v$ holds for all $(\mathcal{T}_i, \mathcal{U}_i) \in \mathcal{R}(v)$.

Moreover, if the first item holds, then for any $\tilde{\mathcal{E}}\subset \mathcal{E}_F$, there exists a proper isomorphism $\tilde{f}$ from $G(F)$ to $\mathrm{twist}(G(F),\tilde{\mathcal{E}})$ such that $\tilde{f}(u_{i},\mathcal{T}_{i}) = (u_{i},\mathcal{U}_{i})$ for all $i\in [k]$, if and only if $|\mathcal{P}\cap \tilde{\mathcal{E}}|\equiv |\mathcal{P}\cap \mathcal{E}| \pmod{2}$ for all $\mathcal{P}\in \mathbb{C}\mathbb{C}_S(F)$.

Proof. First assume the second item does not hold for some $v \in \mathcal{V}_F$. Then by Lemma I.10, there does not exist a proper isomorphism $f$ from graph $G(F)$ to some $\operatorname{twist}(G(F), \mathcal{E})$ such that $f(v, \mathcal{T}_i) = (v, \mathcal{U}_i)$ for all $(\mathcal{T}_i, \mathcal{U}_i) \in \mathcal{R}(v)$. Clearly, the first item of Corollary I.11 does not hold either.

Now assume the second item holds. For each $v \in S$, pick an arbitrary element in $\mathcal{R}(v)$, denoted as $(\mathcal{T}^v, \mathcal{U}^v)$. We can then invoke Theorem I.9 with $\{\mathcal{T}^v\}_{v \in S}$ and $\{\mathcal{U}^v\}_{v \in S}$. Denote $f$ as the proper isomorphism from $G(F)$ to $\mathrm{twist}(G(F), \mathcal{E})$ for some $\mathcal{E}$ with $|\mathcal{E}| \bmod 2 = 0$ returned by Theorem I.9, such that $f(v, \mathcal{T}^v) = (v, \mathcal{U}^v)$ for all $v \in S$.
It remains to prove that for all other elements $(\mathcal{T}_i, \mathcal{U}_i) \notin \{(\mathcal{T}^v, \mathcal{U}^v) : v \in S\}$ , we still have $f(u_i, \mathcal{T}_i) = (u_i, \mathcal{U}_i)$ . + +Observe that the construction of $f$ in Theorem I.9 has the form $f(w, \mathcal{W}) = (w, \mathcal{W} \triangle \mathcal{D}_w)$ for all $(w, \mathcal{W}) \in \mathcal{V}_G$ where $\mathcal{D}_w$ is a fixed set for each $w \in \mathcal{V}_F$ (which can be seen from the proof of Theorem I.9). Under the notation above, we have $\mathcal{T}^v \triangle \mathcal{D}_v = \mathcal{U}^v$ for all $v \in S$ . If $f(u_i, \mathcal{T}_i) \neq (u_i, \mathcal{U}_i)$ for some $i$ , then $\mathcal{T}_i \triangle \mathcal{D}_{u_i} \neq \mathcal{U}_i$ . This implies that $\mathcal{T}^{u_i} \triangle \mathcal{U}^{u_i} \neq \mathcal{T}_i \triangle \mathcal{U}_i$ , which contradicts the second item of Corollary I.11 and concludes the proof. + +Before closing this subsection, we finally introduce a notion called proper Fürer graphs, which will be widely used in the subsequent analysis. + +Definition I.12 (Proper Fürer graphs). A Fürer graph $G(F)$ is called proper, if the base graph $F = (\mathcal{V}_F, \mathcal{E}_F)$ has the following properties: + +- $F$ is a connected graph and the degree of any vertex $u \in \mathcal{V}_F$ is at least two; +- There is at least one vertex $u \in \mathcal{V}_F$ with a degree of at least three. + +Proposition I.13. Let $G(F)$ and $H(F)$ be any proper Fürer graph and its twisted graph, respectively. Then both $G(F)$ and $H(F)$ are connected, and the degree of any vertex in both $G(F)$ and $H(F)$ is at least two. + +Proof.
By definition of (twisted) Fürer graphs, the degree of a vertex $(u,\mathcal{U})$ in $G(F)$ is $\sum_{v\in \mathcal{N}_F(u)}|\{(v,\mathcal{V})\in \mathsf{Meta}_F(v): u\in \mathcal{V}\leftrightarrow v\in \mathcal{U}\} |.$ By the assumption that $|\mathcal{N}_F(v)|\geq 2$ for all $v\in \mathcal{V}_F$ , the degree of a vertex $(u,\mathcal{U})$ is always $\sum_{v\in \mathcal{N}_F(u)}2^{|\mathcal{N}_F(v)| - 2}\geq \sum_{v\in \mathcal{N}_F(u)}1\geq 2.$ The case of $H(F)$ is similar. Thus all vertices in both $G(F)$ and $H(F)$ have a degree of at least two. + +We next investigate the connectivity of $G(F)$ and $H(F)$ . Denote by $u$ any vertex of the base graph $F$ with a degree of at least 3 (which exists since $G(F)$ is proper). We first show that any two vertices $(u, \mathcal{T})$ , $(u, \mathcal{U}) \in \mathcal{V}_G$ satisfying $|\mathcal{T} \triangle \mathcal{U}| = 2$ are in the same connected component. To see this, pick any $w \in \mathcal{N}_F(u) \backslash (\mathcal{T} \triangle \mathcal{U})$ (which exists since $|\mathcal{N}_F(u)| \geq 3$ ), and consider the two vertices $(w, \emptyset)$ and $(w, \mathcal{W})$ satisfying $u \in \mathcal{W}$ (the existence of $\mathcal{W}$ is due to $w \in \mathcal{N}_F(u)$ and $|\mathcal{N}_F(w)| \geq 2$ ). Then, + +- If $\{(w, \emptyset), (u, \mathcal{T})\}$ is an edge of $G(F) / H(F)$ , then $\{(w, \emptyset), (u, \mathcal{U})\}$ is an edge of $G(F) / H(F)$ (because $w \in \mathcal{T}$ if and only if $w \in \mathcal{U}$ ); +- If $\{(w, \emptyset), (u, \mathcal{T})\}$ is not an edge of $G(F) / H(F)$ , then $\{(w, \mathcal{W}), (u, \mathcal{T})\}$ is an edge of $G(F) / H(F)$ (because $u \notin \emptyset$ but $u \in \mathcal{W}$ ). Therefore, $\{(w, \mathcal{W}), (u, \mathcal{U})\}$ is an edge of $G(F) / H(F)$ (because $w \in \mathcal{T}$ if and only if $w \in \mathcal{U}$ ). + +In both cases, there is a path from $(u,\mathcal{T})$ to $(u,\mathcal{U})$ .
Next, we can simply remove the assumption $|\mathcal{T}\triangle \mathcal{U}| = 2$ : since $|\mathcal{T}\triangle \mathcal{U}|$ is always even, we can move from $(u,\mathcal{T})$ to $(u,\mathcal{U})$ through a sequence of vertices in $\operatorname{Meta}_F(u)$ whose consecutive sets differ in exactly two elements, so any $(u,\mathcal{T}), (u,\mathcal{U}) \in \mathcal{V}_G$ are also in the same connected component. Finally, for any vertex $(x,\mathcal{X})$ in graph $G(F) / H(F)$ , there is a path from $(x,\mathcal{X})$ to some vertex in $\operatorname{Meta}_F(u)$ since $F$ is connected. Using $\operatorname{Meta}_F(u)$ as a "transit set", we have proved that the graph $G(F) / H(F)$ is connected. + +# I.2. Simplified pebbling games for Fürer graphs + +In Section 6, we developed a unified framework for analyzing all types of WL algorithms based on pebbling games. While the game viewpoint provides interesting and novel insights into the power of different algorithms, it is still quite challenging to directly apply such games to Fürer graphs due to their sophisticated structure. In this subsection, we propose a class of simplified pebbling games motivated by Fürer (2001), which makes our analysis much easier. + +We begin by introducing the augmented Fürer graphs defined as follows: + +Definition I.14 (Augmented Fürer graphs). Let $G(F) = (\mathcal{V}_G, \mathcal{E}_G)$ be either the original or twisted proper Fürer graph of $F = (\mathcal{V}_F, \mathcal{E}_F)$ where $\mathcal{V}_F = [n]$ . Let $\tilde{G}(F)$ be the graph augmented from $G(F)$ in the following way: for each $u \in \mathcal{V}_F$ , add a chain $C_u$ of length $u + 1$ and link one endpoint of $C_u$ to all vertices in $\operatorname{Meta}_F(u)$ (see Figure 3 for an illustration). The vertices on the chains are called auxiliary vertices. By construction, each vertex $\xi$ in $\tilde{G}(F)$ is associated with a vertex $u$ in the base graph. We call $u$ the base vertex of $\xi$ and denote $B(\xi) = u$ . + +The main motivation of Definition I.14 is that the added auxiliary vertices help distinguish the sets $\mathsf{Meta}_F(u)$ for different $u$ , since the lengths of the chains $C_u$ are different.
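To make the degree computation in the proof of Proposition I.13 above concrete, the following is a minimal sketch of the (twisted) Fürer construction, assuming the standard CFI-style definition consistent with this appendix: $\operatorname{Meta}_F(u)$ contains one copy $(u,\mathcal{U})$ of $u$ for each even-size subset $\mathcal{U}\subseteq \mathcal{N}_F(u)$, and $\{(u,\mathcal{U}),(v,\mathcal{V})\}$ is an edge iff $\{u,v\}\in\mathcal{E}_F$ and $u\in\mathcal{V}\leftrightarrow v\in\mathcal{U}$, with the condition negated on twisted edges. All function names here are ours, not the paper's:

```python
from itertools import combinations

def even_subsets(s):
    s = sorted(s)
    return [frozenset(c) for k in range(0, len(s) + 1, 2)
            for c in combinations(s, k)]

def furer_graph(adj, twisted=frozenset()):
    # Meta_F(u): one copy of u per even-size subset of its neighbourhood.
    meta = {u: [(u, U) for U in even_subsets(adj[u])] for u in adj}
    edges = set()
    for u in adj:
        for v in adj[u]:
            if u >= v:
                continue
            flip = frozenset((u, v)) in twisted  # twisting negates the test
            for (_, U) in meta[u]:
                for (_, V) in meta[v]:
                    if ((u in V) == (v in U)) != flip:
                        edges.add(frozenset(((u, U), (v, V))))
    return meta, edges

# Diamond base graph: proper in the sense of Definition I.12
# (connected, all degrees >= 2, vertices 1 and 2 have degree 3).
base = {0: {1, 2}, 1: {0, 2, 3}, 2: {0, 1, 3}, 3: {1, 2}}
meta, edges = furer_graph(base)
degree = {}
for e in edges:
    for x in e:
        degree[x] = degree.get(x, 0) + 1
# Degree formula from the proof of Proposition I.13:
# deg(u, U) = sum over v in N(u) of 2^(|N(v)| - 2).
assert all(degree[(u, U)] == sum(2 ** (len(base[v]) - 2) for v in base[u])
           for u in meta for (_, U) in meta[u])
```

On this base graph the formula gives $2^{1}+2^{1}=4$ for the degree-2 vertices and $2^{0}+2^{1}+2^{0}=4$ for the degree-3 vertices; twisting any single base edge leaves all degrees unchanged, matching the claim that $H(F)$ behaves identically.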
Indeed, let A be any SWL algorithm containing a local aggregation or any FWL-type algorithm, and consider playing the pebbling game for A on augmented Fürer graphs $\tilde{G}(F)$ and its twisted version $\tilde{H}(F)$ . We have the following result, which shows that Duplicator's best strategy is to match the base vertices for each pair of pebbles. + +Lemma I.15. Consider the pebbling game for any WL algorithm $\mathsf{A}$ played on graphs $\tilde{G} (F)$ and $\tilde{H} (F)$ . Let $(\xi_{G},\xi_{H})$ be the position of any pebble $u / v$ after any round. If $B(\xi_G)\neq B(\xi_H)$ , then Spoiler can win the remaining game. + +Proof. It suffices to consider the (weakest) vanilla SWL algorithm $\mathsf{SWL}(\mathsf{VS})$ , because Spoiler has more choices to win when considering more powerful WL algorithms. Also, if $B(\xi_G) \neq B(\xi_H)$ holds for pebble $u$ , Spoiler can just move pebble $v$ in $\tilde{G}(F)$ in subsequent rounds so that the position of $v$ eventually coincides with that of pebble $u$ . This is feasible because + +![](images/73783a3eb9e81cbc4b64fd6f95657ffafbe9ea3aa6cdf76f763e44ade52cfe83.jpg) +Figure 3. Illustration of the augmented Fürer graph for the Fürer graph in Figure 2. Here, the nodes in gray regions are the vertices of the original Fürer graph and the other vertices are from the chains. We also use different colors to distinguish different types of edges. + +$\tilde{G}(F)$ is connected (Proposition I.13). Now if the position of pebble $v$ in $\tilde{H}(F)$ does not coincide with that of pebble $u$ , Spoiler already wins. Therefore, in the remaining proof we can assume that $B(\xi_G) \neq B(\xi_H)$ holds for pebble $v$ . + +Without loss of generality, assume $B(\xi_G) \coloneqq v_G < v_H \coloneqq B(\xi_H)$ (note that $\mathcal{V}_F = [n]$ is a set of numbers). Spoiler's strategy is then to move pebble $v$ in $\tilde{G}(F)$ towards the endpoint of the chain $C_{v_G}$ .
Throughout the process, Duplicator has to keep the pebble $v$ in $\tilde{H}(F)$ located on the chain $C_{v_H}$ (otherwise, note that the vertices not from the chains must have a degree of at least three (Proposition I.13), so when the degrees of the vertices occupied by the two pebbles $v$ do not match, Spoiler can win in the next round). When Spoiler finally places pebble $v$ in $\tilde{G}(F)$ on the endpoint of chain $C_{v_G}$ , Duplicator cannot place the other pebble $v$ in $\tilde{H}(F)$ on a vertex of degree 1, so Spoiler can win in the next round. + +Similarly, Duplicator has to ensure that for any pebbles $u / v$ on the two graphs, either they are both placed on the auxiliary vertices, or neither of them is placed on the auxiliary vertices. When they are both placed on the auxiliary vertices, the distance to the corresponding chain endpoint must be the same. It is easy to see that Duplicator can always achieve this goal. Therefore, when Duplicator follows her best strategy, there is no reason for Spoiler to place pebbles on these auxiliary vertices. + +We next consider the case when the positions of both pebbles $u, v$ in $\tilde{G}(F)$ correspond to the same base vertex. We have the following result: + +Lemma I.16. Consider the pebbling game for any WL algorithm A played on graphs $\tilde{G}(F)$ and $\tilde{H}(F)$ . Let $u = (\xi_G, \xi_H)$ and $v = (\eta_G, \eta_H)$ be the positions of pebbles $u$ and $v$ after any round. Assume no pebble is placed on an auxiliary vertex and all pebbles correspond to the same base vertex $w \in \mathcal{V}_F$ . Denote $\xi_G = (w, \mathcal{T}_G)$ , $\xi_H = (w, \mathcal{T}_H)$ , $\eta_G = (w, \mathcal{U}_G)$ , $\eta_H = (w, \mathcal{U}_H)$ . If $\mathcal{T}_G \triangle \mathcal{U}_G \neq \mathcal{T}_H \triangle \mathcal{U}_H$ , Spoiler can win the remaining game. + +Proof. The reason why Spoiler can win is essentially given in the proof of Lemma I.10. Similar to the proof of Lemma I.15, we only consider the (weakest) vanilla SWL algorithm SWL(VS).
Since $\mathcal{T}_G\triangle \mathcal{U}_G\neq \mathcal{T}_H\triangle \mathcal{U}_H$ , there is a vertex $x\in$ $\mathcal{T}_G\triangle \mathcal{T}_H\triangle \mathcal{U}_G\triangle \mathcal{U}_H$ . Clearly, $x\in \mathcal{N}_F(w)$ . Spoiler's strategy is then to move pebble $v$ to any of its neighbors $(x,\tilde{\mathcal{U}}_G)$ for some $\tilde{\mathcal{U}}_G$ , and Duplicator should move the other pebble $v$ to any of its neighbors $(x,\tilde{\mathcal{U}}_H)$ for some $\tilde{\mathcal{U}}_H$ . By definition of Fürer graphs, since $\{(w,\mathcal{U}_G),(x,\tilde{\mathcal{U}}_G)\}$ is an edge of $\tilde{G} (F)$ and $\{(w,\mathcal{U}_H),(x,\tilde{\mathcal{U}}_H)\}$ is an edge of $\tilde{H} (F)$ , we have + +$$ +\begin{array}{l} (x \in \mathcal {U} _ {G} \leftrightarrow w \in \tilde {\mathcal {U}} _ {G}) = (x \in \mathcal {U} _ {H} \leftrightarrow w \in \tilde {\mathcal {U}} _ {H}) \\ \Longleftrightarrow x \in \mathcal {U} _ {G} \triangle \mathcal {U} _ {H} \leftrightarrow w \in \tilde {\mathcal {U}} _ {G} \triangle \tilde {\mathcal {U}} _ {H} \\ \Longleftrightarrow x \notin \mathcal {T} _ {G} \triangle \mathcal {T} _ {H} \leftrightarrow w \in \tilde {\mathcal {U}} _ {G} \triangle \tilde {\mathcal {U}} _ {H} \\ \Longleftrightarrow (x \in \mathcal {T} _ {G} \leftrightarrow w \in \tilde {\mathcal {U}} _ {G}) \neq (x \in \mathcal {T} _ {H} \leftrightarrow w \in \tilde {\mathcal {U}} _ {H}) \\ \end{array} +$$ + +Therefore, the isomorphism type of the two vertex pairs $(\xi_{G},\tilde{\eta}_{G})$ and $(\xi_{H},\tilde{\eta}_{H})$ is not the same, where $\tilde{\eta}_G\coloneqq (x,\tilde{\mathcal{U}}_G)$ and $\tilde{\eta}_H\coloneqq (x,\tilde{\mathcal{U}}_H)$ . Spoiler thus wins the game. + +Lemma I.16 further limits Duplicator's strategy when two pebbles share the same base vertex. Moreover, when Duplicator follows her best strategy, it also implies that Spoiler cannot gain extra advantage when he places multiple pebbles to positions that belong to the same base vertex. 
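The displayed chain of equivalences in the proof of Lemma I.16 is purely propositional, so it can be verified by brute force over truth assignments. Below is a minimal sketch (our own encoding, not from the paper): each membership statement becomes a boolean variable, and XOR (`^`) plays the role of symmetric difference at the level of membership indicators.

```python
from itertools import product

def iff(a, b):
    # Logical biconditional a <-> b.
    return a == b

# Variables: x in T_G, x in T_H, x in U_G, x in U_H, and membership of w
# in the two sets chosen by the players for the new pebble positions.
for xTG, xTH, xUG, xUH, wG, wH in product([False, True], repeat=6):
    if not (xTG ^ xTH ^ xUG ^ xUH):
        continue  # premise: x lies in the 4-fold symmetric difference
    line1 = iff(iff(xUG, wG), iff(xUH, wH))
    line2 = iff(xUG ^ xUH, wG ^ wH)
    line3 = iff(not (xTG ^ xTH), wG ^ wH)
    line4 = iff(xTG, wG) != iff(xTH, wH)
    assert line1 == line2 == line3 == line4
```

All 32 assignments satisfying the premise pass, confirming that the four displayed lines are equivalent step by step.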
Actually, we will show below that the strategy for both Spoiler and Duplicator can be reduced to focusing only on the base vertices on which pebbles $u, v$ are placed. In particular, this results in a simplified pebbling game defined as follows. + +Simplified pebbling game for augmented Fürer graphs. Let $F = (\mathcal{V}_F, \mathcal{E}_F)$ be the base graph of a proper Fürer graph. The simplified pebbling game is played on $F$ . There are three pebbles $u, v, w$ of different types. Initially, all three pebbles are left outside the graph $F$ . We first describe the game rule for Spoiler, which is similar to that in Section 6 but is much simpler. + +First consider SWL algorithms $\mathsf{A}(\mathcal{A},\mathsf{Pool})$ . If $\mathsf{Pool} = \mathsf{VS}$ , Spoiler first places pebble $u$ on any vertex of $F$ and then places pebble $v$ on any vertex of $F$ . If $\mathsf{Pool} = \mathsf{SV}$ , Spoiler first places pebble $v$ and then places pebble $u$ . + +The game then cyclically executes the following process. Depending on the aggregation scheme $\mathcal{A}$ , Spoiler can freely choose one of the following ways to play: + +- Local aggregation $\mathsf{agg}_{\mathsf{u}}^{\mathsf{L}} \in \mathcal{A}$ . Spoiler first places pebble $w$ adjacent to the vertex occupied by pebble $v$ , then swaps pebbles $v$ and $w$ , and finally places pebble $w$ outside the graph $F$ . +- Global aggregation $\mathsf{agg}_{\mathsf{u}}^{\mathsf{G}} \in \mathcal{A}$ . Spoiler first places pebble $w$ on any vertex of $F$ , then swaps pebbles $v$ and $w$ , and finally places pebble $w$ outside $F$ . +- Single-point aggregation $\mathsf{agg}_{\mathsf{uu}}^{\mathsf{P}} \in \mathcal{A}$ . Spoiler first places pebble $w$ to the position of pebble $u$ , then swaps pebbles $v$ and $w$ , and finally places pebble $w$ outside $F$ . +- Single-point aggregation $\mathsf{agg}_{\mathsf{vu}}^{\mathsf{P}} \in \mathcal{A}$ . Spoiler swaps the position of pebbles $u$ and $v$ .
+ +The cases of $\mathsf{agg}_{\mathsf{v}}^{\mathsf{L}}, \mathsf{agg}_{\mathsf{v}}^{\mathsf{G}}, \mathsf{agg}_{\mathsf{v}}^{\mathsf{P}}$ are similar (symmetric) to $\mathsf{agg}_{\mathsf{u}}^{\mathsf{L}}, \mathsf{agg}_{\mathsf{u}}^{\mathsf{G}}, \mathsf{agg}_{\mathsf{uu}}^{\mathsf{P}}$ , so we omit them for clarity. + +Next consider FWL-type algorithms. Initially, Spoiler simultaneously places pebbles $u$ and $v$ on two vertices of $F$ . The game then cyclically executes the following process. For LFWL(2), Spoiler first places pebble $w$ on some vertex in $\mathcal{N}_F^1(v)$ , then either swaps pebbles $v$ , $w$ or swaps pebbles $u$ , $w$ , and finally places $w$ outside the graph $F$ . The cases of SLFWL(2) and FWL(2) are similar, except that $\mathcal{N}_F^1(v)$ is replaced by $\mathcal{N}_F^1(u) \cup \mathcal{N}_F^1(v)$ and $\mathcal{V}_F$ , respectively. + +We next describe the game rule for Duplicator, which is of a very different kind. In brief, she maintains a subset $\mathcal{Q} \subset \mathbb{C}\mathbb{C}_{\mathcal{S}}(F)$ of connected components (Definition I.8), where the set $\mathcal{S}$ contains the vertices of $F$ on which the pebbles $u, v, w$ are currently located. Initially, $\mathcal{Q} \coloneqq \mathbb{C}\mathbb{C}_{\emptyset}(F) = \{\mathcal{E}_F\}$ . Note that throughout the game, Spoiler only performs three types of basic operations: (i) add one or two pebbles and place them on vertices of $F$ ; (ii) remove a pebble and leave it outside the graph $F$ ; (iii) swap the positions of two pebbles. Once Spoiler performs an operation above, Duplicator will update $\mathcal{Q}$ according to the following rules, so that $|\mathcal{Q}|$ is always odd throughout the game. + +- When Spoiler places some pebble(s) on vertices of $F$ , there are two cases. If $\mathbb{C}\mathbb{C}_{\mathcal{S}}(F)$ does not change, then Duplicator does nothing.
Otherwise, the presence of new pebbles will split some connected components into smaller ones. For each original connected component $\mathcal{P} \subset \mathcal{E}_F$ that is split into $\mathcal{P}_1, \dots, \mathcal{P}_k$ with $\bigcup_{i=1}^k \mathcal{P}_i = \mathcal{P}$ , Duplicator can replace $\mathcal{Q}$ by $\tilde{\mathcal{Q}} = (\mathcal{Q} \backslash \{\mathcal{P}\}) \cup \{\mathcal{P}_{j_1}, \dots, \mathcal{P}_{j_l}\}$ for some $j_1, \dots, j_l \in [k]$ , such that $|\tilde{\mathcal{Q}}| \mod 2 = 1$ . In other words, Duplicator updates the set $\mathcal{Q}$ by removing the old connected component $\mathcal{P}$ (if it is in the set) and adding some new partitioned components, while ensuring that the parity of the size of $\mathcal{Q}$ does not change. +- When Spoiler removes a pebble and leaves it outside the graph $F$ , there are also two cases. If $\mathbb{C}\mathbb{C}_{\mathcal{S}}(F)$ does not change, then Duplicator does nothing. Otherwise, the removal of a pebble will merge several connected components $\mathcal{P}_1,\dots ,\mathcal{P}_k$ into a larger one $\mathcal{P} = \bigcup_{i = 1}^{k}\mathcal{P}_{i}$ . Duplicator then replaces $\mathcal{Q}$ by either $\tilde{\mathcal{Q}} = \mathcal{Q}\backslash \{\mathcal{P}_1,\dots ,\mathcal{P}_k\}$ or $\tilde{\mathcal{Q}} = (\mathcal{Q}\backslash \{\mathcal{P}_1,\dots ,\mathcal{P}_k\})\cup \{\mathcal{P}\}$ , depending on which one satisfies $|\tilde{\mathcal{Q}} |\mod 2 = 1$ . In other words, Duplicator updates the set $\mathcal{Q}$ by removing these small connected components and optionally adding the merged component to preserve parity.
+ +- When Spoiler swaps the positions of two pebbles, $\mathbb{C}\mathbb{C}_{\mathcal{S}}(F)$ clearly does not change, and thus Duplicator does nothing. + +For the case of local aggregation $\mathsf{agg}_{\mathsf{u}}^{\mathsf{L}}$ , there is an extra constraint for Duplicator: after Spoiler places pebble $w$ adjacent to $v$ and Duplicator updates $\mathcal{Q}$ , Duplicator should additionally ensure that $\{\{v, w\}\} \notin \mathcal{Q}$ . A similar game rule applies for local aggregation $\mathsf{agg}_{\mathsf{v}}^{\mathsf{L}}$ . + +After any round, Spoiler wins if pebble $u$ is adjacent to $v$ and $\{\{u,v\}\} \in \mathcal{Q}$ . In other words, Spoiler wins if there is a connected component in $\mathcal{Q}$ with only one edge. Finally, Duplicator wins if Spoiler cannot win after any number of rounds. + +Below, we will prove that the simplified pebbling game designed above is actually equivalent to the original pebbling game. Importantly, the simplified pebbling game is played on the base graph $F$ rather than the sophisticated (augmented) Fürer graphs and avoids the complicated vertex selection procedure (Definition 6.1), which greatly eases the analysis of players' strategies. + +Theorem I.17. Let $\tilde{G}(F)$ and $\tilde{H}(F)$ be any augmented proper Fürer graph and its twisted version for some base graph $F$ . For any WL algorithm $\mathsf{A}$ considered in this paper, Spoiler can win the corresponding pebbling game on graphs $\tilde{G}(F)$ and $\tilde{H}(F)$ if and only if he can win the simplified pebbling game on graph $F$ . + +Proof. In the original pebbling game, let $\tilde{H}(F) = \mathrm{twist}(\tilde{G}(F), \mathcal{E})$ for some $\mathcal{E}$ with $|\mathcal{E}| \bmod 2 = 1$ .
Based on Lemma I.15, after any round we can assume that the pebbles $u, v$ are placed on $u = (\xi_G, \xi_H)$ , $v = (\eta_G, \eta_H)$ with matching base vertices, i.e., we can denote $\xi_G = (x, \mathcal{T}_x)$ , $\xi_H = (x, \mathcal{U}_x)$ , $\eta_G = (y, \mathcal{T}_y)$ , $\eta_H = (y, \mathcal{U}_y)$ . We also assume that the condition of Lemma I.16 holds when $x = y$ . The proof is divided into the following parts. + +Part 1 (understanding the relationship between the two types of pebbling games). Consider two game states with different pebble positions: + +- State 1: the positions of pebbles are $u = ((x, \mathcal{T}_x), (x, \mathcal{U}_x^{(1)}))$ , $v = ((y, \mathcal{T}_y), (y, \mathcal{U}_y^{(1)}))$ ; +- State 2: the positions of pebbles are $u = ((x, \mathcal{T}_x), (x, \mathcal{U}_x^{(2)}))$ , $v = ((y, \mathcal{T}_y), (y, \mathcal{U}_y^{(2)}))$ . + +In other words, the positions of pebbles on graph $\tilde{G}(F)$ are the same for the two states, but the positions of pebbles on graph $\tilde{H}(F)$ differ. By Corollary I.11, there is a proper isomorphism $f$ from $\tilde{H}(F)$ to $\mathrm{twist}(\tilde{H}(F),\tilde{\mathcal{E}})$ for some $\tilde{\mathcal{E}}$ with $|\tilde{\mathcal{E}}| \bmod 2 = 0$ , such that $f(x,\mathcal{U}_x^{(1)}) = (x,\mathcal{U}_x^{(2)})$ and $f(y,\mathcal{U}_y^{(1)}) = (y,\mathcal{U}_y^{(2)})$ . Note that the second bullet of Corollary I.11 is satisfied since we assume that Duplicator follows the strategy of Lemma I.16 and thus $\mathcal{U}_x^{(1)}\triangle \mathcal{U}_y^{(1)} = \mathcal{T}_x\triangle \mathcal{T}_y = \mathcal{U}_x^{(2)}\triangle \mathcal{U}_y^{(2)}$ when $x = y$ . 
Now using Corollary I.11 again, there is a proper automorphism $\tilde{f}$ of $\tilde{H}(F)$ satisfying $\tilde{f}(x,\mathcal{U}_x^{(1)}) = (x,\mathcal{U}_x^{(2)})$ and $\tilde{f}(y,\mathcal{U}_y^{(1)}) = (y,\mathcal{U}_y^{(2)})$ , if and only if $|\tilde{\mathcal{E}}\cap \mathcal{P}|\bmod 2 = 0$ for all $\mathcal{P}\in \mathbb{C}\mathbb{C}_{\{x,y\}}(F)$ . In other words, if $|\tilde{\mathcal{E}}\cap \mathcal{P}|\bmod 2 = 0$ for all $\mathcal{P}\in \mathbb{C}\mathbb{C}_{\{x,y\}}(F)$ , then the two states are equivalent. + +Similarly, since $\tilde{H}(F) = \mathrm{twist}(\tilde{G}(F), \mathcal{E})$ , one can also find for each $i = 1,2$ a proper isomorphism $f_i$ from $\tilde{G}(F)$ to $\mathrm{twist}(\tilde{H}(F), \tilde{\mathcal{E}}_i)$ , such that $f_i(x, \mathcal{T}_x) = (x, \mathcal{U}_x^{(i)})$ and $f_i(y, \mathcal{T}_y) = (y, \mathcal{U}_y^{(i)})$ . Based on the above analysis, whether Spoiler can win the game at state $i$ will thus depend purely on the set + +$$
\begin{array}{l} \mathcal{Q}_i = \left\{\mathcal{P} \in \mathbb{C}\mathbb{C}_{\{x, y\}}(F): \left| \tilde{\mathcal{E}}_i \cap \mathcal{P} \right| \bmod 2 = 0 \right\} \\ \quad = \left\{\mathcal{P} \in \mathbb{C}\mathbb{C}_{\{x, y\}}(F): | (\mathcal{E} \triangle \tilde{\mathcal{E}}_i) \cap \mathcal{P} | \bmod 2 = 1 \right\}. \\ \end{array}
$$ + +Namely, if $\mathcal{Q}_1 = \mathcal{Q}_2$ , then the two states are equivalent. This is why in the simplified pebbling game Duplicator only maintains the set $\mathcal{Q}$ , which has a similar meaning to $\mathcal{Q}_i$ . + +Part 2 (regarding vertex selection). We show that the vertex selection in Definition 6.1 can be simplified to satisfy $|S^{\mathrm{S}}| = |S^{\mathrm{D}}| = 1$ . First, when $S^{\mathrm{S}}$ contains multiple vertices that correspond to different base vertices in $F$ , Duplicator must respond by matching each base vertex separately and merging them to obtain $S^{\mathrm{D}}$ , otherwise Spoiler can win according to Lemma I.15.
When Duplicator follows this strategy, there is no reason for Spoiler to choose multiple base vertices. Next, when $S^{\mathrm{S}}$ contains multiple vertices that correspond to the same base vertex in $F$ , Duplicator must match each $(x, \mathcal{X}) \in S^{\mathrm{S}}$ with $(x, \mathcal{X} \triangle \mathcal{D}) \in S^{\mathrm{D}}$ by selecting a set $\mathcal{D}$ (according to Lemma I.16). In this case, Spoiler still does not gain an additional benefit by selecting $S^{\mathrm{S}}$ with multiple elements. Moreover, it does not make any difference whether Spoiler chooses to move pebbles on $\tilde{G}(F)$ or on $\tilde{H}(F)$ . Therefore, the pebbling game can be simplified so that Spoiler directly moves a pebble in $\tilde{G}(F)$ and Duplicator responds by moving the corresponding pebble in $\tilde{H}(F)$ . (Nevertheless, note that the vertex selection procedure is still necessary when dealing with auxiliary vertices as in the proof of Lemma I.15.) + +Part 3 (equivalence between updating pebble positions and updating $\mathcal{Q}$ for Duplicator). Suppose that in a certain round Spoiler places a pebble $w$ on vertex $(z, \mathcal{T}_z)$ in $\tilde{G}(F)$ . If the placement of $w$ does not increase the number of connected components, then no matter how Duplicator responds by placing the other pebble $w$ on $(z, \mathcal{U}_z)$ in $\tilde{H}(F)$ , the game is equivalent due to Part 1, and the set $\mathcal{Q}$ should not change, which coincides with the game rule. If the placement of $w$ increases the number of connected components, then how Duplicator chooses the position of the other pebble $w$ will matter. Suppose Duplicator places the other pebble $w$ on the vertex $(z, \mathcal{U}_z)$ ; then the value of $\mathcal{T}_z \triangle \mathcal{U}_z$ determines the update of $\mathcal{Q}$ by Corollary I.11. Conversely, each possible game rule for updating $\mathcal{Q}$ also corresponds to at least one feasible position $(z, \mathcal{U}_z)$ .
+ +For the local aggregation $\mathsf{agg}_{\mathsf{u}}^{\mathsf{L}}$ , there is an additional restriction that the pebble $w$ should be adjacent to pebble $v$ . Clearly, the presence of $w$ will create a new connected component $\{\{v,w\}\}$ . It is easy to see that $\{\{v,w\}\} \notin \mathcal{Q}$ , for otherwise pebble $w$ would not be adjacent to pebble $v$ in $\tilde{H}(F)$ . For localized FWL aggregations, although pebble $w$ should also be placed in the neighborhood of some pebble (e.g., $w \in \mathcal{N}_{\tilde{G}(F)}^{1}(v)$ ), we need not add this restriction for Duplicator, because if Duplicator does not obey the game rule, Spoiler can always win after this round by swapping a pair of pebbles (e.g., swapping $u$ and $w$ ) such that the isomorphism types of pebbles $u$ and $v$ differ between $\tilde{G}(F)$ and $\tilde{H}(F)$ . + +Similarly, when Spoiler places a pebble $w$ outside the graph $\tilde{G}(F)$ , the connected components may merge, and $\mathcal{Q}$ should be updated accordingly while preserving the parity of its size. This matches the design of the simplified pebbling game. Finally, if Spoiler swaps a pair of pebbles, all connected components remain unchanged, so Duplicator does nothing in the simplified pebbling game. + +# I.3. Concrete constructions + +In this section, we give concrete constructions to prove all results of Theorem 7.1. We split the proof into a collection of lemmas. All the proofs are based on constructing base graphs $F$ and studying the simplified pebbling game developed in Appendix I.2 on $F$ . + +Illustration. For clarity, we illustrate the proof of each lemma with a set of figures (Figures 4 to 11). In each of these figures, the node in orange/green/purple corresponds to the vertex that holds pebble $u / v / w$ , respectively. We use bold red edges to denote the connected components in $\mathcal{Q}$ chosen by Duplicator. + +Lemma I.18.
There exist two non-isomorphic graphs such that + +- SWL(SV) can distinguish them; +- SWL(VS) cannot distinguish them; +- PSWL(VS) cannot distinguish them. + +Proof. The base graph is constructed in Figure 4. We separately consider each algorithm. + +We first analyze the simplified pebbling game for algorithm SWL(VS). Initially, Spoiler should first place pebble $u$ on some vertex. Due to the symmetry of the graph, there are three cases: vertex 4, vertex 2, and vertex 1. We separately consider each case below: + +- If Spoiler places pebble $u$ on vertex 4, then the graph is split into two connected components. By symmetry, without loss of generality suppose Duplicator selects the component to the right of $u$ (Figure 4(a)). Next, Spoiler will place pebble $v$ on some vertex. Clearly, his best strategy is to choose vertex 5 (or equivalently, vertex 6), which can further split the connected component into two parts. Duplicator has to respond by choosing the larger part (Figure 4(b)). In the next round, according to the game rule, Spoiler should place pebble $w$ adjacent to pebble $v$ . His best choice is vertex 6. Duplicator can respond appropriately without losing the game (Figure 4(c)). Then Spoiler swaps pebbles $v$ and $w$ and leaves $w$ outside the graph. It can be seen that multiple connected components are then merged into a larger one, yielding Figure 4(d). Now the game state is equivalent to Figure 4(b) by symmetry. It is easy to see that Spoiler can never win the game. +- If Spoiler places pebble $u$ on vertex 2, then the connected component remains unchanged, so Duplicator just does nothing (Figure 4(e)). Next, Spoiler will place pebble $v$ on some vertex, e.g., vertex 3 or vertex 4. Regardless of where he places pebble $v$ , Duplicator's strategy is always to choose the rightmost connected component (see Figure 4(f) and Figure 4(h) for the two cases). First consider the case when pebble $v$ is placed on vertex 3 (Figure 4(f)).
In the next round, Spoiler should place pebble $w$ adjacent to pebble $v$ . His best choice is vertex 4, which further splits the connected component. Duplicator just responds by selecting again the rightmost component as shown in Figure 4(g). Spoiler then swaps pebbles $v$ and $w$ and leaves $w$ outside the graph. It can be seen that the game returns to Figure 4(h). When Spoiler continues to place pebble $w$ adjacent to pebble $v$ , Duplicator again responds by updating the connected component (Figure 4(i)). However, when Spoiler swaps pebbles $v$ and $w$ and leaves $w$ outside the graph, multiple connected components then merge into a whole, as shown in Figure 4(j). Clearly, Spoiler cannot win the game in this case either. +- If Spoiler places pebble $u$ on vertex 1, we can similarly prove that Spoiler cannot win the game. Actually, placing pebble $u$ on vertex 1 is clearly not optimal. + +![](images/e79286be02e253264e3af4e85dc9e243642af4752a7b5445b06760aed381635a.jpg) +Figure 4. Illustration of the proof of Lemma I.18. When Duplicator follows her optimal strategy, the game process of SWL(VS) corresponds to a sequence of figures, such as (a, b, c, d, ...), (e, f, g, h, i, j, ...), or (e, h, i, j, ...), depending on how Spoiler plays. In all cases, Spoiler cannot win. The game process of PSWL(VS) is similar. In contrast, the game process of SWL(SV) corresponds to figures (k, l, m, n, o) and Spoiler eventually wins as shown in figure (o). + +We next analyze the simplified pebbling game for algorithm PSWL(VS), which is similar to SWL(VS) except that Spoiler has the additional ability to move pebble $u$ to the position of pebble $v$ . However, when Spoiler performs this operation, the resulting game will simply be equivalent to the three cases studied above, e.g., Figure 4(a) or Figure 4(e), except that pebble $v$ is also present and coincides with $u$ . As already proved above, Spoiler cannot win the game. + +We finally analyze the simplified pebbling game for algorithm SWL(SV).
In the beginning, Spoiler can first place pebble $v$ on vertex 4, and suppose Duplicator chooses the connected component to the right of $v$ (Figure 4(k)). Spoiler can then place pebble $u$ on vertex 5 to further split this connected component, and Duplicator has to respond by choosing the rightmost component (Figure 4(l)). In the next round, Spoiler can place $w$ on vertex 6. Duplicator has not lost the game yet (see Figure 4(m)). Then comes the major difference: when Spoiler swaps pebbles $v$ and $w$ and leaves $w$ outside the graph, the rightmost connected component is not merged into a larger one due to the positions of pebbles $u, v$ (see Figure 4(n)). Therefore, in the next round, Spoiler can further use pebble $w$ to split the component as shown in Figure 4(o), and Duplicator has no choice other than selecting the connected component $\{\{5,7\}\}$ . Duplicator loses the game after this round. + +Insight into Lemma I.18. The reason why SWL(SV) is stronger lies in the fact that Spoiler can specify the position of pebble $u$ after seeing Duplicator's response, because pebble $v$ is placed before pebble $u$ . In this way, Spoiler can exploit such information to better choose the position of pebble $u$ . Importantly, note that pebble $u$ cannot be moved easily according to the game rule; therefore, determining its position later may bring additional benefits. + +Lemma I.19. There exist two non-isomorphic graphs such that + +- PSWL(VS) can distinguish them; +- SWL(VS) cannot distinguish them; +- SWL(SV) cannot distinguish them. + +Proof. The base graph is constructed in Figure 5, which can be seen as a simple adaptation of Figure 4. + +![](images/c0c69783832c36c6679c4169225ff77e7cf6bab45f306845c165534b8af6b6d8.jpg) +Figure 5. Illustration of the proof of Lemma I.19. When Duplicator follows her optimal strategy, the game process of SWL(VS) (or SWL(SV)) may correspond to figures (a, b, c, ...) or figures (d, e, f, g, h, ...)
depending on how Spoiler chooses the initial positions of pebbles $u$ , $v$ . In both cases, Spoiler cannot win. In contrast, the game process of PSWL(VS) corresponds to figures (d, e, f, i, j, k) and Spoiler eventually wins in figure (k).
+
+We first analyze the simplified pebbling game for algorithm SWL(VS) or SWL(SV). Initially, Spoiler should choose the positions for pebbles $u, v$ . We will show that it does not matter whether $u$ or $v$ is placed first. By symmetry, there are mainly two types of strategies, which we separately investigate below. Other strategies can be analyzed similarly, and we omit the proofs for clarity.
+
+- Strategy 1: Spoiler places pebbles $u$ and $v$ on vertices 1 and 8, respectively. In this case, Duplicator's strategy is to ensure that the middle connected component is selected after pebbles $u, v$ are present, as shown in Figure 5(a). According to the game rule, Spoiler should then place pebble $w$ on some vertex adjacent to $v$ , and clearly, he'd better place $w$ on vertex 4 (or vertex 5 by symmetry). Duplicator can respond appropriately without losing the game (shown in Figure 5(b)). When Spoiler swaps pebbles $v, w$ and leaves $w$ outside the graph, the chosen connected component will be merged (Figure 5(c)). It is easy to see that Spoiler can never win the game after any number of rounds.
+- Strategy 2: Spoiler places pebbles $u$ and $v$ on vertices 4 and 5, respectively. By symmetry, suppose Duplicator chooses the connected component on the right (Figure 5(d)). Then Spoiler should place pebble $w$ on some vertex adjacent to $v$ , and he'd better place $w$ on vertex 8. Duplicator must respond by choosing the rightmost triangle, resulting in Figure 5(e). When Spoiler swaps pebbles $v$ , $w$ and leaves $w$ outside the graph, the triangle component remains unchanged due to the presence of pebble $v$ (Figure 5(f)). In the next round, Spoiler should place pebble $w$ to further split the triangle, as in Figure 5(g).
However, he cannot win: when he swaps pebbles $v$ , $w$ and leaves $w$ outside the graph, all previous components merge into a whole as shown in Figure 5(h), leaving Spoiler with no way to win.
+
+We next turn to algorithm PSWL(VS). Initially, the game is the same as SWL(VS) until reaching the state of Figure 5(f). In the next round, Spoiler can resort to the game rule of $\mathsf{agg}_{\mathsf{vw}}^{\mathsf{P}}$ and move pebble $u$ to the position of pebble $v$ (Figure 5(i)). Now pebble $u$ becomes useful and Spoiler can easily win the remaining game, as in Figure 5(j, k).
+
+Insight into Lemma I.19. The reason why PSWL(VS) is stronger lies in the fact that Spoiler can change the position of pebble $u$ throughout the game process. In contrast, for SWL(VS) and SWL(SV), pebble $u$ has to be kept fixed once it is placed on the graph, which severely limits the utility of pebble $u$ in the subsequent game.
+
+Lemma I.20. There exist two non-isomorphic graphs such that
+
+- GSWL can distinguish them;
+- PSWL(SV) cannot distinguish them.
+
+![](images/a252d012689012a7b8a533f07ee9d6b513738a5a2321c760c0ab7b22e2c44a9f.jpg)
+
+![](images/c50d650635c9cd3b4587b49f9fb0f627d509b98d216631278089f6f8967686af.jpg)
+
+![](images/d4793a2d2be55ddcac3cba54d05b16c80222bf6d3c0c884284e0d6353127f8b4.jpg)
+
+![](images/1c17c36567e43c34f75de35f0ef56455c6fbd40049f60d6dea72b5a34d2fb7b4.jpg)
+
+![](images/16cb8216605e60f325ad51db37a425fb3d4e5151697ff1d4550abd9c580f6883.jpg)
+
+![](images/76c45ffa61b50ff787f9b76bfedecbc8a237f14de9f8aed646f32d748eb63632.jpg)
+
+![](images/cd896ddf4c3ef771967c34f2ad39b340f308c8469b922bc05745f4070931202b.jpg)
+
+![](images/5b553ed17f5abad853232959e1149bec14ddea314e28834ea757c7a7326f66b8.jpg)
+
+![](images/4a339abdaca5e69272484455b66b1af543ea0c443088ca39056665ec123c4571.jpg)
+
+![](images/3f960332a5405862965d215bff3a77d8ab980e97c80b5ebf607e5fce7b49c5fb.jpg)
+Figure 6. Illustration of the proof of Lemma I.20.
When Duplicator follows her optimal strategy, the game process of PSWL(SV) may correspond to figures (a, b, c, d, ...) or figures (a, e, f, g, ...) depending on how Spoiler chooses the initial position of pebble $u$ . In both cases, Spoiler cannot win. In contrast, the game process of GSWL corresponds to figures (a, b, c, h, i, j) and Spoiler eventually wins in figure (j).
+
+![](images/947630fb7dfe02bd0452eb32db32f80c114ffa04e22bad2a38a0867d6f4298e1.jpg)
+
+Proof. The base graph is constructed in Figure 6, which can be seen as a further extension of the counterexample in Figure 4.
+
+We first analyze the simplified pebbling game for algorithm PSWL(SV). Initially, Spoiler should place pebble $v$ on some vertex. We only consider the case of choosing vertex 6 (or equivalently, vertex 4), which is intuitively the best choice. Other choices can be similarly analyzed and we omit them for clarity. Since the presence of $v$ splits the graph into two connected components, Duplicator should select the larger one (Figure 6(a)). Next, Spoiler should place pebble $u$ on some vertex.
+
+- We first consider the case when Spoiler places $u$ on vertex 4, which further splits the left connected component. In this case, Duplicator just selects the left diamond-shaped component (Figure 6(b)). In the next round, Spoiler will play according to Figure 6(c) by placing pebble $w$ on vertex 4 adjacent to $v$ , swapping $v$ and $w$ , and leaving pebble $w$ outside the graph. Duplicator just does nothing. The remaining game is illustrated in Figure 6(d), and the analysis is the same as the previous proof of Lemma I.18. In short, Spoiler can never split the red connected component $\{\{1,2\}, \{1,3\}\}$ shown in Figure 6(d). Note that although Spoiler can additionally use the game rule of single-point aggregation $\mathrm{agg}_{\mathrm{vw}}^{\mathrm{P}}$ , he had better not change the position of $u$ : if he moves pebble $u$ away from vertex 4, the connected component will be merged.
Therefore, Spoiler cannot win the game.
+- Seeing why Spoiler cannot win in Figure 6(d), let us restart from Figure 6(a) with a different strategy. Suppose this time Spoiler places pebble $u$ on vertex 2 (shown in Figure 6(e)). Since the red connected component remains unchanged, Duplicator does nothing. In the next round, Spoiler should place pebble $w$ on vertex 4 adjacent to $v$ , which splits the red connected component in Figure 6(e) into two parts. However, seeing the position of pebble $u$ , this time Duplicator chooses a different strategy: she selects the upper triangle (Figure 6(f)). When Spoiler swaps pebbles $v$ , $w$ and leaves $w$ outside the graph, the upper triangle is merged into a larger connected component (see Figure 6(g)). It is easy to see that Spoiler still cannot win the game after any number of rounds.
+
+We next turn to algorithm GSWL. Initially, the game is the same as PSWL(SV) until reaching the state of Figure 6(c). In the next round, Spoiler can choose a different way to play: according to the game rule of $\mathsf{agg}_{\mathsf{v}}^{\mathsf{G}}$ , Spoiler can place pebble $w$ on vertex 2 and swap $w$ with $u$ . Clearly, Duplicator has to respond by selecting the left connected component as shown in Figure 6(h). Now the remaining game is easy for Spoiler. As illustrated in Figure 6(i) and Figure 6(j), Spoiler can finally win the game.
+
+![](images/9cdf9c1a7ec76522d0eb290d0cc1c835751629664d5adfb90a74ba7fa7b29808.jpg)
+Figure 7. Illustration of the proof of Lemma I.21. When Duplicator follows her optimal strategy, the game process of GSWL may correspond to figures (c, d, e, ...), (c, f, g, ...), (a, b, c, d, e, ...), or (a, b, c, f, g, ...), depending on Spoiler's strategy. In all cases, Spoiler cannot win. In contrast, the game process of SSWL corresponds to figures (a, b, c, d, h, i) and Spoiler eventually wins in figure (i). The game process of LFWL(2) is similar and Spoiler can also win.
+
+Insight into Lemma I.20.
The proof of Lemma I.20 clearly shows why global aggregation is more powerful than the corresponding single-point aggregation (Theorem 4.4).
+
+Lemma I.21. There exist two non-isomorphic graphs such that
+
+- GSWL cannot distinguish them;
+- SSWL can distinguish them;
+- LFWL(2) can distinguish them.
+
+Proof. The base graph is constructed in Figure 7, which is precisely the graph originally analyzed in Fürer (2001) and is often called the Fürer grid graph (Qian et al., 2022, Appendix D).
+
+We first analyze the simplified pebbling game for algorithm GSWL. Depending on how Spoiler chooses the initial positions of pebbles $u$ and $v$ , we mainly consider the two cases illustrated in Figure 7(a) and Figure 7(c) due to the symmetry of the graph. Other cases are clearly not optimal. For the first case, Duplicator will select the larger connected component on the right (Figure 7(a)). In the next round, Spoiler may place pebble $w$ on vertex 6 adjacent to pebble $v$ , and Duplicator updates her selected component accordingly (Figure 7(b)). Spoiler then swaps pebbles $v$ and $w$ and leaves $w$ outside the graph, returning to Figure 7(c). What follows is the central part of the proof. Spoiler should clearly place pebble $w$ on vertex 5 to further split the component selected by Duplicator, but he has two different ways to achieve this:
+
+- He plays according to the game rule of $\mathsf{agg}_{\mathsf{u}}^{\mathsf{L}}$ . Duplicator knows this information and thus responds by selecting the connected component on the right (see Figure 7(d)). Then Spoiler should swap pebbles $w$ and $v$ and leave $w$ outside the graph. However, this will merge multiple components as shown in Figure 7(e). Clearly, Spoiler cannot win the subsequent game.
+- A better choice would be to follow the game rule of $\mathsf{agg}_{\mathsf{v}}^{\mathsf{G}}$ because it allows him to move pebble $u$ to vertex 5 after this round.
However, Duplicator knows this information and thus responds differently: she selects the connected component $\{\{3,5\}\}$ containing only one edge (see Figure 7(f)). Note that Duplicator does not lose the game, because for global aggregation Duplicator can freely choose a connected component of one edge (while for local aggregation she cannot). Now, when Spoiler swaps pebbles $w$ and $u$ and leaves $w$ outside the graph, the component $\{\{3,5\}\}$ is merged into a larger component, as shown in Figure 7(g), which is equivalent to Figure 7(a) by symmetry. Again, Spoiler cannot win the subsequent game.
+
+We next turn to algorithm SSWL. Initially, the game is the same as GSWL until reaching the state of Figure 7(d). Now it comes to the major difference: Spoiler can play according to the game rule of $\mathsf{agg}_{\mathsf{v}}^{\mathsf{L}}$ . This time Duplicator can no longer choose the connected component $\{\{3,5\}\}$ since it is prohibited by the game rule. Therefore, her only choice is to select the rightmost component as shown in Figure 7(d). This then yields Figure 7(h) when Spoiler swaps pebbles $u, w$ and leaves $w$ outside the graph.
+
+![](images/43c8d2bf1f60368b61483fbf575f0abae0a1971adbb3d83a76c84f426feaf07c.jpg)
+Figure 8. Illustration of the proof of Lemma I.22. When Duplicator follows her optimal strategy, the game process of SSWL may correspond to figures (a, b, c, ...), and Spoiler cannot win. In contrast, the game process of LFWL(2) or SLFWL(2) corresponds to figures (a, b, d) and Spoiler can eventually win.
+
+![](images/dc9567c90468cf848442e6327d4ab61e7adfb77c300a2232e13f73a5d57910f8.jpg)
+
+![](images/6f358c41aba079045124659bd29ecd384688e6ac203fa00900120fab1345eec7.jpg)
+
+![](images/2a9d8b68517366039fc3d7922aabce0daae8d0006bba4b06cd435aefbe6971ba.jpg)
+
+![](images/ea852fd79e4d3e27374044ed6f3dd71e9b0ed2bb80ffffc5bed55740fec47629.jpg)
The remaining game is quite easy for Spoiler, and he can win by playing according to Figure 7(i).
+
+We finally turn to algorithm LFWL(2). Initially, the game is also the same as GSWL until reaching the state of Figure 7(d). Now it comes to the major difference: Duplicator does not know whether Spoiler will swap pebbles $u, w$ or swap pebbles $v, w$ . Therefore, depending on Duplicator's response, Spoiler can adopt different strategies:
+
+- If Duplicator chooses the rightmost connected component, then Spoiler swaps pebbles $u, w$ . This corresponds to Figure 7(h) when $w$ is left outside the graph, and we have proved that Spoiler can win.
+- If Duplicator chooses the component containing only one edge $\{\{3,5\}\}$ , then Spoiler swaps pebbles $v, w$ . This corresponds to Figure 7(f) and Spoiler already wins after this round.
+- Similarly, if Duplicator chooses the component containing only one edge $\{\{5,6\}\}$ , then Spoiler swaps pebbles $u, w$ and wins after this round.
+
+In all cases, Spoiler has a winning strategy.
+
+Insight into Lemma I.21. The proof of Lemma I.21 shows why local aggregation is more powerful than global aggregation (Theorem 4.4). Importantly, in local aggregation there is an additional constraint that Duplicator cannot choose the connected component containing the neighboring edge. The proof also reveals the power of FWL-type algorithms. Intuitively, in FWL-type algorithms Duplicator cannot "see" Spoiler's strategy before making choices, and thus Spoiler can gain an additional advantage by deliberately playing against Duplicator's strategy.
+
+Based on the proof of Lemma I.21, curious readers may ask whether there is an expressivity relationship between SSWL and LFWL(2). However, below we will show that this is not the case: they are actually incomparable (due to Lemmas I.22 and I.23).
+
+Lemma I.22.
There exist two non-isomorphic graphs such that
+
+- SSWL cannot distinguish them;
+- SLFWL(2) can distinguish them;
+- LFWL(2) can distinguish them.
+
+Proof. The base graph is constructed in Figure 8.
+
+We first analyze the simplified pebbling game for algorithm SSWL. Initially, Spoiler should place pebbles $u$ and $v$ on vertices of the graph. Due to symmetry, we can assume that Spoiler places $u$ on vertex 2 and places $v$ on vertex 3 (other nonequivalent cases are clearly not optimal). Duplicator then selects the bottom connected component split by pebbles $u, v$ (see Figure 8(a)). In the next round, Spoiler should place $w$ on vertex 5 to further split the connected component. By definition of SSWL, he can play according to the rule of either $\mathrm{agg}_{\mathrm{u}}^{\mathrm{L}}$ or $\mathrm{agg}_{\mathrm{v}}^{\mathrm{L}}$ . By symmetry, it suffices to analyze the case of $\mathrm{agg}_{\mathrm{u}}^{\mathrm{L}}$ . Since Duplicator knows that Spoiler plays according to $\mathrm{agg}_{\mathrm{u}}^{\mathrm{L}}$ , she selects the connected component in the lower right corner (see Figure 8(b)). Then, Spoiler should swap pebbles $v, w$ and leave $w$ outside the graph, which leads to the merging of multiple connected components. The resulting game, as illustrated in Figure 8(c), is equivalent to Figure 8(a) by symmetry. Therefore, Spoiler cannot win the game.
+ +![](images/0661026e54df20e1b3015be615a47066c336c1cf56b22244319cc86e72b7876d.jpg) +Base graph + +![](images/2342f4b32675b6724692d55948457d0d6b89f04730244e0d3948db82924dda58.jpg) + +![](images/c7205e572e2971dd34fbb1cc1a419ff98c6198e17c1f55eba5c8521fde2c5406.jpg) +(b) + +![](images/67c13dc13874bd016fcbe63bf9312e8a320f145d519cca2d7391b0f9565c144a.jpg) +(c) + +![](images/61d6bf0f65c75a146b30415524e71bb740db7d7fb2de1b163ca3438cbe758da5.jpg) +(d) + +![](images/b6f4c715edd952232b338954fa0b6916537b47f125aabd5129d7ac3f7c91db2a.jpg) +(a) +(e) + +![](images/c8dc821b3f710b4d6abdc2b7174cc9595384f5d4a731ac1cc988f8bfd4e13420.jpg) +(f) + +![](images/7b202395ed76e3cfedaab7bac01a9c0d24f60627c48c772b14263822525f2339.jpg) +(g) +Figure 9. Illustration of the proof of Lemma I.23. When Duplicator follows her optimal strategy, the game process of LFWL(2) may correspond to figures (a, b, c, ...) or figures (a, b, d, ...) depending on Spoiler's strategy. In both cases, Spoiler cannot win. Similarly, the game process of GSWL may correspond to figures (a, b, c, ...) or figures (a, e, f, ...), and Spoiler still cannot win. In contrast, the game process of SSWL or SLFWL(2) corresponds to figures (a, g, h) and Spoiler can eventually win. + +![](images/aa5539a190e1abf2421fb53389782fd85103ca36489417cf93d0317de705d5ac.jpg) +(h) + +We next analyze the simplified pebbling game for algorithm LFWL(2) or SLFWL(2). Initially, the game is the same as SSWL until reaching the state of Figure 8(b). Now it comes to the major difference: Duplicator does not know whether Spoiler will swap pebbles $u$ , $w$ or swap pebbles $v$ , $w$ . Therefore, Duplicator can only choose either the bottom left component or the bottom right one at random, which is equivalent by symmetry. Spoiler can then play against Duplicator's strategy and swap the pebbles so that after leaving pebble $w$ outside the graph, the connected component selected by Duplicator is not merged, as shown in Figure 8(d). 
Clearly, Spoiler can win the remaining game.
+
+Insight into Lemma I.22. Lemma I.22 shows the inherent advantage of FWL-type algorithms compared with SWL algorithms, answering an open problem raised in Frasca et al. (2022).
+
+Lemma I.23. There exist two non-isomorphic graphs such that
+
+- LFWL(2) cannot distinguish them;
+- SLFWL(2) can distinguish them;
+- GSWL cannot distinguish them;
+- SSWL can distinguish them.
+
+Proof. The base graph is constructed in Figure 9.
+
+We first analyze the simplified pebbling game for algorithm LFWL(2). Initially, Spoiler should place pebbles $u$ and $v$ on vertices of the graph. Due to symmetry, there are mainly two cases we need to consider, as shown in Figure 9(a) and Figure 9(f), respectively. Here, we only consider the case of Figure 9(a), where pebble $u$ is placed on vertex 2 and pebble $v$ is placed on vertex 5; the other case can be analyzed similarly. Duplicator will respond by selecting the top connected component (Figure 9(a)). In the next round, Spoiler should place pebble $w$ adjacent to pebble $v$ . Clearly, he should place it on vertex 3 or 7, which are equivalent by symmetry. Assume that he places $w$ on vertex 7. Duplicator can easily respond by choosing the larger component (Figure 9(b)). According to the game rule of LFWL(2), he can either swap pebbles $v$ , $w$ or swap pebbles $u$ , $w$ , and then leave $w$ outside the graph. As shown in Figure 9(c) and Figure 9(d), in both cases the connected components get merged after leaving pebble $w$ . Clearly, Spoiler cannot win.
+
+![](images/96dd75cd8f25134c7d3200dafb1d739c701250f239e450f0a1dd43f847911a9b.jpg)
+Base graph
+
+![](images/019eb1f4110daf190e0467ffa254dc7c838b68e064dcc7751b4d000246fe2638.jpg)
+(a)
+
+![](images/dc143f353d42705cf85f3cf11eef0c9065ac4e7101e8a50ac9e7b7574709e231.jpg)
+(b)
+Figure 10. Illustration of the proof of Lemma I.24.
When Duplicator follows her optimal strategy, the game process of SLFWL(2) may correspond to figures (a, b, c, ...), and Spoiler cannot win. In contrast, the game process of FWL(2) corresponds to figures (a, d, e) and Spoiler can eventually win.
+
+![](images/64bfa58dd34e7abc8ea53d914c060a18f2fc7573c818391c38820d018fd45601.jpg)
+(c)
+
+![](images/8e4764009f61c0fe725ea714857d8dcc4f739fd52f67c89647fcd9a482a6447c.jpg)
+(d)
+
+![](images/a929db8e809a1e0c3e3acfd6d1959b4580430ba77a9f30d2e5cea99ec3b499a5.jpg)
+(e)
+
+We next turn to algorithm GSWL. Initially, the game is similar (Figure 9(a)). Now Spoiler has the additional choice to place pebble $w$ on vertex 1 according to the game rule of $\mathsf{agg}_{\mathsf{v}}^{\mathsf{G}}$ . In this case, Duplicator responds by choosing the connected component $\{\{1,2\}\}$ , which contains only one edge (see Figure 9(e)). Note that Duplicator does not lose the game. Then Spoiler will swap pebbles $u$ and $w$ and leave $w$ outside the graph, yielding Figure 9(f). Spoiler cannot win the game either.
+
+We finally turn to algorithm SSWL. This time Spoiler can place pebble $w$ adjacent to pebble $u$ according to the game rule of $\mathrm{agg}_{\mathrm{v}}^{\mathrm{L}}$ , and Duplicator can only choose either the connected component $\{\{1,3\}, \{3,5\}\}$ or $\{\{1,7\}, \{7,5\}\}$ (Figure 9(g)). Note that she cannot choose the component $\{\{1,2\}\}$ according to the game rule. After swapping pebbles $u, w$ and leaving $w$ outside the graph (Figure 9(h)), the remaining game is quite easy for Spoiler and he can eventually win.
+
+Insight into Lemma I.23. Lemma I.23 shows the inherent advantages of "symmetrized" WL algorithms compared with WL algorithms that only aggregate local information of one vertex.
+
+Lemma I.24. There exist two non-isomorphic graphs such that
+
+- SLFWL(2) cannot distinguish them;
+- FWL(2) can distinguish them.
+
+Proof. The base graph is constructed in Figure 10.
+
+We first analyze the simplified pebbling game for algorithm SLFWL(2). Initially, Spoiler should place pebbles $u$ and $v$ on vertices of the graph. Due to symmetry, we can assume that he places pebble $u$ on vertex 1 and places pebble $v$ on vertex 2. Other nonequivalent choices are clearly not optimal. Duplicator then responds by choosing the largest connected component as shown in Figure 10(a). In the next round, Spoiler can place pebble $w$ on any vertex in $\mathcal{N}_F^1(1) \cup \mathcal{N}_F^1(2)$ , namely, any vertex except vertex 3. Due to symmetry, we can assume that he places $w$ on vertex 6. Duplicator can easily respond according to Figure 10(b). No matter how Spoiler swaps pebbles, as long as pebble $w$ is left outside the graph, multiple connected components then merge as shown in Figure 10(c), and Spoiler cannot win.
+
+We next turn to algorithm FWL(2). Starting from Figure 10(a), this time Spoiler can place pebble $w$ on vertex 3. Then Duplicator should choose an odd number of connected components from the four components: $\{\{1,6\},\{6,3\}\}$ , $\{\{1,7\},\{7,3\}\}$ , $\{\{2,8\},\{8,3\}\}$ , and $\{\{2,9\},\{9,3\}\}$ . Regardless of Duplicator's choice, after swapping either pebbles $u,w$ or pebbles $v,w$ , the chosen components are always surrounded by pebbles $u$ and $v$ (Figure 10(e)). Therefore, Spoiler can easily win the remaining game.
+
+Insight into Lemma I.24. Lemma I.24 shows that there is an inherent gap between 2-FWL and all $O(nm)$ -complexity algorithms considered in this paper. It can also be used to settle the open problem raised in Frasca et al. (2022).
+
+Lemma I.25. There exist two non-isomorphic graphs such that
+
+- SWL(SV) can distinguish them;
+- LFWL(2) cannot distinguish them.
+
+Proof. The base graph is constructed in Figure 11.
+
+![](images/3d521cfd8a6bbf1b0e6ab0889a0deca24731fd7553a824ce9459d809162d9479.jpg)
+Base graph
+
+![](images/cc2ed4e1a627aacba1c1dd7f9ba534b3ac7311eadf1b7ba01fea9c9f7c797dc4.jpg)
+(a)
+
+![](images/6cb27c730b34534f7a9ac9bcdfc172f0bf1c62f1515440c18b4f322ceec9a7a2.jpg)
+(b)
+
+![](images/baaaf6d5b1cb851a7176ba38991617162d6afe69dc01c6783d73b3399814f90b.jpg)
+(c)
+
+![](images/3696f5dd7b6515ac3b1217958872e5e713c7ed787a5a80c62f9362397d227ae7.jpg)
+(d)
+
+![](images/51bc956ca7096e5b7cfd64ccf99ad7a9dff5862935a6c1ac78f884c1d4d27992.jpg)
+(e)
+
+![](images/a4256aa579c8ee60303b55a10664de699278afd3168d468c57a4731256db6280.jpg)
+(f)
+
+![](images/6fd7018a485d88459d71f7421169b276def10884ded8ec40f4f3a83ff2954ca4.jpg)
+(g)
+
+![](images/508bacd560dfd8861c3f8a97ae185b67f3620e928305b01a4e62c4658f891cb0.jpg)
+(h)
+
+![](images/b3f195e9b1db502afbcf1a895a395b4829a7e2d02ca62edd130d32885d270d73.jpg)
+(i)
+
+![](images/115f482729e831c947c0d081121d40410b54ec739ce28a5f7e81faca4aba6938.jpg)
+(j)
+
+![](images/00c123fab08658c45445040330b21d66a49f407ac374060905196c03db8f5f9f.jpg)
+(k)
+
+![](images/9a01e852ad2e0c25e2285a76c8e3f5922636372920f0cfdf4464f9e84dad35ab.jpg)
+(l)
+
+![](images/36172ab7779ec97f4c51f9c7df631e834740f65fbb591577187f2634c608ba6a.jpg)
+(m)
+
+![](images/82374c903489360b7027a0acbe003ab3cd59201546bc38fb9ad79d2a3b1df11f.jpg)
+(n)
+Figure 11. Illustration of the proof of Lemma I.25. When Duplicator follows her optimal strategy, the game process of SWL(SV) corresponds to figures (a, b, c, d), and Spoiler can eventually win the game. In contrast, the game process of LFWL(2) may correspond to figures (e, f, g, h, i, ...) or figures (j, k, l, g, h, i, ...) or figures (j, k, l, m, n, ...), depending on how Spoiler swaps pebbles. In all cases, Spoiler cannot win.
+
+We first analyze the simplified pebbling game for algorithm SWL(SV). Initially, Spoiler places pebble $v$ on vertex 8, which splits the graph into two equal parts.
Due to symmetry, suppose Duplicator selects the left component (Figure 11(a)). Spoiler then places pebble $u$ on vertex 1 to further split the connected component. Due to symmetry, suppose Duplicator selects the top-left component (Figure 11(b)). In the next round, Spoiler places pebble $w$ adjacent to $v$ on vertex 6. Duplicator should choose the connected component of either $\{\{1,3\},\{3,6\}\}$ or $\{\{1,2\},\{2,6\}\}$ , which are equivalent by symmetry (see Figure 11(c)). Spoiler then swaps pebbles $v,w$ and leaves $w$ outside the graph (Figure 11(d)). The remaining game is straightforward to analyze, and Spoiler can easily win.
+
+We next turn to the algorithm LFWL(2). Initially, Spoiler should simultaneously place pebbles $u$ and $v$ on two vertices of the graph. Without loss of generality, assume that he places one pebble on vertex 8 and places the other pebble on vertex 1 (other cases can be analyzed similarly). Depending on which pebble is placed on vertex 8, there are two cases:
+
+- Spoiler places pebble $u$ on vertex 1 and places pebble $v$ on vertex 8. In this case, Duplicator responds by choosing the connected component on the right (Figure 11(e)). In the next round, Spoiler had better place pebble $w$ on vertex 9 (or vertex 10) adjacent to pebble $v$ , and Duplicator can respond accordingly (Figure 11(f)). Spoiler should then swap pebbles $u$ and $w$ and leave $w$ outside the graph (Figure 11(g)). In the next round, Spoiler can similarly place pebble $w$ on vertex 10 adjacent to pebble $v$ to further split the connected component. This corresponds to Figure 11(h) after swapping pebbles $v$ and $w$ and leaving $w$ outside the graph. In subsequent rounds, Spoiler can continue to place pebble $w$ adjacent to pebble $v$ (Figure 11(i)). However, whether he swaps pebbles $u, w$ or pebbles $v, w$ , as long as pebble $w$ is left outside the graph, multiple connected components then merge into a whole. Clearly, Spoiler cannot win the game.
+
+- Spoiler places pebble $u$ on vertex 8 and places pebble $v$ on vertex 1. In this case, Duplicator similarly responds by choosing the connected component on the right (Figure 11(j)). In subsequent rounds, Spoiler should gradually move pebble $v$ until it reaches the position of pebble $u$ (Figure 11(k)). Next, Spoiler will continue to place pebble $w$ adjacent to pebble $v$ on vertex 9, and Duplicator can respond accordingly (Figure 11(l)). Spoiler should then either swap pebbles $u, w$ or swap pebbles $v, w$ and leave $w$ outside the graph. The former case corresponds to Figure 11(g) and has been analyzed. Now consider the latter case, which corresponds to Figure 11(m). In subsequent rounds, Spoiler can perform arbitrary operations, but he can never change the position of pebble $u$ . Otherwise, the lack of a pebble on vertex 8 will cause the merging of multiple connected components. With the position of pebble $u$ unchanged, the best state Spoiler can achieve is illustrated in Figure 11(i). It is not hard to see that Spoiler cannot win the game either.
+
+In both cases, Spoiler cannot win.
+
+Insight into Lemma I.25. The reason why $\mathsf{SWL}(\mathsf{SV})$ is stronger in this case lies in the SV pooling strategy. Lemma I.25 shows that LFWL(2) does not have the ability to implement the SV pooling strategy.
+
+We are now ready to prove Theorem 7.1.
+
+Proof of Theorem 7.1. Theorem 7.1 is a direct consequence of Corollary 4.7, Theorem 5.2, and Lemmas I.18 to I.25. $\square$
+
+# J. Proof of Theorems in Appendix A
+
+# J.1. Proof of Theorem A.2
+
+We first introduce some notation. Consider a path $P = (x_0, \dots, x_d)$ (not necessarily simple) in graph $G$ of length $d \geq 1$ . We say $P$ is a hitting path if $x_i \neq x_d$ for all $i \in \{0, 1, \dots, d-1\}$ . Denote $\mathcal{Q}_G^d(u, v)$ to be the set of all hitting paths from node $u$ to node $v$ of length $d$ .
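To make the notion of hitting paths concrete, here is a small sketch of ours (not part of the paper; the toy graph and function names are illustrative) that enumerates hitting paths on a 3-vertex path graph and checks that the degree-weighted sum used below to define the hitting time distance converges to the expected hitting time obtained from a first-step analysis.

```python
# Illustrative example (not from the paper): hitting paths on the
# 3-vertex path graph 0 - 1 - 2, given as an adjacency dict.
adj = {0: [1], 1: [0, 2], 2: [1]}
deg = {x: len(ns) for x, ns in adj.items()}

def hitting_paths(adj, u, v, d):
    """All walks (x_0, ..., x_d) with x_0 = u, x_d = v, and x_i != v for i < d."""
    paths = [(u,)] if u != v else []
    for step in range(d):
        last = (step == d - 1)
        # Intermediate steps must avoid v; the final step must land on v.
        paths = [p + (w,) for p in paths for w in adj[p[-1]] if (w == v) == last]
    return paths

def truncated_hitting_sum(adj, u, v, max_len):
    """Partial sum of  sum_d d * sum_{Q in Q^d(u,v)} 1 / prod_{i<d} deg(x_i)."""
    total = 0.0
    for d in range(1, max_len + 1):
        for q in hitting_paths(adj, u, v, d):
            weight = 1.0
            for i in range(d):  # product over deg(x_i) for i = 0, ..., d-1
                weight /= deg[q[i]]
            total += d * weight
    return total

# On this graph there is exactly one hitting path of each even length.
assert hitting_paths(adj, 0, 2, 2) == [(0, 1, 2)]
assert hitting_paths(adj, 0, 2, 3) == []

# First-step analysis gives h(0) = 1 + h(1) and h(1) = 1 + h(0)/2,
# so the expected hitting time from vertex 0 to vertex 2 is h(0) = 4.
assert abs(truncated_hitting_sum(adj, 0, 2, 40) - 4.0) < 1e-3
```

The brute-force enumeration grows exponentially in the truncation length and is for illustration only; on any small example the same first-step equations give an exact cross-check.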
Denote $\mathrm{dis}_G^{\mathsf{H}}(u, v)$ as the hitting time distance between vertices $u$ and $v$ in graph $G$ , i.e., the expected hitting time of a random walk from vertex $u$ to $v$ . Then,
+
+$$
+\mathrm{dis}_G^{\mathsf{H}}(u, v) = \sum_{d = 0}^{\infty} d \cdot \sum_{(x_0, \dots, x_d) \in \mathcal{Q}_G^d(u, v)} 1 / \left(\prod_{i = 0}^{d - 1} \deg_G(x_i)\right).
+$$
+
+Given a path $P = (x_0, \dots, x_d)$ , define $\omega(P) \coloneqq (\deg_G(x_1), \dots, \deg_G(x_{d-1}))$ , which is a tuple of length $d - 1$ . Our proof is based on the following lemma:
+
+Lemma J.1. Let $G = (\mathcal{V}_G, \mathcal{E}_G)$ and $H = (\mathcal{V}_H, \mathcal{E}_H)$ be two graphs, and let $t \in \mathbb{N}_{+}$ be a positive integer. Consider any SWL algorithm $\mathsf{A}(\mathcal{A}, \mathsf{Pool})$ with $\mathsf{agg}_{\mathsf{u}}^{\mathsf{L}} \in \mathcal{A}$ and denote $\chi^{(t)}$ to be the color mapping after iteration $t$ . Given nodes $u, v \in \mathcal{V}_G$ and $x, y \in \mathcal{V}_H$ , if $\chi_G^{(t)}(u,v) = \chi_H^{(t)}(x,y)$ , then $\{\{\omega(Q): Q \in \mathcal{Q}_G^t(v,u)\}\} = \{\{\omega(Q): Q \in \mathcal{Q}_H^t(y,x)\}\}$ .
+
+Proof. The proof is by induction on $t$ . For the base case of $t = 1$ , it is easy to see that $\{\{\omega(Q): Q \in \mathcal{Q}_G^1(v,u)\}\}$ depends only on whether $\{u,v\} \in \mathcal{E}_G$ or not. Clearly, if $\chi_G^{(1)}(u,v) = \chi_H^{(1)}(x,y)$ , then $\{u,v\} \in \mathcal{E}_G \leftrightarrow \{x,y\} \in \mathcal{E}_H$ holds (Lemma E.4), implying $\{\{\omega(Q): Q \in \mathcal{Q}_G^1(v,u)\}\} = \{\{\omega(Q): Q \in \mathcal{Q}_H^1(y,x)\}\}$ .
+
+Now assume that the lemma holds for all $t \leq T$ , and consider the case of $t = T + 1$ .
When $\chi_G^{(T+1)}(u,v) = \chi_H^{(T+1)}(x,y)$ , by definition of $\mathsf{agg}_{\mathsf{u}}^{\mathsf{L}}$ we have
+
+$$
+\{\{\chi_G^{(T)}(u, w): w \in \mathcal{N}_G(v)\}\} = \{\{\chi_H^{(T)}(x, z): z \in \mathcal{N}_H(y)\}\}.
+$$
+
+Since $T \geq 1$ , we have $\chi_G^{(T)}(u, w) = \chi_H^{(T)}(x, z) \Rightarrow \deg_G(w) = \deg_H(z)$ for any $w \in \mathcal{V}_G$ and $z \in \mathcal{V}_H$ . Therefore,
+
+$$
+\{\{(\chi_G^{(T)}(u, w), \deg_G(w)): w \in \mathcal{N}_G(v)\}\} = \{\{(\chi_H^{(T)}(x, z), \deg_H(z)): z \in \mathcal{N}_H(y)\}\}.
+$$
+
+By definition of node marking policy, we further obtain
+
+$$
+\{\{(\chi_G^{(T)}(u, w), \deg_G(w)): w \in \mathcal{N}_G(v) \backslash \{u\}\}\} = \{\{(\chi_H^{(T)}(x, z), \deg_H(z)): z \in \mathcal{N}_H(y) \backslash \{x\}\}\}.
+$$
+
+By induction,
+
+$$
+\{\{\left(\deg_G(w), \{\{\omega(Q): Q \in \mathcal{Q}_G^T(w, u)\}\}\right): w \in \mathcal{N}_G(v) \backslash \{u\}\}\} = \{\{\left(\deg_H(z), \{\{\omega(Q): Q \in \mathcal{Q}_H^T(z, x)\}\}\right): z \in \mathcal{N}_H(y) \backslash \{x\}\}\}.
+$$
+
+Therefore, $\{\{\omega(Q): Q \in \mathcal{Q}_G^{T+1}(v, u)\}\} = \{\{\omega(Q): Q \in \mathcal{Q}_H^{T+1}(y, x)\}\}$ , concluding the induction step.
+
+Corollary J.2. Consider any SWL algorithm $\mathsf{A}(\mathcal{A},\mathsf{Pool})$ with $\mathsf{agg}_{\mathsf{u}}^{\mathsf{L}}\in \mathcal{A}$ and let $\chi$ be the stable color mapping. For any vertices $u,v\in \mathcal{V}_G$ and $x,y\in \mathcal{V}_H$ , if $\chi_G(u,v) = \chi_H(x,y)$ , then $\mathrm{dis}_G^{\mathsf{H}}(v,u) = \mathrm{dis}_H^{\mathsf{H}}(y,x)$ .
+
+Proof.
By definition of hitting time distance,
+
+$$
+\operatorname{dis}_G^{\mathsf{H}}(v, u) = \sum_{d = 0}^{\infty} d \cdot \sum_{Q \in \mathcal{Q}_G^{d}(v, u)} 1 / q(Q),
+$$
+
+where $q(Q) = \deg_G(x_0)\prod_{i = 1}^{d - 1}\deg_G(x_i)$ for path $Q = (x_0,x_1,\dots ,x_d)$ . Therefore, $q(Q)$ is fully determined by $\omega(Q)$ and $\deg_G(x_0)$ . If $\mathrm{dis}_G^{\mathsf{H}}(v,u)\neq \mathrm{dis}_H^{\mathsf{H}}(y,x)$ , then either $\deg_G(v)\neq \deg_H(y)$ or there exists a length $t$ such that $\{\{\omega(Q):Q\in \mathcal{Q}_G^t (v,u)\}\} \neq \{\{\omega(Q):Q\in \mathcal{Q}_H^t (y,x)\}\}$ . Therefore, by Lemma J.1 we have $\chi_G(u,v)\neq \chi_H(x,y)$ , as desired.
+
+We are now ready to prove the main theorem:
+
+Theorem J.3. Define a variant of GD-WL that incorporates the shortest path distance and the hitting time distance as follows:
+
+$$
+\chi_G^{(t+1)}(u) = \left\{\left\{\left(\left(\operatorname{dis}_G(v, u), \operatorname{dis}_G^{\mathsf{H}}(v, u)\right), \chi_G^{(t)}(v)\right): v \in \mathcal{V}_G\right\}\right\}.
+$$
+
+Then, GD-WL $\preceq$ PSWL(VS).
+
+Proof. Denote $\chi^{\mathsf{P}}$ as the stable color mapping of PSWL(VS). Consider a pair of graphs $G$ and $H$ indistinguishable by PSWL(VS). Then, we clearly have
+
+$$
+\{\{\chi_G^{\mathsf{P}}(u, v): u, v \in \mathcal{V}_G\}\} = \{\{\chi_H^{\mathsf{P}}(x, y): x, y \in \mathcal{V}_H\}\}.
+$$
+
+By definition of the node marking policy,
+
+$$
+\{\{\chi_G^{\mathsf{P}}(u, u): u \in \mathcal{V}_G\}\} = \{\{\chi_H^{\mathsf{P}}(x, x): x \in \mathcal{V}_H\}\}.
+$$
+
+![](images/c3fc5255500e4e7818f71590ab0124999643a19f36fcc6528de927c37b582af2.jpg)
+Base graph
+Figure 12. Illustration of the proof of Lemma J.6. When Duplicator follows her optimal strategy, the game process of SWL(VS) corresponds to figures (a, b, c) and Spoiler can eventually win.
![](images/246939da46683fd3c1ba43c7474d5895e8a681c9c0b3e38c4725f90356998074.jpg)
(a)

![](images/0159e96b68aa2d193e1e0163fe182a7b99de730e0cecf05916c927d5462f37d9.jpg)
(b)

![](images/278085cb1b3d1d3435977177d5b1eed0d7ae8aef6ba8b7987f54e6beae7c6b7f.jpg)
(c)

Now consider any vertices $u \in \mathcal{V}_G$ and $x \in \mathcal{V}_H$ satisfying $\chi_G^{\mathsf{P}}(u,u) = \chi_H^{\mathsf{P}}(x,x)$. Since $\mathsf{agg}_{\mathsf{u}}^{\mathsf{L}}$ is present in PSWL(VS), we can invoke Lemma E.6, which yields

$$
\{\{\chi_G^{\mathsf{P}}(u,w): w \in \mathcal{V}_G\}\} = \{\{\chi_H^{\mathsf{P}}(x,z): z \in \mathcal{V}_H\}\}.
$$

Further using Corollaries E.5 and J.2 yields

$$
\{\{(\chi_G^{\mathsf{P}}(u,w), \mathrm{dis}_G(w,u), \mathrm{dis}_G^{\mathsf{H}}(w,u)): w \in \mathcal{V}_G\}\} = \{\{(\chi_H^{\mathsf{P}}(x,z), \mathrm{dis}_H(z,x), \mathrm{dis}_H^{\mathsf{H}}(z,x)): z \in \mathcal{V}_H\}\}.
$$

Next, by definition of the aggregation $\mathsf{agg}_{\mathsf{vv}}^{\mathsf{P}}$, we have

$$
\{\{(\chi_G^{\mathsf{P}}(w,w), \operatorname{dis}_G(w,u), \operatorname{dis}_G^{\mathsf{H}}(w,u)): w \in \mathcal{V}_G\}\} = \{\{(\chi_H^{\mathsf{P}}(z,z), \operatorname{dis}_H(z,x), \operatorname{dis}_H^{\mathsf{H}}(z,x)): z \in \mathcal{V}_H\}\}.
$$

The above equation shows that $\chi_G^{\mathsf{P}}$ induces a finer vertex partition (i.e., by treating $\chi_G^{\mathsf{P}}(u) \coloneqq \chi_G^{\mathsf{P}}(u,u)$) compared with the stable color mapping of GD-WL. Concretely, based on Remark E.2(c), we have $\chi_G^{\mathsf{P}}(u,u) = \chi_H^{\mathsf{P}}(x,x) \Rightarrow \chi_G(u) = \chi_H(x)$. This finally yields

$$
\{\{\chi_G(u): u \in \mathcal{V}_G\}\} = \{\{\chi_H(x): x \in \mathcal{V}_H\}\},
$$

concluding the proof.

Remark J.4.
We note that the form of GD-WL in Theorem J.3 differs slightly from Zhang et al. (2023), in that they use the resistance distance instead of the hitting time distance. Nevertheless, similar to the resistance distance, the hitting time distance also satisfies the following key property: for any vertices $u, v, w \in \mathcal{V}_G$ in graph $G$, $\mathrm{dis}_G^{\mathsf{H}}(u, v) = \mathrm{dis}_G^{\mathsf{H}}(u, w) + \mathrm{dis}_G^{\mathsf{H}}(w, v)$ if and only if $w$ is a cut vertex of $G$ (see Zhang et al. (2023, Appendix C.5.1)). This property is crucial for proving expressivity for vertex-biconnectivity. Following an almost identical proof, we can show that the variant of GD-WL defined in Theorem J.3 is also fully expressive for vertex-biconnectivity.

Finally, for the original GD-WL defined in Zhang et al. (2023), which incorporates SPD and RD, we can currently only prove the following result, a straightforward extension of Theorem J.3:

Theorem J.5. Consider the WL algorithm GD-WL that incorporates the shortest path distance and the resistance distance. Then, GD-WL $\preceq$ SSWL.

Proof. Note that $\mathrm{dis}_G^{\mathsf{R}}(u,v) = (\mathrm{dis}_G^{\mathsf{H}}(u,v) + \mathrm{dis}_G^{\mathsf{H}}(v,u)) / 2|\mathcal{E}_G|$. The proof follows by noting that both $\mathrm{agg}_{\mathfrak{u}}^{\mathsf{L}}$ and $\mathrm{agg}_{\mathfrak{v}}^{\mathsf{L}}$ are present in SSWL.

However, it remains unclear whether RD-WL $\preceq$ GSWL or RD-WL $\preceq$ PSWL(VS) holds. We leave these as open problems.

# J.2. Counterexamples

We first show that the vanilla SWL has power beyond that of GD-WL.

Lemma J.6. There exist two non-isomorphic graphs such that

- SWL(VS) can distinguish them;
- RD-WL, HTD-WL, and SPD-WL cannot distinguish them.

![](images/1fad6ef9ea0b7d0a5a1f036b24f6170bfc311317d23be1bb43364f02b095630c.jpg)
Base graph

![](images/d55d46ceeb23915838c11fd61d6f3e1d1240d6447e7afc012a73a2f0bc2b6218.jpg)
(a)
Figure 13.
Illustration of the proof of Lemma J.7. For SWL(SV), when Duplicator follows her optimal strategy, Spoiler can never win the game.

![](images/b2c7c20d54791702a5f67e74fc3885562c755419094a94f9c16d15123f5338b8.jpg)
(b)

Proof. The proof is based on Appendix I.3 using generalized Fürer graphs. The base graph is constructed in Figure 12.

We first consider SWL(VS) and analyze the simplified pebbling game developed in Appendix I.2. At the beginning, Spoiler simply places pebble $u$ on vertex 1, and Duplicator does nothing. Spoiler then places pebble $v$ on vertex 4, splitting the connected component into three parts. It is easy to see that Duplicator should select the largest connected component on the left (Figure 12(a)). In the next round, Spoiler places pebble $w$ adjacent to pebble $v$ on vertex 3. Duplicator has to respond according to Figure 12(b). Spoiler then swaps pebbles $v, w$ and leaves $w$ outside the graph (Figure 12(c)). It is easy to see that Spoiler can win the remaining game.

We next consider GD-WL, for which we do not have a corresponding game. Nevertheless, the good news is that the corresponding (twisted) Fürer graph has only 20 vertices. We can thus directly verify that the stable colors of the Fürer graph match those of the twisted Fürer graph. A deeper understanding of why GD-WL cannot distinguish the two graphs is left for future work.

Conversely, we next show that GD-WL also has power beyond that of the vanilla SWL.

Lemma J.7. There exist two non-isomorphic graphs such that

- SWL(SV) cannot distinguish them;
- SPD-WL can distinguish them;
- RD-WL can distinguish them;
- HTD-WL can distinguish them.

Proof. The proof is based on Appendix I.3 using generalized Fürer graphs. The base graph is constructed in Figure 13.

We first consider SWL(SV) and analyze the simplified pebbling game developed in Appendix I.2. At the beginning, Spoiler should place pebble $v$ on some vertex.
Regardless of Spoiler's choice, Duplicator simply selects the largest connected component after pebble $v$ is placed. Next, Spoiler should place pebble $u$ on some vertex. Now Duplicator's strategy is to select a connected component that contains a triangle with no pebbles. It is easy to see that Duplicator can always achieve her goal (see Figure 13(a) and Figure 13(b) for two representative cases). The remaining game is easy to analyze: since pebble $u$ cannot be moved throughout the game, there is a triangle that holds at most two pebbles and cannot be split into three single-edge components. Clearly, Duplicator can always respond without losing the game.

We next turn to SPD-WL, for which we do not have a corresponding game. Nevertheless, the good news is that the corresponding (twisted) Fürer graph has only 26 vertices. We can thus directly verify that the stable colors of the Fürer graph do not match those of the twisted Fürer graph. The cases of RD-WL and HTD-WL can be verified similarly. A deeper understanding of why these algorithms can distinguish the two graphs is left for future work.

# K. Experimental Details

We conduct experiments on three standard benchmark datasets: ZINC (Dwivedi et al., 2020), Counting Substructure (Zhao et al., 2022a; Frasca et al., 2022), and OGBG-molhiv (Hu et al., 2020). ZINC is a standard benchmark for molecular property prediction, where the task is to predict the constrained solubility of a molecule, which is an important chemical

property for drug discovery. We train and evaluate our proposed GNN-SSWL and GNN-SSWL+ on both ZINC (consisting of 250k molecular graphs) and ZINC-subset (a 12k subset selected as in Dwivedi et al. (2020)). Counting Substructure is a widely used synthetic task in the expressive GNN community, where the task is to predict the number of occurrences of a given substructure (such as a cycle or star) in an input graph. We use the same dataset as in Zhao et al. (2022a); Frasca et al.
(2022) and further extend it to include the setting of counting 5/6-cycles, motivated by Huang et al. (2022). Finally, we additionally consider the OGBG-molhiv dataset in Appendix K.4.

# K.1. Model details

We implement our model using PyTorch (Paszke et al., 2019) and PyTorch Geometric (Fey & Lenssen, 2019) (available under the BSD and MIT licenses, respectively). All experiments are run on a single NVIDIA Tesla V100 GPU. Our code will be released at https://github.com/subgraph23/SWL.

Motivated by Propositions 4.2 and E.3, for all SWL models the graph generation policy is chosen as the distance encoding on the original graph. Such a policy achieves maximal power among graph generation policies (it is as expressive as node marking) while explicitly introducing inductive biases, which may be beneficial for real-world tasks. Concretely, we initialize the feature of node $v$ in subgraph $G^{u}$ by summing the atom embedding $h^{\mathrm{atom}}(v)$ and the distance encoding $h^{\mathrm{dis}}(\mathrm{dis}_G(u,v))$, where the atom embedding is a learnable vector determined by the atom type of $v$, and the distance encoding is a learnable vector determined by the shortest path distance between $u$ and $v$. Mathematically, $h_{G}^{(0)}(u,v) = h^{\mathrm{atom}}(v) + h^{\mathrm{dis}}(\mathrm{dis}_{G}(u,v))$. Distances exceeding max_dis (including infinity) are all encoded as a shared embedding. For tasks without atom features (e.g., the Counting Substructure dataset), we set $h^{\mathrm{atom}}(v)$ to zero.
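The initialization above can be sketched in a few lines of numpy. This is a toy illustration with random embedding tables and our own helper names (the actual model uses learned PyTorch embeddings); it shows the shortest-path bucketing with the shared out-of-range bucket for distances above max_dis or infinity:

```python
import numpy as np
from collections import deque

def bfs_distances(adj, u):
    """Shortest-path distances from node u; unreachable nodes get inf."""
    n = adj.shape[0]
    dist = np.full(n, np.inf)
    dist[u] = 0
    q = deque([u])
    while q:
        w = q.popleft()
        for z in np.nonzero(adj[w])[0]:
            if dist[z] == np.inf:
                dist[z] = dist[w] + 1
                q.append(z)
    return dist

def init_features(adj, atom_types, atom_emb, dis_emb, max_dis=5):
    """h0[u, v] = atom_emb[atom(v)] + dis_emb[dis(u, v)], with distances
    above max_dis (including infinity) mapped to a shared bucket max_dis + 1."""
    n = adj.shape[0]
    h0 = np.zeros((n, n, atom_emb.shape[1]))
    for u in range(n):
        d = bfs_distances(adj, u)
        bucket = np.where(np.isfinite(d) & (d <= max_dis), d, max_dis + 1).astype(int)
        h0[u] = atom_emb[atom_types] + dis_emb[bucket]
    return h0

# toy graph: a triangle (nodes 0-2) plus an isolated node 3
A = np.zeros((4, 4))
for a, b in [(0, 1), (1, 2), (0, 2)]:
    A[a, b] = A[b, a] = 1
rng = np.random.default_rng(0)
atom_emb = rng.normal(size=(2, 8))   # 2 atom types, feature dim 8
dis_emb = rng.normal(size=(7, 8))    # buckets 0..5 plus shared bucket 6
atom_types = np.array([0, 1, 0, 1])
h0 = init_features(A, atom_types, atom_emb, dis_emb, max_dis=5)
```

Node 3 is unreachable from node 0, so `h0[0, 3]` uses the shared bucket, while `h0[0, 0]` uses distance bucket 0.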
As an instance of Definition 2.1, our subgraph GNN layer can be written in the following form:

$$
h_G^{(l + 1)}(u,v) = \operatorname{ReLU}\left(\sum_{i = 1}^{r} \mu^{(l + 1,i)}\left(h_G^{(l)}(u,v), \mathsf{op}_i(u,v,G,h_G^{(l)})\right)\right), \tag{25}
$$

where each $\mathsf{op}_i$ can take one of the following forms, depending on the atomic aggregations in the SWL algorithm:

- For $\mathsf{agg}_{\mathsf{uu}}^{\mathsf{P}}$: $\mathsf{op}_i(u,v,G,h) = h(u,u)$;
- For $\mathsf{agg}_{\mathsf{vv}}^{\mathsf{P}}$: $\mathsf{op}_i(u,v,G,h) = h(v,v)$;
- For $\mathsf{agg}_{\mathsf{u}}^{\mathsf{G}}$: $\mathsf{op}_i(u,v,G,h) = \sum_{w\in \mathcal{V}_G}h(u,w)$;
- For $\mathsf{agg}_{\mathsf{v}}^{\mathsf{G}}$: $\mathsf{op}_i(u,v,G,h) = \sum_{w\in \mathcal{V}_G}h(w,v)$;
- For $\mathsf{agg}_{\mathsf{u}}^{\mathsf{L}}$: $\mathsf{op}_i(u,v,G,h) = \sum_{w\in \mathcal{N}_G(v)}\mathsf{ReLU}(h(u,w) + g(w,v))$;
- For $\mathsf{agg}_{\mathsf{v}}^{\mathsf{L}}$: $\mathsf{op}_i(u,v,G,h) = \sum_{w\in \mathcal{N}_G(u)}\mathsf{ReLU}(h(w,v) + g(u,w))$.

Note that we have included the single-point aggregation $\mathsf{agg}_{\mathsf{uv}}^{\mathsf{P}}$ directly in the update formula (25). For the last two local aggregations, we further encode the edge embedding $g(w,v)$ for edge $\{w,v\} \in \mathcal{E}_G$ (or $g(u,w)$ for edge $\{w,u\} \in \mathcal{E}_G$) when there is additional information for each edge (e.g., the bond information in the ZINC dataset). In the above equation, each $\mu^{(l+1,i)}$ is implemented by a GIN base encoder (Xu et al., 2019):

$$
\mu^{(l + 1,i)}(h_1, h_2) = \mathsf{MLP}^{(l + 1,i)}\left((1 + \epsilon^{(l + 1,i)})h_1 + h_2\right),
$$

where $\epsilon^{(l + 1,i)}$ is a learnable scalar and $\mathsf{MLP}^{(l + 1,i)}$ is a one-hidden-layer MLP with hidden size equal to the input dimension.
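To make the six operand types concrete, here is a dense-tensor numpy sketch of the $\mathsf{op}_i$ aggregations (learned parameters, edge embeddings $g(\cdot,\cdot)$, and the GIN encoders are omitted; the dictionary keys are our own naming):

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0)

def sswl_ops(h, adj):
    """Dense-tensor versions of the aggregation operands in Eq. (25).
    h has shape (n, n, d): h[u, v] is the feature of node v in subgraph G^u."""
    n = h.shape[0]
    diag = h[np.arange(n), np.arange(n)]                             # diag[u] = h(u, u)
    return {
        "agg_uu_P": np.broadcast_to(diag[:, None, :], h.shape),      # h(u, u)
        "agg_vv_P": np.broadcast_to(diag[None, :, :], h.shape),      # h(v, v)
        "agg_u_G":  np.broadcast_to(h.sum(1)[:, None, :], h.shape),  # sum_w h(u, w)
        "agg_v_G":  np.broadcast_to(h.sum(0)[None, :, :], h.shape),  # sum_w h(w, v)
        "agg_u_L":  np.einsum("vw,uwd->uvd", adj, relu(h)),          # sum_{w in N(v)} relu(h(u, w))
        "agg_v_L":  np.einsum("uw,wvd->uvd", adj, relu(h)),          # sum_{w in N(u)} relu(h(w, v))
    }

rng = np.random.default_rng(1)
h = rng.normal(size=(3, 3, 2))
A = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=float)  # path graph 0-1-2
ops = sswl_ops(h, A)
```

In the real model each operand would feed its own GIN encoder $\mu^{(l+1,i)}$ before the outer sum and ReLU.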
Batch Normalization (Ioffe & Szegedy, 2015) is adopted in the hidden layer of each MLP as well as in (25) before taking ReLU.

The final pooling layer is implemented as an MLP over the summation along the $v$-dimension for the VS-pooling scheme (or along the $u$-dimension for SV-pooling), followed by a global mean pooling, namely

$$
f(G) = \frac{1}{|\mathcal{V}_G|} \sum_{u \in \mathcal{V}_G} \mathsf{MLP}\left(\sum_{v \in \mathcal{V}_G} h_G^{(L)}(u,v)\right).
$$

Batch Normalization is adopted similarly.

# K.2. Training details

ZINC. Throughout all experiments, we set the number of layers $L = 6$, similar to Frasca et al. (2022). To keep the parameter budget within 500k, the feature dimension of each layer is set to 96. The initial atom embedding, distance embedding, and edge embedding are also set to 96. The hyper-parameter max_dis is set to 5. We adopt the Adam optimizer (Kingma & Ba, 2014) with an initial learning rate of 0.001. The learning rate is decayed by a factor of 0.5 when the MAE on the validation set plateaus for 20 epochs (similar to Frasca et al. (2022)). The batch size is set to 128. On the ZINC-12k subset, the model is trained for 400 epochs following Frasca et al. (2022), and a single run takes roughly 1 to 2 hours. On the ZINC-250k full set, we find that the model still does not converge after 400 epochs, so we adjust the configuration to 500 epochs, which takes about 40 hours. For each setting, we run the model 10 times with different seeds from 1 to 10 and report both the mean value and the standard deviation of the MAE.

We also compare our model performance with various subgraph GNN baselines. The performance numbers of these baselines in Table 2 are generally taken from the original works. For some baselines, such as GNN-AK and GNN-AK-ctx, the results are obtained from Frasca et al. (2022). The NGNN result is obtained from Huang et al. (2022).
Since other baseline models did not report results on ZINC-full, we obtain their performance by running the code provided in the authors' official GitHub repos. We run each model 10 times with different seeds and report the mean performance and standard deviation. For ESAN and SUN, we tried both the $k$-ego network policy and the $k$-ego network policy with marking, and report the better performance of the two. We find that the results are almost the same for SUN, while the $k$-ego network policy with marking is slightly better for ESAN.

Counting Substructure. Throughout all experiments, we simply follow the same model configuration as for ZINC: a 6-layer GNN with a hidden size of 96. The hyper-parameter max_dis is also set to 5. We adopt the Adam optimizer (Kingma & Ba, 2014) with an initial learning rate of 0.002. The learning rate is decayed with cosine annealing. The batch size is set to 512. The models are trained for 600 epochs. Note that, unlike prior works, we use the same model/training hyper-parameters for all six substructures. For each setting, we run the model 5 times with different seeds from 1 to 5 and report the mean MAE. We found that the standard deviation is very small.

We also compare our model performance with various subgraph GNN baselines. The performance numbers of GNN-AK and SUN in Table 1 are taken from Zhao et al. (2022a) and Frasca et al. (2022), respectively. For the tasks of counting 5/6-cycles, we obtain their performance by running the code provided on the authors' official GitHub repos. To make the SUN result convincing, we grid search the network width over $\{64,96,110\}$ and the depth over $\{5,6\}$, and search the $k$-hop ego network policy with $k \in \{2,3\}$ as suggested by Frasca et al. (2022). We consider the ego network policy both with and without marking and find that the $k$-ego network policy with marking is better.

# K.3.
Ablation study on ZINC

In this subsection, we present a set of ablation results to investigate the effect of different aggregation operations in GNN-SSWL+. We fix all model details and hyper-parameters presented above, and remove one or both of the two additional operations $\mathsf{agg}_{\mathsf{v}}^{\mathsf{L}}, \mathsf{agg}_{\mathsf{vv}}^{\mathsf{P}}$ in GNN-SSWL+ or change the pooling paradigm. This results in several types of models with expressivity corresponding to SSWL, PSWL(VS), PSWL(SV), SWL(SV), and SWL(VS), respectively. The results are shown in Table 3. It can be seen that both of the additional aggregations $\mathsf{agg}_{\mathsf{v}}^{\mathsf{L}}, \mathsf{agg}_{\mathsf{vv}}^{\mathsf{P}}$ provide a significant improvement in performance. Moreover, SV pooling is significantly better than VS pooling for vanilla SWL.

We further design an ablation experiment to verify that our introduced local aggregation $\mathsf{agg}_{\mathsf{v}}^{\mathsf{L}}$ is actually crucial and cannot be replaced by other aggregations. Here, we consider the model SUN proposed in Frasca et al. (2022), which comprises a large number of basic aggregation operations but lacks $\mathsf{agg}_{\mathsf{v}}^{\mathsf{L}}$. SUN uses different parameters to compute the on-diagonal (i.e., $h_{G}^{(l)}(u,u)$) and off-diagonal (i.e., $h_{G}^{(l)}(u,v)$, $v \neq u$) features in order to further enhance model flexibility. We rerun their code with the graph generation policy changed to distance encoding (on the original graph) and use exactly the same training configuration as our model. Notably, the feature dimension is increased to 96 (instead of 64), resulting in a larger model with roughly 1163k parameters. We can see in Table 3 that, even with the distance encoding policy and a larger model size, there is still a remarkable gap between SUN and GNN-SSWL+ (less than 400k parameters).
This confirms that the introduced aggregation $\mathrm{agg}_{\mathrm{v}}^{\mathrm{L}}$ in SSWL not only theoretically improves the expressive power of the GNN model, but also leads to significantly better performance on real datasets. + +Table 3. Ablation study of GNN-SSWL+ on ZINC-subset. + +
| Method | Pooling | Test MAE ↓ |
| --- | --- | --- |
| GNN-SSWL+ | VS | 0.0703 ± 0.0046 |
| w/o $\mathsf{agg}_{\mathsf{vv}}^{\mathsf{P}}$ (GNN-SSWL) | VS | 0.0822 ± 0.0029 |
| w/o $\mathsf{agg}_{\mathsf{v}}^{\mathsf{L}}$ | VS | 0.0765 ± 0.0028 |
| w/o $\mathsf{agg}_{\mathsf{v}}^{\mathsf{L}}$ | SV | 0.0758 ± 0.0037 |
| w/o $\mathsf{agg}_{\mathsf{v}}^{\mathsf{L}}$ and $\mathsf{agg}_{\mathsf{vv}}^{\mathsf{P}}$ | VS | 0.1103 ± 0.0090 |
| w/o $\mathsf{agg}_{\mathsf{v}}^{\mathsf{L}}$ and $\mathsf{agg}_{\mathsf{vv}}^{\mathsf{P}}$ | SV | 0.0999 ± 0.0044 |
| SUN (Distance Encoding) | – | 0.0802 ± 0.0024 |
# K.4. Additional experiments on OGBG-molhiv

We further run experiments on the OGBG-molhiv dataset. Following Frasca et al. (2022), we use a 2-layer GNN-SSWL+ model with a network width of 64, and add residual connections between layers. The hyper-parameter max_dis is also set to 5. To prevent overfitting, we similarly use the ASAM optimizer (Kwon et al., 2021) with a batch size of 32, a learning rate of 0.01, and a dropout ratio of 0.3. Moreover, we change each MLP to a linear layer following Frasca et al. (2022). We train the model for 100 epochs. We run our model 8 times with different seeds ranging from 1 to 8 and report the average ROC-AUC as well as the standard deviation. The result is presented in Table 4.

Table 4. Performance comparison on OGBG-molhiv.
| Model | Reference | Test ROC-AUC (%) |
| --- | --- | --- |
| GCN | Kipf & Welling (2017) | 76.06 ± 0.97 |
| GIN | Xu et al. (2019) | 75.58 ± 1.40 |
| PNA | Corso et al. (2020) | 79.05 ± 1.32 |
| GSN | Bouritsas et al. (2022) | 80.39 ± 0.90 |
| CIN | Bodnar et al. (2021a) | 80.94 ± 0.57 |
| Recon. GNN | Cotta et al. (2021) | 76.32 ± 1.40 |
| DS-GNN (EGO+) | Bevilacqua et al. (2022) | 77.40 ± 2.19 |
| DSS-GNN (EGO+) | Bevilacqua et al. (2022) | 76.78 ± 1.66 |
| GNN-AK+ | Zhao et al. (2022a) | 79.61 ± 1.19 |
| SUN | Frasca et al. (2022) | 80.03 ± 0.55 |
| GNN-SSWL+ | This paper | 79.58 ± 0.35 |
\ No newline at end of file diff --git a/acompleteexpressivenesshierarchyforsubgraphgnnsviasubgraphweisfeilerlehmantests/images.zip b/acompleteexpressivenesshierarchyforsubgraphgnnsviasubgraphweisfeilerlehmantests/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..a6324719328365c2e142b13b9939d77254433718 --- /dev/null +++ b/acompleteexpressivenesshierarchyforsubgraphgnnsviasubgraphweisfeilerlehmantests/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:37d5efb4407ba01a988890825993db4be13dc82fcd1145de0a3195c008ab92a9 +size 2318395 diff --git a/acompleteexpressivenesshierarchyforsubgraphgnnsviasubgraphweisfeilerlehmantests/layout.json b/acompleteexpressivenesshierarchyforsubgraphgnnsviasubgraphweisfeilerlehmantests/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..61d7f2505f2e8478c657a0f3f141451aaf526909 --- /dev/null +++ b/acompleteexpressivenesshierarchyforsubgraphgnnsviasubgraphweisfeilerlehmantests/layout.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:2ab35708819d08c83d075534747db5c79805a0492e59082c0dcefe70979b51ff +size 4169704 diff --git a/aconditionalnormalizingflowforacceleratedmulticoilmrimaging/dad5fa8b-794f-4094-bd4b-2f7c40ff0fce_content_list.json b/aconditionalnormalizingflowforacceleratedmulticoilmrimaging/dad5fa8b-794f-4094-bd4b-2f7c40ff0fce_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..c63e56997c7e3f5f953dcb71b6f06df51acb3e32 --- /dev/null +++ b/aconditionalnormalizingflowforacceleratedmulticoilmrimaging/dad5fa8b-794f-4094-bd4b-2f7c40ff0fce_content_list.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:3b6ca090caaaff0a2b2a13b0029c3002f4f2893772ebf093c201095e5da2ca27 +size 96066 diff --git a/aconditionalnormalizingflowforacceleratedmulticoilmrimaging/dad5fa8b-794f-4094-bd4b-2f7c40ff0fce_model.json 
b/aconditionalnormalizingflowforacceleratedmulticoilmrimaging/dad5fa8b-794f-4094-bd4b-2f7c40ff0fce_model.json new file mode 100644 index 0000000000000000000000000000000000000000..3d7dc689dba75c02f065fc834e0b2a59ce02c0b2 --- /dev/null +++ b/aconditionalnormalizingflowforacceleratedmulticoilmrimaging/dad5fa8b-794f-4094-bd4b-2f7c40ff0fce_model.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:5eb4a30733c6e109cc636a38fbefe7ea5ae65ebf14150ccc86cbdd7e57ad8bee +size 116733 diff --git a/aconditionalnormalizingflowforacceleratedmulticoilmrimaging/dad5fa8b-794f-4094-bd4b-2f7c40ff0fce_origin.pdf b/aconditionalnormalizingflowforacceleratedmulticoilmrimaging/dad5fa8b-794f-4094-bd4b-2f7c40ff0fce_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..a29b8c1cdd44fc737453acfafa6be230597516b0 --- /dev/null +++ b/aconditionalnormalizingflowforacceleratedmulticoilmrimaging/dad5fa8b-794f-4094-bd4b-2f7c40ff0fce_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:94b68a14932085f594b26e73b0a8faced413ed0c1efa4a87039574224daad2cf +size 7316093 diff --git a/aconditionalnormalizingflowforacceleratedmulticoilmrimaging/full.md b/aconditionalnormalizingflowforacceleratedmulticoilmrimaging/full.md new file mode 100644 index 0000000000000000000000000000000000000000..197cd734e3088fe5d34582867e00291b98f60555 --- /dev/null +++ b/aconditionalnormalizingflowforacceleratedmulticoilmrimaging/full.md @@ -0,0 +1,415 @@ +# A Conditional Normalizing Flow for Accelerated Multi-Coil MR Imaging + +Jeffrey Wen1 Rizwan Ahmad2 Philip Schniter1 + +# Abstract + +Accelerated magnetic resonance (MR) imaging attempts to reduce acquisition time by collecting data below the Nyquist rate. As an ill-posed inverse problem, many plausible solutions exist, yet the majority of deep learning approaches generate only a single solution. 
We instead focus on sampling from the posterior distribution, which provides more comprehensive information for downstream inference tasks. To do this, we design a novel conditional normalizing flow (CNF) that infers the signal component in the measurement operator's nullspace, which is later combined with measured data to form complete images. Using fastMRI brain and knee data, we demonstrate fast inference and accuracy that surpasses recent posterior sampling techniques for MRI. Code is available at https://github.com/jwen307/mri_cnf

# 1. Introduction

Magnetic resonance imaging (MRI) is a routine diagnostic imaging tool that has the potential to provide high-quality soft-tissue images without exposure to ionizing radiation. However, MRI exams are generally time-consuming, which reduces throughput, compromises patient comfort, and increases the likelihood of artifacts from patient motion. Scan time can be reduced by sampling below the Nyquist rate, but this makes the image reconstruction process more challenging. Hence, recovering high-accuracy images from highly subsampled MRI scans has become an active area of research (Knoll et al., 2020).

Many approaches have been proposed to recover MR images from subsampled measurements. Parallel imaging, which is available on all commercial scanners, takes advantage of the availability of multiple receiver coils. After estimating coil-sensitivity maps or interpolation kernels, methods like SENSE (Pruessmann et al., 1999) and GRAPPA (Griswold et al., 2002) can use subsampled data from multiple coils to remove aliasing artifacts in the final reconstruction.

$^{1}$ Dept. of ECE, The Ohio State University, Columbus, OH 43210, USA. $^{2}$ Dept. of BME, The Ohio State University, Columbus, OH 43210, USA. Correspondence to: Jeffrey Wen .

Proceedings of the $40^{th}$ International Conference on Machine Learning, Honolulu, Hawaii, USA. PMLR 202, 2023. Copyright 2023 by the author(s).
However, parallel imaging alone typically allows only two- to three-fold acceleration of the acquisition process. For higher acceleration, methods based on compressed sensing (CS) have been proposed (Lustig et al., 2007). CS methods are framed as iteratively minimizing the sum of a data-fidelity term and a regularization term, where the regularization term incorporates prior knowledge about the images. The prior knowledge could be that the true images are sparse in some transform domain, as in traditional CS, or that the true images are preserved by some denoising function, as in "plug-and-play" recovery (Ahmad et al., 2020). Deep neural networks have also been proposed for MR image recovery, based on end-to-end approaches (Zbontar et al., 2018; Eo et al., 2018; Sriram et al., 2020) or algorithmic unrolling (Hammernik et al., 2018). Yet another approach, known as compressed sensing with a generative model (CSGM) (Bora et al., 2017), trains a deep image generator and then optimizes its input to give the image that, after application of the forward model, best matches the measurements.
For example, a simple approach is to generate the pixel-wise standard-deviation map, which quantifies which pixels are more trustworthy. A more involved approach is to construct a hypothesis test for the absence of a particular (multi-pixel) visual structure (Repetti et al., 2019). + +In this paper, we focus on the task of sampling from the posterior, which facilitates future work that uses those samples for uncertainty quantification, adaptive sampling (Sanchez et al., 2020), counterfactual diagnosis (Chang et al., 2019), or other applications. + +There exist several deep-learning based approaches to sample from the posterior, including those based on conditional generative adversarial networks (CGANs) (Isola et al., 2017; Adler & Oktem, 2018), conditional variational autoencoders (CVAEs) (Edupuganti et al., 2021; Tonolini et al., 2020), conditional normalizing flows (CNFs) (Ardizzone et al., 2019; Winkler et al., 2019), and score/Langevin/diffusion-based approaches (Kadkhodaie & Simoncelli, 2020; Laumont et al., 2022; Ho et al., 2020). In this paper, we focus on the CNF approach. Compared to the other methods, CNFs yield rapid inference and require only simple, likelihood-based training. In a recent super-resolution (SR) contest (Lugmayr et al., 2022), a CNF (by Song et al. (2022)) won, beating all CGAN, CVAE, and diffusion-based competitors. + +Inspired by the success of CNFs in SR, we design the first CNF for accelerated multi-coil MRI. Previous applications of CNFs to MRI (Denker et al., 2021a) showed competitive results but were restricted to single-coil recovery of magnitude images. As the vast majority of modern MRI scanners capture multi-coil data, the extension to multi-coil, complex-valued data is crucial for real-world adoption. However, the order-of-magnitude increase in dimensionality makes this transition non-trivial. 
For this purpose, we propose a novel CNF that infers only the signal component in the nullspace of the measurement operator and combines its output with the measured data to generate complete images. Using fastMRI brain and knee data, we demonstrate that our approach outperforms existing posterior samplers based on CGANs (Adler & Oktem, 2018) and MRI-specific score/Langevin-based approaches (Jalal et al., 2021a; Chung & Ye, 2022a) in almost all accuracy metrics, while retaining fast inference and requiring minimal hyperparameter tuning. + +# 2. Background + +# 2.1. Measurement Model + +In MRI, measurements of the $D$ -pixel true image $\pmb{i}_{\mathrm{true}} \in \mathbb{C}^{D}$ are collected in the spatial Fourier domain, known as the "k-space." In a multi-coil system with $C$ coils, measurements from the $c$ th coil can be written as + +$$ +\boldsymbol {k} _ {c} = \boldsymbol {P F S} _ {c} \boldsymbol {i} _ {\text {t r u e}} + \boldsymbol {\epsilon} _ {c} \in \mathbb {C} ^ {M}, \tag {1} +$$ + +where $\pmb{P} \in \mathbb{R}^{M \times D}$ is a sampling matrix containing $M$ rows of the $D \times D$ identity matrix $\pmb{I}$ , $\pmb{F}$ is the $D \times D$ 2D unitary discrete Fourier transform (DFT) matrix, $S_{c} \in \mathbb{C}^{D \times D}$ is the coil-sensitivity map of the $c$ th coil, and $\epsilon_{c} \in \mathbb{C}^{M}$ is measurement noise. We will assume that $\{\pmb{S}_{c}\}_{c=1}^{C}$ have + +been obtained from ESPIRiT (Uecker et al., 2014), in which case $\sum_{c=1}^{C} S_{c}^{\mathsf{H}} S_{c} = I$ . In the case of single-coil MRI, $C = 1$ and $S_{1} = I$ . 
We now rewrite the model in terms of the "coil images" $\boldsymbol{x}_c \triangleq \boldsymbol{S}_c \boldsymbol{i}_{\mathrm{true}}$ and their corresponding "zero-filled" estimates $\boldsymbol{y}_c \triangleq \boldsymbol{F}^{\mathsf{H}} \boldsymbol{P}^{\top} \boldsymbol{k}_c$, and then stack all the coils together via $\boldsymbol{x}_{\mathrm{true}} \triangleq [\boldsymbol{x}_1^\top, \dots, \boldsymbol{x}_C^\top]^\top$ and $\boldsymbol{y} \triangleq [\boldsymbol{y}_1^\top, \dots, \boldsymbol{y}_C^\top]^\top$ to obtain

$$
\boldsymbol{y} = \boldsymbol{A}\boldsymbol{x}_{\mathrm{true}} + \boldsymbol{\varepsilon}, \tag{2}
$$

with $\boldsymbol{\varepsilon} = [(\boldsymbol{F}^{\mathsf{H}}\boldsymbol{P}^{\top}\boldsymbol{\epsilon}_{1})^{\top},\dots,(\boldsymbol{F}^{\mathsf{H}}\boldsymbol{P}^{\top}\boldsymbol{\epsilon}_{C})^{\top}]^{\top}$ and forward operator

$$
\boldsymbol{A} = \operatorname{blkdiag}\left\{\boldsymbol{F}^{\mathsf{H}}\boldsymbol{P}^{\top}\boldsymbol{P}\boldsymbol{F}, \dots, \boldsymbol{F}^{\mathsf{H}}\boldsymbol{P}^{\top}\boldsymbol{P}\boldsymbol{F}\right\}. \tag{3}
$$

To perform image recovery, one can first compute $\boldsymbol{y}$, then estimate $\widehat{\boldsymbol{x}}$ from $\boldsymbol{y}$, and finally either "coil-combine" to yield a complex-valued image estimate

$$
\widehat{\boldsymbol{i}} = \left[\boldsymbol{S}_{1}^{\mathsf{H}}, \dots, \boldsymbol{S}_{C}^{\mathsf{H}}\right]\widehat{\boldsymbol{x}} \tag{4}
$$

or perform root-sum-of-squares (RSS) reconstruction to obtain a magnitude-only image estimate

$$
\left|\widehat{\boldsymbol{i}}\right| = \sqrt{\sum_{c = 1}^{C}\left|\widehat{\boldsymbol{x}}_{c}\right|^{2}}. \tag{5}
$$

In the "fully sampled" case, $M = D$ and so $\boldsymbol{y} = \boldsymbol{x}_{\mathrm{true}} + \boldsymbol{\varepsilon}$. But fully sampled acquisition is very slow, and so we are interested in accelerated MRI, where one collects $M < D$ measurements per coil to save time.
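The measurement model and the zero-filled/RSS reconstructions can be simulated directly. Below is a toy numpy sketch (helper names are ours) with a boolean k-space mask standing in for $\boldsymbol{P}$ and constant sensitivity maps chosen so that $\sum_c \boldsymbol{S}_c^{\mathsf{H}}\boldsymbol{S}_c = \boldsymbol{I}$; with full sampling and no noise, the zero-filled images equal the coil images and RSS recovers $|\boldsymbol{i}_{\mathrm{true}}|$:

```python
import numpy as np

def coil_kspace(i_true, smaps, mask, noise_std=0.0, rng=None):
    """Simulate k_c = P F S_c i_true + eps_c per coil, with the sampling
    matrix P represented by a k-space mask."""
    if rng is None:
        rng = np.random.default_rng(0)
    out = []
    for S in smaps:
        k = np.fft.fft2(S * i_true, norm="ortho")  # unitary 2D DFT
        k = k + noise_std * (rng.standard_normal(k.shape)
                             + 1j * rng.standard_normal(k.shape))
        out.append(mask * k)                       # keep sampled locations only
    return out

def zero_filled(ks):
    """y_c = F^H P^T k_c: inverse DFT of the mask-zeroed k-space."""
    return [np.fft.ifft2(k, norm="ortho") for k in ks]

def rss(coils):
    """Root-sum-of-squares magnitude image, Eq. (5)."""
    return np.sqrt(sum(np.abs(x) ** 2 for x in coils))

# sanity check setup: 2 coils with constant maps, fully sampled, noiseless
rng = np.random.default_rng(0)
i_true = rng.standard_normal((8, 8)) + 1j * rng.standard_normal((8, 8))
smaps = [np.sqrt(0.3) * np.ones((8, 8)), np.sqrt(0.7) * np.ones((8, 8))]
mask = np.ones((8, 8))
y = zero_filled(coil_kspace(i_true, smaps, mask))
```

Subsampling is simulated by zeroing entries of `mask`, which makes the composite operator rank deficient exactly as described next.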
This gives an "acceleration factor" of $R \triangleq D / M$, but it makes $\boldsymbol{A}$ rank deficient. In this case, accurate recovery of $\boldsymbol{x}_{\mathrm{true}}$ requires the use of prior information about $\boldsymbol{x}_{\mathrm{true}}$, such as the knowledge that $\boldsymbol{x}_{\mathrm{true}}$ is a vector of MRI coil images.

# 2.2. Posterior Sampling

In the case of MRI, the posterior distribution that we would ultimately like to sample from is $p_{i|k}(\cdot|\boldsymbol{k})$, where $\boldsymbol{k} \triangleq [\boldsymbol{k}_1^\top, \ldots, \boldsymbol{k}_C^\top]^\top$. Equivalently, we could consider $p_{i|y}(\cdot|\boldsymbol{y})$, since $\boldsymbol{y}$ and $\boldsymbol{k}$ contain the same information. Another option is to sample from $p_{x|y}(\cdot|\boldsymbol{y})$ and then use (4) or (5) to combine the coil images into a single image. We take the latter approach.

For CNFs and CGANs, posterior sampling is accomplished by designing a neural network that maps samples from an easy-to-generate latent distribution (e.g., white Gaussian) to the target distribution (i.e., the distribution of $\boldsymbol{x}$ given $\boldsymbol{y}$, with density $p_{x|y}$). Once the network is trained, sample generation is extremely fast. For Langevin dynamics, an algorithm is run for hundreds or thousands of iterations to generate each sample, and each iteration involves calling a neural network. Consequently, the inference time is much longer than that of CNFs and CGANs.

# 2.3. Conditional Normalizing Flows

Normalizing flows (NFs) (Dinh et al., 2015; 2017; Kingma & Dhariwal, 2018; Papamakarios et al., 2021) have emerged as powerful generative models capable of modeling complex data distributions. A normalizing flow learns an invertible mapping between a target data distribution and a simple latent distribution, generally a Gaussian. More concretely, for a latent sample $\boldsymbol{z}$ drawn from the latent distribution $p_{z}$, the normalizing flow defines an invertible transformation $f_{\boldsymbol{\theta}}(\cdot): \mathbb{R}^{Q} \to \mathbb{R}^{Q}$.
This transformation is parameterized by $\boldsymbol{\theta}$, and $\boldsymbol{x} = f_{\boldsymbol{\theta}}(\boldsymbol{z})$ defines a sample in the target data domain. This mapping induces a distribution on the target data domain, whose density follows from the change-of-variables formula

$$
\widehat{p}_x(\boldsymbol{x}; \boldsymbol{\theta}) = p_z\!\left(f_{\boldsymbol{\theta}}^{-1}(\boldsymbol{x})\right) \left|\det\left(\frac{\partial f_{\boldsymbol{\theta}}^{-1}(\boldsymbol{x})}{\partial \boldsymbol{x}}\right)\right|, \tag{6}
$$

where $\det(\cdot)$ denotes the determinant. The goal of the normalizing flow is to approximate the underlying data distribution $p_x$ with $\widehat{p}_x(\cdot;\boldsymbol{\theta})$. Given a set of data samples $\{\boldsymbol{x}^{(i)}\}_{i=1}^{N}$, the parameters $\boldsymbol{\theta}$ can be fit using the maximum-likelihood loss

$$
L(\boldsymbol{\theta}) = \sum_{i=1}^{N} \ln \widehat{p}_x\!\left(\boldsymbol{x}^{(i)}; \boldsymbol{\theta}\right) \tag{7}
$$

$$
= \sum_{i=1}^{N} \ln p_z\!\left(f_{\boldsymbol{\theta}}^{-1}\big(\boldsymbol{x}^{(i)}\big)\right) + \ln \left|\det\left(\frac{\partial f_{\boldsymbol{\theta}}^{-1}\big(\boldsymbol{x}^{(i)}\big)}{\partial \boldsymbol{x}^{(i)}}\right)\right|. \tag{8}
$$

Once training is complete, samples from the target distribution can be rapidly generated by drawing samples from the latent distribution and passing them through the normalizing flow $f_{\boldsymbol{\theta}}$.

It is worth noting that maximizing $L(\boldsymbol{\theta})$ is equivalent to minimizing the Kullback-Leibler (KL) divergence between $\widehat{p}_x(\cdot; \boldsymbol{\theta})$ and $p_x$ (Papamakarios et al., 2021), which aligns with the goal of approximating $p_x$ with $\widehat{p}_x(\cdot; \boldsymbol{\theta})$. The maximum-likelihood loss provides stable training with minimal hyperparameter tuning and has been shown to be robust to mode collapse.
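As a minimal illustration of the change-of-variables formula (6), consider a hypothetical scalar flow $f_\theta(z) = az + b$ with a standard-Gaussian latent; the absolute-determinant factor (here $|1/a|$) is exactly what makes the induced density integrate to one.

```python
import numpy as np

# Scalar toy flow f_theta(z) = a*z + b with standard-Gaussian latent p_z.
# Eq. (6) then reads p_x(x) = p_z((x - b)/a) * |1/a|.
a, b = 2.0, 0.5                                   # hypothetical flow parameters

xs = np.linspace(-15.0, 16.0, 400001)             # fine grid over the data domain
zs = (xs - b) / a                                 # f_theta^{-1}(x)
p_x = np.exp(-0.5 * zs**2) / np.sqrt(2.0 * np.pi) * abs(1.0 / a)

# Trapezoid rule: the density integrates to 1 only because of the |det| factor.
h = xs[1] - xs[0]
integral = np.sum(p_x[1:] + p_x[:-1]) * h / 2.0
assert abs(integral - 1.0) < 1e-6
```

Dropping the $|1/a|$ term would make the "density" integrate to $|a|$ instead of one, which is why the log-determinant term appears in the training loss (8).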
Conditional normalizing flows (CNFs) (Ardizzone et al., 2021) generalize normalizing flows by adding a conditioning signal $\boldsymbol{y}$. With the CNF denoted as $h_{\boldsymbol{\theta}}(\cdot, \cdot): \mathbb{R}^Q \times \mathbb{R}^Q \to \mathbb{R}^Q$, the forward process from the latent domain to the data domain is given by $\boldsymbol{x} = h_{\boldsymbol{\theta}}(\boldsymbol{z}, \boldsymbol{y})$. For complex-valued, multi-coil MRI, we have $Q = 2CD$. The inclusion of $\boldsymbol{y}$ changes the CNF's objective to approximating the unknown posterior distribution $p_{x|y}(\cdot|\boldsymbol{y})$ with $\widehat{p}_{x|y}(\cdot|\boldsymbol{y}; \boldsymbol{\theta})$. As before, the change-of-variables formula implies the induced density

$$
\widehat{p}_{x|y}(\boldsymbol{x} \mid \boldsymbol{y}; \boldsymbol{\theta}) = p_z\!\left(h_{\boldsymbol{\theta}}^{-1}(\boldsymbol{x}, \boldsymbol{y})\right) \left|\det\left(\frac{\partial h_{\boldsymbol{\theta}}^{-1}(\boldsymbol{x}, \boldsymbol{y})}{\partial \boldsymbol{x}}\right)\right|, \tag{9}
$$

where $h_{\boldsymbol{\theta}}^{-1}$ refers to the inverse mapping of $h_{\boldsymbol{\theta}}$ with respect to its first argument.
Given a dataset $\{(\boldsymbol{x}^{(i)}, \boldsymbol{y}^{(i)})\}_{i=1}^{N}$, the maximum-likelihood loss can be used to optimize the parameters $\boldsymbol{\theta}$:

$$
L(\boldsymbol{\theta}) = \sum_{i=1}^{N} \ln \widehat{p}_{x|y}\!\left(\boldsymbol{x}^{(i)} \mid \boldsymbol{y}^{(i)}; \boldsymbol{\theta}\right) \tag{10}
$$

$$
= \sum_{i=1}^{N} \ln p_z\!\left(h_{\boldsymbol{\theta}}^{-1}\big(\boldsymbol{x}^{(i)}, \boldsymbol{y}^{(i)}\big)\right) + \ln \left|\det\left(\frac{\partial h_{\boldsymbol{\theta}}^{-1}\big(\boldsymbol{x}^{(i)}, \boldsymbol{y}^{(i)}\big)}{\partial \boldsymbol{x}^{(i)}}\right)\right|. \tag{11}
$$

CNFs have shown promising performance in solving inverse problems, such as super-resolution (Lugmayr et al., 2020; Kim & Son, 2021; Song et al., 2022), making them an exciting avenue of exploration for accelerated MRI. Denker et al. (2021a) developed a CNF for single-coil, magnitude-only knee images. This study showed promising initial results, but its limited scope did not demonstrate performance in the more realistic multi-coil, complex-valued domain. As this transition increases the dimensionality by an order of magnitude, non-trivial architectural changes are required. In this paper, we build on the latest advances in CNFs to create a method that is capable of generating high-quality posterior samples of multi-coil, complex-valued MRI images.

# 3. Method

Our CNF consists of two networks, a conditioning network $g_{\boldsymbol{\theta}}$ and a conditional flow model $h_{\boldsymbol{\theta}}$. The conditioning network takes the vector of zero-filled (ZF) coil images $\boldsymbol{y}$ as input and produces features that are used as conditioning information by the flow model $h_{\boldsymbol{\theta}}$. Aided by the conditioning information, $h_{\boldsymbol{\theta}}$ learns an invertible mapping between samples in the latent space and those in the image space. Using the notation of Sec.
2.3, our overall CNF takes the form

$$
\bar{h}_{\boldsymbol{\theta}}(\boldsymbol{z}, \boldsymbol{y}) \triangleq h_{\boldsymbol{\theta}}(\boldsymbol{z}, g_{\boldsymbol{\theta}}(\boldsymbol{y})). \tag{12}
$$

Recently, advancements of CNFs in the super-resolution literature have revealed useful insights for more general inverse problems. First, Lugmayr et al. (2020) suggested the use of a pretrained, state-of-the-art point-estimate network for the conditioning network $g_{\boldsymbol{\theta}}$. This network is then trained jointly with $h_{\boldsymbol{\theta}}$ using the loss in (11). This approach provides a functional initialization of $g_{\boldsymbol{\theta}}$ and allows $g_{\boldsymbol{\theta}}$ to learn to provide features that are useful for the maximum-likelihood training objective. We utilize a UNet from (Zbontar et al., 2018) for $g_{\boldsymbol{\theta}}$ since it has been shown to perform well in accelerated MRI. We first pre-train $g_{\boldsymbol{\theta}}$ for MRI recovery, and later we jointly train $g_{\boldsymbol{\theta}}$ and $h_{\boldsymbol{\theta}}$ together.

![](images/fab4df3487c3346fa1b4bf50d374b04e24b7713fbde1f5314b9ec0b3d0645283.jpg)
Figure 1. The architecture of our CNF. The conditioning network $g_{\theta}$ takes in multi-coil zero-filled image estimates $\mathbf{y}$ and outputs features used by the flow model $h_{\theta}$. The flow learns an invertible mapping between Gaussian random samples $\mathbf{z}^{(i)}$ and images $\mathbf{u}^{(i)}$ that are the projections of the training images $\mathbf{x}^{(i)}$ onto the non-measured subspace.

Song et al. (2022) demonstrated the benefits of using "frequency separation" when training a CNF for super-resolution. The authors argue that the low-resolution conditional image already contains sufficient information about the low-frequency components of the image, so the CNF can focus on recovering only the high-frequency information. The CNF output is then added to an upsampled version of the conditional image to yield an estimate of the full image.
We now generalize the frequency-separation idea to arbitrary linear models of the form $\boldsymbol{y} = \boldsymbol{A}\boldsymbol{x}_{\mathrm{true}} + \boldsymbol{\varepsilon}$ from (2) and apply the resulting procedure to MRI. Notice that (2) implies

$$
\boldsymbol{A}^{+}\boldsymbol{y} = \boldsymbol{A}^{+}\boldsymbol{A}\boldsymbol{x}_{\mathrm{true}} + \boldsymbol{A}^{+}\boldsymbol{\varepsilon}, \tag{13}
$$

where $(\cdot)^{+}$ denotes the pseudo-inverse. Here, $\boldsymbol{A}^{+}\boldsymbol{A}\boldsymbol{x}_{\mathrm{true}}$ is recognized as the projection of $\boldsymbol{x}_{\mathrm{true}}$ onto the row-space of $\boldsymbol{A}$, which we will refer to as the "measured space." Then

$$
\boldsymbol{u}_{\mathrm{true}} \triangleq (\boldsymbol{I} - \boldsymbol{A}^{+}\boldsymbol{A})\boldsymbol{x}_{\mathrm{true}} \tag{14}
$$

would be the projection of $\boldsymbol{x}_{\mathrm{true}}$ onto its orthogonal complement, which we refer to as the "nullspace." Assuming that the nullspace has dimension $> 0$, we propose to construct an estimate $\widehat{\boldsymbol{x}}$ of $\boldsymbol{x}_{\mathrm{true}}$ of the form

$$
\widehat{\boldsymbol{x}}(\boldsymbol{z}, \boldsymbol{y}) = (\boldsymbol{I} - \boldsymbol{A}^{+}\boldsymbol{A})\,\bar{h}_{\boldsymbol{\theta}}(\boldsymbol{z}, \boldsymbol{y}) + \boldsymbol{A}^{+}\boldsymbol{y}, \tag{15}
$$

where $\bar{h}_{\boldsymbol{\theta}}(\boldsymbol{z}, \boldsymbol{y})$ is our CNF-generated estimate of $\boldsymbol{u}_{\mathrm{true}}$ and the $(\boldsymbol{I} - \boldsymbol{A}^{+}\boldsymbol{A})$ in (15) strips off any part of $\bar{h}_{\boldsymbol{\theta}}(\boldsymbol{z}, \boldsymbol{y})$ that has leaked into the measured space. A similar approach was used in (Sønderby et al., 2017) for point estimation.
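For intuition, the decomposition (13)-(15) can be emulated with FFT masks, since for Cartesian MRI both $\boldsymbol{A}^{+}\boldsymbol{A}$ and $\boldsymbol{I} - \boldsymbol{A}^{+}\boldsymbol{A}$ reduce to k-space masking. Below is a toy 1-D sketch with a hypothetical random mask and a random stand-in for the CNF output.

```python
import numpy as np

# Toy sketch of the measured-space / nullspace split for one coil: with
# A = F^H P^T P F, the operators A^+A and I - A^+A become k-space masking
# by m and 1 - m, respectively.
rng = np.random.default_rng(1)
D = 32
m = (rng.random(D) < 0.5).astype(float)           # 0/1 sampling mask (hypothetical)

F  = lambda v: np.fft.fft(v,  norm="ortho")
FH = lambda v: np.fft.ifft(v, norm="ortho")

x_true = rng.standard_normal(D) + 1j * rng.standard_normal(D)
y      = FH(m * F(x_true))                        # zero-filled image, A x_true
u_true = FH((1 - m) * F(x_true))                  # nullspace component, Eq. (14)

h_out = rng.standard_normal(D) + 1j * rng.standard_normal(D)  # stand-in for CNF output
x_hat = FH((1 - m) * F(h_out)) + y                # data-consistent estimate, Eq. (15)

assert np.allclose(x_true, y + u_true)            # measured + nullspace parts
assert np.allclose(m * F(x_hat), m * F(x_true))   # A x_hat = y (noiseless)
```

The last assertion is the data-consistency property $\boldsymbol{A}\widehat{\boldsymbol{x}} = \boldsymbol{y}$: whatever the flow outputs, the measured k-space of $\widehat{\boldsymbol{x}}$ is left untouched.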
Given training data $\{(\boldsymbol{x}^{(i)}, \boldsymbol{y}^{(i)})\}_{i=1}^{N}$, the CNF $\bar{h}_{\boldsymbol{\theta}}(\cdot, \cdot)$ is trained to map code vectors $\boldsymbol{z}^{(i)} \sim p_z$ to the nullspace projections

$$
\boldsymbol{u}^{(i)} \triangleq (\boldsymbol{I} - \boldsymbol{A}^{+}\boldsymbol{A})\boldsymbol{x}^{(i)} \tag{16}
$$

using the measured data $\boldsymbol{y}^{(i)}$ as the conditional information. As a result of (15), the reconstructions $\widehat{\boldsymbol{x}}$ agree with the measurements $\boldsymbol{y}$ in that $\boldsymbol{A}\widehat{\boldsymbol{x}} = \boldsymbol{y}$. However, this also means that $\widehat{\boldsymbol{x}}$ inherits the noise $\boldsymbol{\varepsilon}$ corrupting $\boldsymbol{y}$, and so this data-consistency procedure is best used in the low-noise regime. In the presence of significant noise, the dual-decomposition approach (Chen & Davies, 2020) may be more appropriate.

In the accelerated MRI formulation (1)-(3), the matrix $\boldsymbol{A}$ is itself an orthogonal projection matrix, so that, in (15),

$$
\boldsymbol{I} - \boldsymbol{A}^{+}\boldsymbol{A} = \operatorname{blkdiag}\left\{\boldsymbol{F}^{\mathsf{H}}\widetilde{\boldsymbol{P}}^{\top}\widetilde{\boldsymbol{P}}\boldsymbol{F}, \dots, \boldsymbol{F}^{\mathsf{H}}\widetilde{\boldsymbol{P}}^{\top}\widetilde{\boldsymbol{P}}\boldsymbol{F}\right\}, \tag{17}
$$

where $\widetilde{\boldsymbol{P}} \in \mathbb{R}^{(D-M) \times D}$ is the sampling matrix for the non-measured k-space. Also, $\boldsymbol{y}$ is in the row-space of $\boldsymbol{A}$, so

$$
\boldsymbol{A}^{+}\boldsymbol{y} = \boldsymbol{y} \tag{18}
$$

in (15). Figure 1 illustrates the overall procedure, using "data consistency" to describe (15) and "nullspace projection" to describe (16). In Sec. 4.2, we quantitatively demonstrate the improvements gained from designing our CNF to estimate only the nullspace component.

# 3.1.
Architecture

The backbone of $g_{\boldsymbol{\theta}}$ is a UNet (Ronneberger et al., 2015) that mimics the design in (Zbontar et al., 2018), with 4 pooling layers and 128 output channels in the first convolution layer. The first layer was modified to accept complex-valued coil images: the inputs have $2C$ channels, where each of the $C$ coils contributes a real and an imaginary channel.

The outputs of the final feature layer of the UNet are processed by a feature-extraction network with $L$ convolution layers. Together, the feature-extraction network and the UNet make up our conditioning network $g_{\boldsymbol{\theta}}$. The output of each convolution layer is fed to the conditional coupling blocks of the corresponding layer in $h_{\boldsymbol{\theta}}$.

For the flow model $h_{\boldsymbol{\theta}}$, we adopt the multi-scale RealNVP (Dinh et al., 2017) architecture. This construction utilizes $L$ layers with $B$ flow steps in each layer. A flow step consists of an activation normalization (Kingma & Dhariwal, 2018), a fixed $1 \times 1$ orthogonal convolution (Ardizzone et al., 2019), and a conditional coupling block (Ardizzone et al., 2021). Each layer begins with a checkerboard downsampling (squeeze layer) (Dinh et al., 2017) and a transition step made up of an activation normalization and a $1 \times 1$ convolution. Layers end with a split operation that sends half of the channels directly to the output on the latent side. For all experiments, we use $L = 3$ and $B = 20$. The full architecture of $h_{\boldsymbol{\theta}}$ is specified in Fig. 1.

Although the code that accompanies (Denker et al., 2021a) gives a built-in mechanism to scale their flow architecture to accommodate an increased number of input and output channels, we find that this mechanism does not work well (see Sec. 4.2). Thus, in addition to incorporating nullspace learning, we redesign several aspects of the flow architecture and training.
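The conditional coupling block at the heart of each flow step can be sketched as follows; the dense matrices `W1` and `W2` are hypothetical stand-ins for the coupling block's convolutional subnetworks, which receive features from $g_\theta(\boldsymbol{y})$.

```python
import numpy as np

# Toy one-sided conditional affine coupling step: half the channels pass
# through unchanged; the other half is scaled/shifted by a function of the
# pass-through half and the conditioning features. Weights are hypothetical.
rng = np.random.default_rng(2)
d = 8                                              # channels per half
W1 = rng.standard_normal((d, 2 * d)) * 0.1         # stand-in "scale" network
W2 = rng.standard_normal((d, 2 * d)) * 0.1         # stand-in "shift" network

def coupling_forward(z, cond):
    z1, z2 = z[:d], z[d:]
    h = np.concatenate([z1, cond])
    s, t = np.tanh(W1 @ h), W2 @ h                 # log-scale and shift
    # The Jacobian is triangular, so log|det| of this step is simply s.sum().
    return np.concatenate([z1, z2 * np.exp(s) + t])

def coupling_inverse(x, cond):
    x1, x2 = x[:d], x[d:]
    h = np.concatenate([x1, cond])
    s, t = np.tanh(W1 @ h), W2 @ h
    return np.concatenate([x1, (x2 - t) * np.exp(-s)])

z = rng.standard_normal(2 * d)
cond = rng.standard_normal(d)                      # features from g_theta(y)
x = coupling_forward(z, cond)
assert np.allclose(coupling_inverse(x, cond), z)   # exact invertibility
```

The triangular Jacobian is what keeps the log-determinant in the NLL loss (11) cheap to evaluate, regardless of how complex the conditioning networks are.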
First, to prevent the number of flow parameters from growing unreasonably large, our flow uses fewer downsampling layers (3 vs 6) but more flow steps per downsampling layer (20 vs 5), and we utilize one-sided (instead of two-sided) affine coupling layers. Second, to connect the conditioning network to the flow, Denker et al. (2021a) used a separate CNN for each flow layer and adjusted its depth to match the flow-layer dimension. We use a single, larger CNN and feed its intermediate features to the flow layers with matched dimensions, further preventing an explosion in the number of parameters. Third, our conditioning network uses a large, pretrained UNet, whereas Denker et al. (2021a) used a smaller untrained UNet. With our modifications, we grow the conditional network more than the flow network, which allows the CNF to better handle the high dimensionality of complex-valued, multi-coil data.

# 3.2. Data

We apply our network to two datasets: the fastMRI knee and fastMRI brain datasets (Zbontar et al., 2018). For the knee data, we use the non-fat-suppressed subset, giving 17286 training and 3592 validation images. We compress the measurements to $C = 8$ complex-valued virtual coils using (Zhang et al., 2013) and crop the images to $320 \times 320$ pixels. The sampling mask is generated using the golden ratio offset (GRO) (Joshi et al., 2022) Cartesian sampling scheme with an acceleration rate $R = 4$ and an autocalibration signal (ACS) region of 13 pixels. We create the ZF coil-image vectors $\boldsymbol{y}$ by applying the mask and inverse Fourier transform to the fully sampled $\boldsymbol{k}_c$ given by the fastMRI dataset to obtain $\boldsymbol{y}_c = \boldsymbol{F}^{\mathsf{H}}\boldsymbol{P}^{\top}\boldsymbol{P}\boldsymbol{k}_c$ for all $c$, and then stack the coils to obtain $\boldsymbol{y} = [\boldsymbol{y}_1^{\top},\dots,\boldsymbol{y}_C^{\top}]^{\top}$.
We create the ground-truth coil-image vectors $\boldsymbol{x}_{\mathrm{true}}$ using the same procedure but without the mask, i.e., $\boldsymbol{x}_c = \boldsymbol{F}^{\mathsf{H}}\boldsymbol{k}_c$ and $\boldsymbol{x}_{\mathrm{true}} = [\boldsymbol{x}_1^{\top},\dots,\boldsymbol{x}_C^{\top}]^{\top}$.

With the brain data, we use the T2-weighted images and take the first 8 slices of all volumes with at least 8 coils. This provides 12224 training and 3352 validation images. The data is compressed to $C = 8$ virtual coils (Zhang et al., 2013) and cropped to $384 \times 384$ pixels. The GRO sampling scheme is again used with an acceleration rate $R = 4$ and a 32-pixel-wide ACS region. For both datasets, the coil-sensitivity maps are estimated from the ACS region using ESPIRiT (Uecker et al., 2014). All inputs to the network are normalized by the 95th percentile of the ZF magnitude images.

# 3.3. Training

For both datasets, we first train the UNet in $g_{\boldsymbol{\theta}}$ with an additional $1 \times 1$ convolution layer to get the desired $2C$ channels. We train the UNet to minimize the mean-squared error (MSE) from the nullspace-projected targets $\{\boldsymbol{u}^{(i)}\}_{i=1}^{N}$ for 50 epochs with batch size 8 and learning rate 0.003. Then, we remove the final $1 \times 1$ convolution and jointly train $g_{\boldsymbol{\theta}}$ and $h_{\boldsymbol{\theta}}$ for 100 epochs to minimize the negative log-likelihood (NLL) loss of the nullspace-projected targets. For the brain data, we use batch size 8 and learning rate 0.0003. For the knee data, we use batch size 16 with learning rate 0.0005. All experiments use the Adam optimizer (Kingma & Ba, 2015) with default parameters $\beta_1 = 0.9$ and $\beta_2 = 0.999$. The full training takes about 4 days on 4 Nvidia V100 GPUs.

# 3.4. Comparison Methods

We compare against other methods that are capable of generating posterior samples for accelerated MRI. For the fastMRI brain data, we present results for the CGAN from (Adler & Oktem, 2018) and the Langevin method from (Jalal et al., 2021a).
For the fastMRI knee data, we present results for the "Score" method from (Chung & Ye, 2022a) and the "sCNF" method from (Denker et al., 2021a).

For the CGAN, we utilize a UNet-based generator with 4 pooling layers and 128 output channels in the initial layer, and a 5-layer CNN for the discriminator. The generator takes $\boldsymbol{y}$ concatenated with a latent vector $\boldsymbol{z}$ as input. The model is trained with the default loss and hyperparameters from (Adler & Oktem, 2018) for 100 epochs with a learning rate of 0.001. For the Langevin method, we use the authors' implementation but with the GRO sampling mask described in Sec. 3.2.

![](images/53574374058362d89f5b15b78f9479e23c27a40c61d0f362c0e3842e93b8c35f.jpg)
Figure 2. Mean images and pixel-wise standard-deviation maps computed from 8 and 32 posterior samples for the brain images and knee datasets, respectively. The standard-deviation maps show which pixels have the greatest reconstruction uncertainty. The corresponding PSNR is shown on each reconstruction.

The Score method differs from the other methods in that it assumes the k-space measurements $\boldsymbol{k}_c$ are constructed from true coil images $\boldsymbol{x}_{\mathrm{true}}$ with magnitudes affinely normalized to the interval [0, 1] and phases normalized to [0, 1] radians. Although this normalization cannot be enforced on prospectively undersampled MRI data, Score fails without it. So, to evaluate Score, we normalize each $\boldsymbol{k}_c$ using knowledge of the ground-truth $\boldsymbol{x}_c$, run Score, and un-normalize its output $\widehat{\boldsymbol{x}}_c$ for comparison with the other methods. Since the Score paper (Chung & Ye, 2022a) used RSS combining to compute $\widehat{\boldsymbol{i}}$, we do the same. For the Score method, we use $T = 200$ iterations rather than the default value of $T = 2000$. This is because, when using posterior-sample averaging (see Sec. 3.5), the PSNR computed using 200 iterations is better than with 2000.
The sCNF method works only on single-coil magnitude data, and so we convert our multi-coil data to that domain in order to evaluate sCNF. To do this, we apply RSS (5) to the ZF coil images $\boldsymbol{y}$ and repeat the process for the true coil images $\boldsymbol{x}_{\mathrm{true}}$. Using those magnitude images, we train sCNF for 300 epochs with learning rate 0.0005 and batch size 32.

# 3.5. Evaluation

We report results for several different metrics, including peak signal-to-noise ratio (PSNR), structural-similarity index (SSIM) (Wang et al., 2004), Fréchet inception distance (FID) (Heusel et al., 2017), and conditional FID (cFID) (Soloveitchik et al., 2021). PSNR and SSIM were computed on the average of $P$ posterior samples $\{\widehat{\boldsymbol{i}}_p\}_{p=1}^{P}$, i.e.,

$$
\widehat{\boldsymbol{i}}_{(P)} \triangleq \frac{1}{P} \sum_{p=1}^{P} \widehat{\boldsymbol{i}}_p, \tag{19}
$$

to approximate the posterior mean, while FID and cFID were evaluated on individual posterior samples $\widehat{\boldsymbol{i}}_p$. By default, we compute all metrics using magnitude reconstructions $|\widehat{\boldsymbol{i}}|$ rather than the complex-valued reconstructions $\widehat{\boldsymbol{i}}$, in part because competitors like sCNF generate only magnitude reconstructions, but also because this is typical in the MRI literature (e.g., the fastMRI competition (Zbontar et al., 2018)). So, for example, PSNR is computed as

$$
\mathrm{PSNR} \triangleq 10 \log_{10}\left(\frac{D \max_d \left|[\boldsymbol{i}_{\mathrm{true}}]_d\right|^2}{\left\|\,|\widehat{\boldsymbol{i}}_{(P)}| - |\boldsymbol{i}_{\mathrm{true}}|\,\right\|_2^2}\right), \tag{20}
$$

where $[\cdot]_d$ extracts the $d$th pixel. For FID and cFID, we use the embeddings of VGG-16 (Simonyan & Zisserman, 2014), as (Kastryulin et al., 2022) found that this helped the metrics better correlate with the rankings of radiologists.
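A toy sketch of the evaluation in (19)-(20), with synthetic complex images standing in for fastMRI reconstructions:

```python
import numpy as np

# Average P posterior samples (Eq. (19)), then compute PSNR of the magnitude
# reconstruction against the ground-truth magnitude (Eq. (20)). Data are toy.
rng = np.random.default_rng(4)
D, P = 1024, 8
i_true = rng.standard_normal(D) + 1j * rng.standard_normal(D)
samples = i_true + 0.1 * (rng.standard_normal((P, D)) + 1j * rng.standard_normal((P, D)))

i_avg = samples.mean(axis=0)                                  # Eq. (19)
err = np.sum((np.abs(i_avg) - np.abs(i_true)) ** 2)
psnr = 10 * np.log10(D * np.max(np.abs(i_true)) ** 2 / err)   # Eq. (20)
assert psnr > 20.0                                            # noise std 0.1 << signal
```

Note that the magnitudes are taken before differencing, which is what distinguishes (20) from the "complex PSNR" used later in Sec. 4.1.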
For the brain data, we compute all metrics on 72 random test images in order to limit the Langevin image-generation time to 4 days. We generate complex-valued images using the coil-combining method in (4) and use $P = 32$ posterior samples to calculate cFID$^1$, FID$^1$, PSNR, and SSIM. (For the reference statistics of FID, we use the entire training dataset.) Because FID and cFID are biased by small sample sizes, we also compute FID$^2$ and cFID$^2$ with 2484 test samples and $P = 8$ for our method and the CGAN.

With the knee data, we follow a similar evaluation procedure except that, to comply with the evaluation steps of Score, we generate magnitude-only signals using the root-sum-of-squares (RSS) combining from (5). Also, we computed metrics on 72 randomly selected slices in order to bound the image-generation time of Score to 6 days with $P = 8$. We use $P = 8$ for all metrics, but for FID$^2$ and cFID$^2$, we use 2188 test samples.

When computing inference time for all methods, we use a single Nvidia V100 GPU with 32 GB of memory and evaluate the time required to generate one posterior sample.

# 4. Results

Table 1 reports the quantitative metrics for the knee dataset. It shows that our method outperforms sCNF by a significant margin in all metrics except inference time. By using information from multiple coils and a more advanced architecture, our method shows the true competitive potential of CNFs in realistic accelerated MR imaging.

| Model | PSNR (dB)↑ | SSIM↑ | FID$^1$↓ | FID$^2$↓ | cFID$^1$↓ | cFID$^2$↓ | Time |
|---|---|---|---|---|---|---|---|
| Score | 34.15 ± 0.19 | 0.8764 ± 0.0036 | 4.49 | – | 4.49 | – | 15 min |
| sCNF | 32.93 ± 0.17 | 0.8494 ± 0.0047 | 7.32 | 5.78 | 8.49 | 6.51 | 66 ms |
| Ours | 35.23 ± 0.22 | 0.8888 ± 0.0046 | 4.68 | 2.55 | 3.96 | 2.44 | 108 ms |

Table 1. Average performance on non-fat-suppressed fastMRI knee data, with standard error reported after the ±. PSNR, SSIM, FID$^1$, and cFID$^1$ are computed for 72 test images and $P = 8$ posterior samples. FID$^2$ and cFID$^2$ are computed for 2188 test samples and $P = 8$ posterior samples. Time is the time to generate one posterior sample.

Table 1 also shows that our method surpasses Score in all metrics except FID$^1$, even though Score benefited from impractical ground-truth normalization. Compared to Score, our method generated posterior samples $8000\times$ faster. Furthermore, our method (and sCNF) will see a speedup when multiple samples are generated because the conditioning network $g_{\boldsymbol{\theta}}$ needs to be evaluated only once per $P$ generated samples for a given $\boldsymbol{y}$. For example, with the knee data, we are able to generate $P = 32$ samples in 1.41 seconds, corresponding to 44 milliseconds per sample, which is a $2.5\times$ speedup over the value reported in Table 1.

Table 2 reports the quantitative results for the brain dataset. The table shows that we outperform the Langevin and CGAN methods in all benchmarks except inference time. While our method is a bit slower than the CGAN, it is orders of magnitude faster than the Langevin approach.

| Model | PSNR (dB)↑ | SSIM↑ | FID$^1$↓ | FID$^2$↓ | cFID$^1$↓ | cFID$^2$↓ | Time |
|---|---|---|---|---|---|---|---|
| Langevin | 37.88 ± 0.41 | 0.9042 ± 0.0062 | 6.12 | – | 5.29 | – | 14 min |
| CGAN | 37.28 ± 0.19 | 0.9413 ± 0.0031 | 5.38 | 4.06 | 6.41 | 4.28 | 112 ms |
| Ours | 38.85 ± 0.23 | 0.9495 ± 0.0012 | 4.13 | 2.37 | 4.15 | 2.44 | 177 ms |

Table 2. Average performance on non-fat-suppressed fastMRI brain data, with standard error reported after the ±. PSNR, SSIM, FID$^1$, and cFID$^1$ are computed for 72 test images and $P = 32$ posterior samples. FID$^2$ and cFID$^2$ are computed using 2484 test samples and $P = 8$. Time is the time to generate one posterior sample.

We show the mean images and standard-deviation maps for the fastMRI knee and brain experiments in Fig. 2. For the knee data, our method captures texture more accurately than the sCNF method and provides a sharper representation than the Score method. All of the brain methods provide a visually accurate representation of the ground truth, but the Langevin method provides a more diffuse variance map, with energy spread throughout the image.

![](images/a367693ae773cbc0a6419e62f70638494a8ab8687d94d4cb1b31bcb9f897cc62.jpg)
Figure 3. Examples of posterior samples and standard-deviation maps for the knee data. The samples show important structural variations. This demonstrates the advantages of generating multiple reconstructions and computing a pixel-wise standard-deviation map.

In Fig. 3, we plot multiple posterior samples, along with zoomed-in regions, to illustrate the changes across independently drawn samples for each method. The standard-deviation maps are generated using $P = 8$ posterior samples, three of which are shown. From the zoomed-in regions, it can be seen that several samples are consistent with the ground truth while others are not (although they may be consistent with the measured data). Regions of high posterior variation can be flagged from visual inspection of the standard-deviation map and further investigated through viewing multiple posterior samples for improved clinical diagnoses.

Our method presents observable, realistic variations of small anatomical features in the zoomed-in regions. The variations are also registered in the standard-deviation map. Both the posterior samples and the standard-deviation map could be used by clinicians to assess their findings. Comparatively, our method demonstrates variation that is spread across the entire image, while in the Score method, the variation is mostly localized to small regions. Since it is difficult to say which standard-deviation map is more useful or correct, the interpretation of these maps could be an interesting direction for future work. The sCNF also demonstrates variation, but it is mostly driven by residual aliasing artifacts.

# 4.1. PSNR Gain versus $P$

It is well known that the minimum-MSE (MMSE) estimate of $\boldsymbol{i}$ from $\boldsymbol{y}$ equals the conditional mean $\mathrm{E}\{\boldsymbol{i}|\boldsymbol{y}\}$, i.e., the mean of the posterior distribution $p_{i|y}(\cdot|\boldsymbol{y})$. Thus, one way to approximate the MMSE estimate is to generate many samples from the posterior distribution and average them, as in (19). Bendel et al.
(2022) showed that the MSE

$$
\mathcal{E}_P \triangleq \mathrm{E}\left[\left\|\widehat{\boldsymbol{i}}_{(P)} - \boldsymbol{i}_{\mathrm{true}}\right\|_2^2 \,\middle|\, \boldsymbol{y}\right] \tag{21}
$$

of the $P$-posterior-sample average $\widehat{\boldsymbol{i}}_{(P)}$ obeys $\mathcal{E}_1 / \mathcal{E}_P = 2P/(P+1)$. So, for example, the SNR increases by a factor of two (i.e., 3 dB) as $P$ grows from 1 to $\infty$. The same thing should happen for PSNR, as long as the PSNR definition is consistent with (21). For positive signals (i.e., magnitude images), the PSNR definition from (20) is consistent with (21), but for complex signals we must use the "complex PSNR"

$$
\mathrm{cPSNR} \triangleq 10 \log_{10}\left(\frac{D \max_d \left|[\boldsymbol{i}_{\mathrm{true}}]_d\right|^2}{\left\|\widehat{\boldsymbol{i}}_{(P)} - \boldsymbol{i}_{\mathrm{true}}\right\|_2^2}\right). \tag{22}
$$

As RSS combining provides only a magnitude estimate, we compute the coil-combined estimate for our method and Score to evaluate cPSNR behavior on the knee dataset.

![](images/8401e8986ff68ea6121598d2a03abc61eac46d68de6e0fc5d30b2a4358c834ba.jpg)

![](images/cea1483d3379f41ce51a51a58d24d831013a3d1cc653188e08d019ba75fb80bb.jpg)

![](images/c80d529ea0db3bf231fb57115de48674124048df40363bf54a837be30be8442e.jpg)

![](images/611a272a8849dcc82a8f61da41025964b6b78f52d0bf84518a6b8b0534c7b9d8.jpg)
Figure 4. The gain in (magnitude) PSNR and complex PSNR of the $P$-sample mean estimate $\widehat{\boldsymbol{i}}_{(P)}$ versus $P$, for both brain and knee data. Note the $\approx 3$ dB increase as $P$ grows from 1 to infinity.

One may then wonder whether a given approximate posterior sampler has a PSNR gain versus $P$ that matches the theory. In Fig. 4, we answer this question by plotting the PSNR gain and the cPSNR gain versus $P \in \{1,2,4,8,16,32\}$ for the various methods under test (averaged over all 72 test samples).
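The $\mathcal{E}_1/\mathcal{E}_P = 2P/(P+1)$ law is easy to verify with a scalar Monte-Carlo sketch in which the "truth" and the reconstructions are i.i.d. draws from the same Gaussian posterior (a toy model, not MRI data):

```python
import numpy as np

# Monte-Carlo check of E_1/E_P = 2P/(P+1): when the truth and the posterior
# samples are i.i.d. N(0,1), E_1 ~ 2 and E_P ~ 1 + 1/P.
rng = np.random.default_rng(5)
T, P = 200_000, 8

truth = rng.standard_normal(T)                     # i_true drawn from the posterior
samp1 = rng.standard_normal(T)                     # single posterior sample
sampP = rng.standard_normal((T, P)).mean(axis=1)   # P-sample posterior average

E1 = np.mean((samp1 - truth) ** 2)                 # approx 2
EP = np.mean((sampP - truth) ** 2)                 # approx 1 + 1/P
ratio = E1 / EP
assert abs(ratio - 2 * P / (P + 1)) < 0.05         # 2P/(P+1) = 16/9 for P = 8
```

This is the theoretical ratio against which the measured gain curves in Fig. 4 are compared.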
There we see that our method's cPSNR curve matches the theoretical curve well for both brain and knee data. As expected, our (magnitude) PSNR curve does not match the theoretical curve. The cPSNR curves of the Score and CGAN methods fall short of the theoretical curve by + +
ModelPSNR (dB)↑SSIM↑FID2↓cFID2↓
(Denker et al., 2021a)17.61 ± 0.200.6665 ± 0.007216.0216.68
+ Data Consistency27.27 ± 0.210.7447 ± 0.006116.9218.56
+ Architectural Changes33.87 ± 0.230.8715 ± 0.00494.484.50
+ Nullspace Learning35.23 ± 0.220.8888 ± 0.00462.552.44
+ +Table 3. Ablation Study: Performance on non-fat-suppressed fastMRI knee data, with standard error reported after the $\pm$ . Each line adds a new contribution to the model of the previous line. Metrics are computed as described in Sec. 3.5 + +![](images/18652740898464339030523853ec4ec0431231738a802f34cc9383b7ccaf5b75.jpg) +Figure 5. Examples of a ground-truth image, one posterior sample, an average of $P = 8$ posterior samples, and a MAP estimate. The log posterior density in units of bits-per-dimension is shown in the bottom right corner of each image. + +a large margin, but interestingly, the Langevin method's cPSNR curve matches ours almost perfectly. sCNF's PSNR gain curve matches the theoretical one almost perfectly, which provides further empirical evidence that CNF methods accurately sample from the posterior distribution. + +# 4.2. Ablation Study + +To evaluate the impact of our contributions to CNF architecture and training design, we perform an ablation study using the fastMRI knee dataset. We start with the baseline model in (Denker et al., 2021a), modified to take in 16 channels instead of 1, and scale it up using the built-in mechanism in the author's code. We train this model for 300 epochs with batch size 32 and learning rate 0.0001 to minimize the NLL of the multicoil targets $\{\pmb{x}^{(i)}\}$ , since higher learning rates were numerically unstable. Table 3 shows what happens when we add each of our contributions. First, we add data consistency (15) to the evaluation of the baseline. We then add the architectural changes described in Sec. 3.1, and finally we add nullspace learning to arrive at our proposed method. From Table 3, it can be seen that each of our design contributions yielded a significant boost in performance, and that nullspace learning was a critical ingredient in our outperforming the Score method in Table 1. For this ablation study, all models were trained following the procedure outlined in Sec. 
3.3 (except for the learning rate of the baseline).

# 4.3. Maximum a Posteriori (MAP) Estimation

Because CNFs can evaluate the posterior density of a signal hypothesis (recall (9)), they can be used for maximum a posteriori (MAP) estimation, unlike CGANs.

Due to our data-consistency step (15), we find the MAP estimate of $\boldsymbol{x}$ using

$$
\widehat{\boldsymbol{x}}_{\mathrm{MAP}} = \widehat{\boldsymbol{u}}_{\mathrm{MAP}} + \boldsymbol{A}^{+}\boldsymbol{y} \tag{23}
$$

$$
\widehat{\boldsymbol{u}}_{\mathrm{MAP}} = \arg\max_{\boldsymbol{u} \in \operatorname{null}(\boldsymbol{A})} \ln \widehat{p}_{u|y}(\boldsymbol{u} \mid \boldsymbol{y}). \tag{24}
$$

Note that the CNF output $\boldsymbol{u}$ is constrained to the nullspace of $\boldsymbol{A}$. From (17), this nullspace is spanned by the columns of

$$
\boldsymbol{W} \triangleq \operatorname{blkdiag}\left\{\boldsymbol{F}^{\mathsf{H}}\widetilde{\boldsymbol{P}}^{\top}, \dots, \boldsymbol{F}^{\mathsf{H}}\widetilde{\boldsymbol{P}}^{\top}\right\}, \tag{25}
$$

which are orthonormal, and so $\widehat{\boldsymbol{u}}_{\mathrm{MAP}} = \boldsymbol{W}\widetilde{\boldsymbol{k}}_{\mathrm{MAP}}$ with

$$
\widetilde{\boldsymbol{k}}_{\mathrm{MAP}} = \arg\max_{\widetilde{\boldsymbol{k}}} \ln \widehat{p}_{u|y}(\boldsymbol{W}\widetilde{\boldsymbol{k}} \mid \boldsymbol{y}; \boldsymbol{\theta}) \tag{26}
$$

$$
= \arg\max_{\widetilde{\boldsymbol{k}}} \left[ \ln p_z\!\left(\bar{h}_{\boldsymbol{\theta}}^{-1}\big(\boldsymbol{W}\widetilde{\boldsymbol{k}}, \boldsymbol{y}\big)\right) + \ln \left|\det\left(\frac{\partial \bar{h}_{\boldsymbol{\theta}}^{-1}(\widetilde{\boldsymbol{u}}, \boldsymbol{y})}{\partial \widetilde{\boldsymbol{u}}}\,\bigg|_{\widetilde{\boldsymbol{u}} = \boldsymbol{W}\widetilde{\boldsymbol{k}}}\right)\right| \right]. \tag{27}
$$

For this maximization, we use the Adam optimizer with 5000 iterations and a learning rate of $1 \times 10^{-8}$. Above, $\widetilde{\boldsymbol{k}}$ can be recognized as the unmeasured k-space samples.

In Figure 5, we show an example of a MAP estimate along with the ground-truth image, one sample from the posterior, a $P = 8$ posterior-sample average, and their corresponding log-posterior-density values. As expected, the MAP estimate has a higher log-posterior density than the other estimates. Visually, the MAP estimate is slightly sharper than the sample average but contains less texture detail than the single posterior sample.

# 5. Conclusion

In this work, we present the first conditional normalizing flow for posterior sample generation in multi-coil accelerated MRI. To do this, we designed a novel conditional normalizing flow (CNF) that infers the signal component in the measurement operator's nullspace, whose outputs are later combined with information from the measured space. In experiments with fastMRI brain and knee data, we demonstrate improvements over existing posterior samplers for MRI. Compared to score/Langevin-based approaches, our inference time is four orders of magnitude faster. We also illustrate how the posterior samples can be used to quantify uncertainty in MR imaging. This provides radiologists with additional tools to enhance the robustness of clinical diagnoses. We hope this work motivates additional exploration of posterior sampling for accelerated MRI.

# Acknowledgements

This work was supported in part by the National Institutes of Health under Grant R01-EB029957.

# References

Abdar, M., Pourpanah, F., Hussain, S., Rezazadegan, D., Liu, L., Ghavamzadeh, M., Fieguth, P., Cao, X., Khosravi, A., Acharya, U. R., et al. A review of uncertainty quantification in deep learning: Techniques, applications and challenges. Information Fusion, 76:243-297, 2021.

Adler, J. and Oktem, O. Deep Bayesian inversion.
arXiv:1811.05910, 2018. +Ahmad, R., Bouman, C. A., Buzzard, G. T., Chan, S., Liu, S., Reehorst, E. T., and Schniter, P. Plug-and-play methods for magnetic resonance imaging. IEEE Signal Process. Mag., 37(1):105-116, March 2020. +Ardizzone, L., Bungert, T., Draxler, F., Köthe, U., Kruse, J., Schmier, R., and Sorrenson, P. Framework for Easily Invertible Architectures (FrEIA). https://github.com/vislearn/FrEIA, 2018. Accessed: 2022-11-05. +Ardizzone, L., Lüth, C., Kruse, J., Rother, C., and Köthe, U. Guided image generation with conditional invertible neural networks. arXiv:1907.02392, 2019. +Ardizzone, L., Kruse, J., Lüth, C., Bracher, N., Rother, C., and Köthe, U. Conditional invertible neural networks for diverse image-to-image translation. arXiv:2105.02104, 2021. +Bendel, M., Ahmad, R., and Schniter, P. A regularized conditional GAN for posterior sampling in inverse problems. arXiv:2210.13389, 2022. +Bora, A., Jalal, A., Price, E., and Dimakis, A. G. Compressed sensing using generative models. In Proc. Int. Conf. Mach. Learn., pp. 537-546, 2017. +Chang, C.-H., Creager, E., Goldenberg, A., and Duvenaud, D. Explaining image classifiers by counterfactual generation. In Proc. Int. Conf. on Learn. Rep., 2019. +Chen, D. and Davies, M. E. Deep decomposition learning for inverse imaging problems. In Proc. European Conf. Comp. Vision, pp. 510-526, 2020. +Chung, H. and Ye, J. C. Score-based diffusion models for accelerated MRI. Med. Image Analysis, 80:102479, 2022a. +Chung, H. and Ye, J. C. score-MRI. https://github.com/HJ-harry/score-MRI, 2022b. Accessed: 2022-12-15. +Denker, A., Schmidt, M., Leuschner, J., and Maass, P. Conditional invertible neural networks for medical imaging. J. Imaging, 7(11):243, 2021a. + +Denker, A., Schmidt, M., Leuschner, J., and Maass, P. cinn-for-imaging. https://github.com/jleuschn/cinn_for_imaging, 2021b. Accessed: 2022-10-08. +Dinh, L., Krueger, D., and Bengio, Y. NICE: Non-linear independent components estimation. In Proc. Int. Conf.
on Learn. Rep. Workshops, 2015. +Dinh, L., Sohl-Dickstein, J., and Bengio, S. Density estimation using Real NVP. In Proc. Int. Conf. on Learn. Rep., 2017. +Edupuganti, V., Mardani, M., Vasanawala, S., and Pauly, J. Uncertainty quantification in deep MRI reconstruction. IEEE Trans. Med. Imag., 40(1):239-250, January 2021. +Eo, T., Jun, Y., Kim, T., Jang, J., Lee, H.-J., and Hwang, D. KIKI-net: Cross-domain convolutional neural networks for reconstructing undersampled magnetic resonance images. Magn. Reson. Med., 80(5):2188-2201, 2018. +Falcon, W. et al. PyTorch Lightning, 2019. URL https://github.com/PyTorchLightning/pytorch-lightning. +Griswold, M. A., Jakob, P. M., Heidemann, R. M., Nittka, M., Jellus, V., Wang, J., Kiefer, B., and Haase, A. Generalized autocalibrating partially parallel acquisitions (GRAPPA). Magn. Reson. Med., 47(6):1202-1210, 2002. +Hammernik, K., Klatzer, T., Kobler, E., Recht, M. P., Sodickson, D. K., Pock, T., and Knoll, F. Learning a variational network for reconstruction of accelerated MRI data. Magn. Reson. Med., 79(6):3055-3071, 2018. +Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., and Hochreiter, S. GANs trained by a two time-scale update rule converge to a local Nash equilibrium. In Proc. Neural Inf. Process. Syst. Conf., volume 30, 2017. +Ho, J., Jain, A., and Abbeel, P. Denoising diffusion probabilistic models. In Proc. Neural Inf. Process. Syst. Conf., volume 33, pp. 6840-6851, 2020. +Isola, P., Zhu, J.-Y., Zhou, T., and Efros, A. A. Image-to-image translation with conditional adversarial networks. In Proc. IEEE Conf. Comp. Vision Pattern Recog., pp. 1125-1134, 2017. +Jalal, A., Arvinte, M., Daras, G., Price, E., Dimakis, A., and Tamir, J. Robust compressed sensing MRI with deep generative priors. In Proc. Neural Inf. Process. Syst. Conf., 2021a. +Jalal, A., Arvinte, M., Daras, G., Price, E., Dimakis, A., and Tamir, J. csgm-mri-langevin. https://github.com/utcsilab/csgm-mri-langevin, 2021b. Accessed: 2021-12-05.
+ +Joshi, M., Pruitt, A., Chen, C., Liu, Y., and Ahmad, R. Technical report (v1.0)-pseudo-random Cartesian sampling for dynamic MRI. arXiv:2206.03630, 2022. +Kadkhodaie, Z. and Simoncelli, E. P. Solving linear inverse problems using the prior implicit in a denoiser. arXiv:2007.13640, 2020. +Kastryulin, S., Zakirov, J., Pezzotti, N., and Dylov, D. V. Image quality assessment for magnetic resonance imaging. arXiv:2203.07809, 2022. +Kim, Y. and Son, D. Noise conditional flow model for learning the super-resolution space. arXiv:1606.02838, 2021. +Kingma, D. and Dhariwal, P. Glow: Generative flow with invertible 1x1 convolutions. In Proc. Neural Inf. Process. Syst. Conf., pp. 10236-10245, 2018. +Kingma, D. P. and Ba, J. Adam: A method for stochastic optimization. In Proc. Int. Conf. on Learn. Rep., 2015. +Knoll, F., Hammernik, K., Zhang, C., Moeller, S., Pock, T., Sodickson, D. K., and Akcakaya, M. Deep-learning methods for parallel magnetic resonance imaging reconstruction: A survey of the current approaches, trends, and issues. IEEE Signal Process. Mag., 37(1):128-140, January 2020. +Laumont, R., Bortoli, V. D., Almansa, A., Delon, J., Durmus, A., and Pereyra, M. Bayesian imaging using plug & play priors: When Langevin meets Tweedie. SIAM J. Imag. Sci., 15(2):701-737, 2022. +Lugmayr, A., Danelljan, M., Van Gool, L., and Timofte, R. SRFlow: Learning the super-resolution space with normalizing flow. In Proc. European Conf. Comp. Vision, 2020. +Lugmayr, A., Danelljan, M., Timofte, R., Kim, K.-w., Kim, Y., Lee, J.-y., Li, Z., Pan, J., Shim, D., Song, K.-U., et al. NTIRE 2022 challenge on learning the super-resolution space. In Proc. IEEE Conf. Comp. Vision Pattern Recog., pp. 786-797, 2022. +Lustig, M., Donoho, D., and Pauly, J. M. Sparse MRI: The application of compressed sensing for rapid MR imaging. Magn. Reson. Med., 58(6):1182-1195, 2007. +Ong, F. and Lustig, M. SigPy: A python package for high performance iterative reconstruction. In Proc. Annu.
Meeting ISMRM, volume 4819, 2019. +Papamakarios, G., Nalisnick, E. T., Rezende, D. J., Mohamed, S., and Lakshminarayanan, B. Normalizing flows for probabilistic modeling and inference. J. Mach. Learn. Res., 22(57):1-64, 2021. + +Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., Desmaison, A., Kopf, A., Yang, E., DeVito, Z., Raison, M., Tejani, A., Chilamkurthy, S., Steiner, B., Fang, L., Bai, J., and Chintala, S. PyTorch: An imperative style, high-performance deep learning library. In Proc. Neural Inf. Process. Syst. Conf., pp. 8024-8035, 2019. +Pruessmann, K. P., Weiger, M., Scheidegger, M. B., and Boesiger, P. SENSE: Sensitivity encoding for fast MRI. Magn. Reson. Med., 42(5):952-962, 1999. +Repetti, A., Pereyra, M., and Wiaux, Y. Scalable Bayesian uncertainty quantification in imaging inverse problems via convex optimization. SIAM J. Imag. Sci., 12(1):87-118, 2019. +Ronneberger, O., Fischer, P., and Brox, T. U-Net: Convolutional networks for biomedical image segmentation. In Proc. Intl. Conf. Med. Image Comput. Comput. Assist. Intervent., pp. 234-241, 2015. +Sanchez, T., Krawczuk, I., Sun, Z., and Cevher, V. Uncertainty-driven adaptive sampling via GANs. In Proc. Neural Inf. Process. Syst. Workshop, 2020. +Simonyan, K. and Zisserman, A. Very deep convolutional networks for large-scale image recognition. arXiv:1409.1556, 2014. +Soloveitchik, M., Diskin, T., Morin, E., and Wiesel, A. Conditional Frechet inception distance. arXiv:2103.11521, 2021. +Sønderby, C. K., Caballero, J., Theis, L., Shi, W., and Huszár, F. Amortised MAP inference for image super-resolution. In Proc. Int. Conf. on Learn. Rep., 2017. +Song, K.-U., Shim, D., Kim, K.-w., Lee, J.-y., and Kim, Y. FS-NCSR: Increasing diversity of the super-resolution space via frequency separation and noise-conditioned normalizing flow. In Proc. IEEE Conf. Comp. Vision Pattern Recog. Workshop, pp. 968-977, June 2022.
+Sriram, A., Zbontar, J., Murrell, T., Defazio, A., Zitnick, C. L., Yakubova, N., Knoll, F., and Johnson, P. End-to-end variational networks for accelerated MRI reconstruction. In Proc. Intl. Conf. Med. Image Comput. Comput. Assist. Intervent., pp. 64-73, 2020. +Tonolini, F., Radford, J., Turpin, A., Faccio, D., and Murray-Smith, R. Variational inference for computational imaging inverse problems. J. Mach. Learn. Res., 21(179):1-46, 2020. +Uecker, M., Lai, P., Murphy, M. J., Virtue, P., Elad, M., Pauly, J. M., Vasanawala, S. S., and Lustig, M. ESPIRiT-an eigenvalue approach to autocalibrating parallel MRI: Where SENSE meets GRAPPA. Magn. Reson. Med., 71(3):990-1001, 2014. +Wang, Z., Bovik, A. C., Sheikh, H. R., and Simoncelli, E. P. Image quality assessment: From error visibility to structural similarity. IEEE Trans. Image Process., 13(4):600-612, Apr. 2004. +Winkler, C., Worrall, D., Hoogeboom, E., and Welling, M. Learning likelihoods with conditional normalizing flows. arXiv:1912.00042, 2019. +Zbontar, J., Knoll, F., Sriram, A., Muckley, M. J., Bruno, M., Defazio, A., Parente, M., Geras, K. J., Katsnelson, J., Chandarana, H., Zhang, Z., Drozdzal, M., Romero, A., Rabbat, M., Vincent, P., Pinkerton, J., Wang, D., Yakubova, N., Owens, E., Zitnick, C. L., Recht, M. P., Sodickson, D. K., and Lui, Y. W. fastMRI: An open dataset and benchmarks for accelerated MRI. arXiv:1811.08839, 2018. +Zhang, T., Pauly, J. M., Vasanawala, S. S., and Lustig, M. Coil compression for accelerated imaging with Cartesian sampling. Magn. Reson. Med., 69(2):571-582, 2013. + +# A. Accelerated MRI Simulation Procedure + +We outline the procedure for simulating the accelerated MRI problem. The fastMRI datasets provide the fully sampled multi-coil k-space, i.e., $\{\pmb{k}_c\}_{c=1}^C$ with $M = D$ .
To obtain the ground truth coil-images $\{\pmb{x}_c\}_{c=1}^C$ , we take the inverse Fourier transform of the fully sampled k-space measurement, i.e., $\pmb{x}_c = \pmb{F}^\mathsf{H}\pmb{k}_c$ , wherein we assume that the noise $\epsilon_c$ in (1) is negligible. To obtain the zero-filled images $\pmb{y}_c$ , we take the inverse Fourier transform after masking the fully-sampled k-space measurement $\pmb{k}_c$ , i.e., $\pmb{y}_c = \pmb{F}^\mathsf{H}\pmb{P}^\top \pmb{P}\pmb{k}_c$ . This procedure is illustrated in Fig. 6. In real-world accelerated MRI, the data acquisition process would collect masked k-space $\pmb{P}\pmb{k}_c$ directly. + +![](images/eb3969344556faa806a9eb1c54f964671a5e1095589df13166f1c1682a155671.jpg) +Figure 6. A visual illustration of simulating accelerated MRI. Given the fully sampled k-space $\pmb{k}_c$ highlighted in blue, we obtain the ground truth $\pmb{x}_c$ by applying the inverse Fourier transform $\pmb{F}^{\mathsf{H}}$ . The zero-filled image $\pmb{y}_c$ is acquired by applying the sampling mask $\pmb{P}^{\top}\pmb{P}$ to the fully sampled $\pmb{k}_c$ and then taking the inverse Fourier transform $\pmb{F}^{\mathsf{H}}$ . + +# B. Implementation Details + +For our machine learning framework, we use PyTorch (Paszke et al., 2019) and PyTorch Lightning (Falcon et al., 2019). To implement the components of the CNF, we use the Framework for Easily Invertible Architectures (FrEIA) (Ardizzone et al., 2018). For the Score, sCNF, and Langevin methods, we utilize the authors' implementations at (Chung & Ye, 2022b), (Denker et al., 2021b), and (Jalal et al., 2021b), respectively. ESPIRiT coil-estimation and coil-combining are implemented using the SigPy package (Ong & Lustig, 2019). + +# C. Brain Posterior Samples + +![](images/9225fcbf7f9d238fa5895ab73ab7d12449e52a85c714ff886b864829cc30db50.jpg) +Figure 7. Examples of posterior samples and standard-deviation maps for the brain images, both with zoomed regions.
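As a concrete illustration of the simulation procedure in Appendix A, the following is a minimal single-coil NumPy sketch. The 32x32 image size, the random choice of Cartesian columns, and the orthonormal FFT convention are illustrative assumptions, not the fastMRI preprocessing pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal((32, 32))   # toy ground-truth coil image x_c

# Fully sampled k-space: k_c = F x_c (orthonormal 2-D DFT).
k = np.fft.fft2(x, norm="ortho")

# Sampling mask P^T P: keep 8 of 32 k-space columns,
# mimicking Cartesian line sampling at 4x acceleration.
cols = rng.choice(32, size=8, replace=False)
mask = np.zeros((32, 32))
mask[:, cols] = 1.0

# Zero-filled image: y_c = F^H P^T P k_c.
y = np.fft.ifft2(mask * k, norm="ortho")

# Sanity check: with no mask, the round trip recovers x_c exactly
# (the noise eps_c is assumed negligible, as in Appendix A).
x_rec = np.fft.ifft2(k, norm="ortho")
assert np.allclose(x_rec.real, x)
```

In a real scanner only the masked k-space `mask * k` is ever measured; the unmasked round trip is available here only because the simulation starts from fully sampled data.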
\ No newline at end of file diff --git a/aconditionalnormalizingflowforacceleratedmulticoilmrimaging/images.zip b/aconditionalnormalizingflowforacceleratedmulticoilmrimaging/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..8615b94b73735cfc25f914a5fa645ad921ed32d3 --- /dev/null +++ b/aconditionalnormalizingflowforacceleratedmulticoilmrimaging/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:c2946d6bdd5ffa22a5424e269caf080886e830f55091cfdedc0d816d8754da3a +size 843827 diff --git a/aconditionalnormalizingflowforacceleratedmulticoilmrimaging/layout.json b/aconditionalnormalizingflowforacceleratedmulticoilmrimaging/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..a294b54687cd0fa9befee735a344fbeb63e192b2 --- /dev/null +++ b/aconditionalnormalizingflowforacceleratedmulticoilmrimaging/layout.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:f77b31825277d977c52ec1187508212dbfa9df90488193beff27b43430e5ecaf +size 576311 diff --git a/aconnectionbetweenonesteprlandcriticregularizationinreinforcementlearning/ee44bd65-4ebd-4d5a-9321-fa647ed48b24_content_list.json b/aconnectionbetweenonesteprlandcriticregularizationinreinforcementlearning/ee44bd65-4ebd-4d5a-9321-fa647ed48b24_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..e2a475490e50d2c41c3a9e514727e81e9b93465a --- /dev/null +++ b/aconnectionbetweenonesteprlandcriticregularizationinreinforcementlearning/ee44bd65-4ebd-4d5a-9321-fa647ed48b24_content_list.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:04e3135abb95610b4664e21b760cf26645fff3f79eacbf4ce234c22c0b833703 +size 178072 diff --git a/aconnectionbetweenonesteprlandcriticregularizationinreinforcementlearning/ee44bd65-4ebd-4d5a-9321-fa647ed48b24_model.json b/aconnectionbetweenonesteprlandcriticregularizationinreinforcementlearning/ee44bd65-4ebd-4d5a-9321-fa647ed48b24_model.json new file 
mode 100644 index 0000000000000000000000000000000000000000..0c957b9e792f7dd0731f77ebc55640b6db20fad0 --- /dev/null +++ b/aconnectionbetweenonesteprlandcriticregularizationinreinforcementlearning/ee44bd65-4ebd-4d5a-9321-fa647ed48b24_model.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:f29d0a4434013095ed9bb20dae3de3f240999baa168f0d4078f48edf5925e303 +size 213172 diff --git a/aconnectionbetweenonesteprlandcriticregularizationinreinforcementlearning/ee44bd65-4ebd-4d5a-9321-fa647ed48b24_origin.pdf b/aconnectionbetweenonesteprlandcriticregularizationinreinforcementlearning/ee44bd65-4ebd-4d5a-9321-fa647ed48b24_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..a9f3ec51ee369b07016faa06ada482b5672b8cc0 --- /dev/null +++ b/aconnectionbetweenonesteprlandcriticregularizationinreinforcementlearning/ee44bd65-4ebd-4d5a-9321-fa647ed48b24_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:fd7548c08c7ff1c0c4803613d90eadce4a2805988a236b08ca08d855b59916b5 +size 727361 diff --git a/aconnectionbetweenonesteprlandcriticregularizationinreinforcementlearning/full.md b/aconnectionbetweenonesteprlandcriticregularizationinreinforcementlearning/full.md new file mode 100644 index 0000000000000000000000000000000000000000..af3cde79491d773b6d50e9c9ab72aa8f274a0c49 --- /dev/null +++ b/aconnectionbetweenonesteprlandcriticregularizationinreinforcementlearning/full.md @@ -0,0 +1,857 @@ +# A Connection between One-Step RL and Critic Regularization in Reinforcement Learning + +Benjamin Eysenbach $^{1,2}$ Matthieu Geist $^{1}$ Sergey Levine $^{1,3}$ Ruslan Salakhutdinov $^{2}$ + +# Abstract + +As with any machine learning problem with limited data, effective offline RL algorithms require careful regularization to avoid overfitting. One class of methods, known as one-step RL, perform just one step of policy improvement. 
These methods, which include advantage-weighted regression and conditional behavioral cloning, are thus simple and stable, but can have limited asymptotic performance. A second class of methods, known as critic regularization, perform many steps of policy improvement with a regularized objective. These methods typically require more compute but have appealing lower-bound guarantees. In this paper, we draw a connection between these methods: applying a multi-step critic regularization method with a regularization coefficient of 1 yields the same policy as one-step RL. While our theoretical results require assumptions (e.g., deterministic dynamics), our experiments nevertheless show that our analysis makes accurate, testable predictions about practical offline RL methods (CQL and one-step RL) with commonly-used hyperparameters. + +# 1. Introduction + +Reinforcement learning (RL) algorithms tend to perform better when regularized, especially when given access to only limited data, and especially in batch (i.e., offline) settings where the agent is unable to collect new experience. While RL algorithms can be regularized using the same tools as in supervised learning (e.g., weight decay, dropout), we will use "regularization" to refer to those techniques unique to the RL setting. Such regularization methods include policy regularization (penalizing the policy for sampling out-of-distribution actions) and value regularization (penalizing the critic for making large predictions). Research on these sorts of regularization has grown significantly in recent years, yet theoretical work studying the tradeoffs between regularization methods remains limited (Vieillard et al., 2020). + +1Google Research 2Carnegie Mellon University 3UC Berkeley. Correspondence to: Benjamin Eysenbach . + +Proceedings of the $40^{th}$ International Conference on Machine Learning, Honolulu, Hawaii, USA. PMLR 202, 2023. Copyright 2023 by the author(s). + +![](images/6090f24d7216e1a17ddc2a28d272fb1089e1951a25e34a171a0065c3d2e52e6f.jpg) +Figure 1: Both one-step RL and critic regularization can interpolate between behavioral cloning (left) and un-regularized RL (right) by varying the regularization parameter. Endpoints of these regularization paths are the same. We prove that these methods also obtain the same policy for an intermediate degree of regularization. + +Many RL methods perform regularization and can be classified by whether they perform one or many steps of policy improvement. One-step RL methods (Brandfonbrener et al., 2021; Peng et al., 2019; Peters & Schaal, 2007; Peters et al., 2010) perform one step of policy iteration, updating the policy to choose actions that are best according to the Q-function of the behavioral policy. The policy is often regularized to not deviate far from the behavioral policy. In theory, policy iteration can take a large number of iterations ($\tilde{\mathcal{O}}(|\mathcal{S}||\mathcal{A}| / (1 - \gamma))$; Scherrer, 2013) to converge, so one-step RL (one step of policy iteration) fails to find the optimal policy on most tasks. Empirically, policy iteration often converges in a smaller number of iterations (Sutton & Barto, 2018, Sec. 4.3), and the policy after just a single iteration can sometimes achieve performance comparable to multi-step RL methods (Brandfonbrener et al., 2021). Critic regularization methods modify the training of the value function such that it predicts smaller returns for unseen actions (Kumar et al., 2020; Chebotar et al., 2021; Yu et al., 2021; Hatch et al., 2022; Nachum et al., 2019; An et al., 2021; Bai et al.; Buckman et al., 2021). Intuitively, such critic regularization causes the policy to avoid sampling unseen actions.
In this paper, we will use "critic regularization" to specifically refer to multi-step methods that use critic regularization; multi-step methods that do not utilize critic regularization (e.g., KL control (Ziebart, 2010), TD3+BC (Fujimoto & Gu, 2021)) are outside the scope of our analysis. + +These RL regularization methods appear distinct. Critic regularization typically involves solving a two-player game, whereby a policy predicts actions with high values while the critic decreases the values predicted for those actions. Prior work (Kumar et al., 2020) has argued that this complexity is worthwhile because the regularization effect is being propagated across time via Bellman backups: decreasing the value at one state will also decrease the value at states that lead there. + +In this paper, we show that a certain type of one-step RL is equivalent to a certain type of critic regularization, under some assumptions (see Fig. 1). The key idea is that, when using a certain TD loss, the regularized critic updates converge not to the true Q-values, but rather the Q-values multiplied by an importance weight. For the critic, these importance weights mean that the Q-values end up estimating the expected returns of the behavioral policy $(Q^{\beta})$ , as in many one-step methods (Peters et al., 2010; Peters & Schaal, 2007; Peng et al., 2019; Brandfonbrener et al., 2021), rather than the expected returns of the optimal policy $(Q^{\pi})$ . For the actor, these importance weights mean that the logarithm of the Q-values includes a term that looks like a KL divergence. This connection allows us to make precise how critic regularization methods implicitly regularize the policy. + +The key contribution of this paper is a construction showing that one-step RL produces the same policy as a multi-step critic regularization method, for a certain regularization coefficient.
We also discuss similar connections in settings with varying degrees of regularization, in goal-conditioned settings, and in RL settings with success classifiers. The main assumption behind these results is that the critic is updated using an update rule based on the cross entropy loss, rather than the MSE loss. Our result is potentially surprising because, algorithmically and mechanistically, one-step RL and critic regularization are very different. Nonetheless, our analysis may help explain why prior work has found that one-step RL and critic regularization methods can perform similarly on some (Brandfonbrener et al., 2021; Emmons et al., 2021) (but not all (Kostrikov et al., 2021)) problems. Our results hint that one-step RL methods may be a simpler approach to achieving the theoretical guarantees typically associated with critic regularization methods. Our analysis does not say that practical implementations (which violate our assumptions) will always behave the same in practice; they do not (Brandfonbrener et al., 2021, Sec. 7). Nonetheless, our experiments show that our analysis makes accurate testable predictions about practical methods. While our results do not say whether users should regularize the actor or critic in practice, they hint that one-step RL methods may be a simpler way of achieving the theoretical and empirical + +properties of critic regularization on RL tasks that require strong regularization. + +# 2. Related Work + +Regularization has been applied to RL in many different ways (Neu et al., 2017; Geist et al., 2019), and features prominently in offline RL methods (Lange et al., 2012; Levine et al., 2020). While RL algorithms can be regularized using the same techniques as in supervised learning (e.g., weight decay, dropout), our focus will be on regularization methods unique to the RL setting. Such RL-specific regularization methods can be roughly categorized into one-step RL methods and critic regularization methods. 
+ +One-step RL methods (Brandfonbrener et al., 2021; Gülçehre et al., 2020; Peters & Schaal, 2007; Peng et al., 2019; Peters et al., 2010; Wang et al., 2018) apply a single step of policy improvement to the behavioral policy. These methods first estimate the Q-values of the behavioral policy, either via regression or iterative Bellman updates. Then, these methods optimize the policy to maximize these Q-values minus an actor regularizer. Many goal-conditioned or task-conditioned imitation learning methods (Savinov et al., 2018; Ding et al., 2019; Sun et al., 2019; Ghosh et al., 2020; Paster et al., 2021; Yang et al., 2021; Srivastava et al., 2019; Kumar et al., 2019b; Chen et al., 2021; Lynch & Sermanet, 2021; Li et al., 2020; Eysenbach et al., 2020a) also fits into this mold (Eysenbach et al., 2022), yielding policies that maximize the Q-values of the behavioral policy while avoiding unseen actions. Note that non-conditional imitation learning methods do not perform policy improvement, and do not fit into this mold. One-step methods are typically simple to implement and computationally efficient + +Critic regularization methods instead modify the objective for the Q-function so that it predicts lower returns for unseen actions (Kumar et al., 2020; Chebotar et al., 2021; Yu et al., 2021; Hatch et al., 2022; Nachum et al., 2019; An et al., 2021; Bai et al.; Buckman et al., 2021). Critic regularization methods are typically more challenging to implement correctly and more computationally demanding (Kumar et al., 2020; Nachum et al., 2019; Bai et al.; An et al., 2021), but can lead to better results on some challenging problems (Kostrikov et al., 2021) Our analysis will show that one-step RL is equivalent to a certain type of critic regularization. + +Some regularization methods do not fit exactly into these two categories. 
Methods like KL control regularize both the actor and the reward function (Geist et al., 2019; Ziebart, 2010; Haarnoja et al., 2018; Abdolmaleki et al., 2018; Wu et al., 2019; Jaques et al., 2019; Rezaeifar et al., 2022). Other methods only regularize the policy used in the critic updates (Fujimoto et al., 2019; Kumar et al., 2019a). + +Our results are conceptually similar to prior work on regularization in the supervised learning setting. There, regularization methods like weight decay, spectral regularization, and early stopping all appear quite distinct, but are actually mathematically equivalent under some assumptions (Bakushinskii, 1967; Wahba, 1987; Fleming, 1990; Santos, 1996; Bauer et al., 2007). This result is surprising because the methods are so different: weight decay modifies the loss function, early stopping modifies the update procedure, and spectral normalization is a post-hoc correction. + +# 3. Preliminaries + +We start by defining the single-task RL problem, and then introduce prototypical examples of one-step RL and critic regularization. We then define an actor-critic algorithm for use in our analysis. + +# 3.1. Notation + +We assume an MDP with states $s$ , actions $a$ , initial state distribution $p_0(s_0)$ , dynamics $p(s' \mid s, a)$ , and reward function $r(s, a)$ . We assume episodes always have infinite length (i.e., there are no terminal states). Without loss of generality, we assume rewards are positive; adding a positive constant to all rewards can make them all positive without changing the optimal policy.
We will learn a Markovian policy $\pi(a \mid s)$ to maximize the expected discounted sum of rewards: + +$$ +\max_{\pi} \mathbb{E}_{\pi(\tau)}\left[ \sum_{t=0}^{\infty} \gamma^{t} r(s_{t}, a_{t}) \mid s_{0} \sim p_{0}(s_{0}) \right], +$$ + +where $\pi(\tau) = p(s_0)\prod_{t=0}^{\infty}\pi(a_t\mid s_t)p(s_{t+1}\mid s_t,a_t)$ is the probability of policy $\pi$ sampling an infinite-length trajectory $\tau = (s_0,a_0,\dots)$ . We define Q-values for policy $\pi(a\mid s)$ as + +$$ +Q^{\pi}(s, a) = \mathbb{E}_{\pi(\tau)}\left[ \sum_{t=0}^{\infty} \gamma^{t} r(s_{t}, a_{t}) \mid s_{0} = s, a_{0} = a \right]. +$$ + +Because the rewards are positive, these Q-values are also positive, $Q^{\pi}(s,a) > 0$ . Since we focus on the offline setting, we will consider two policies: $\beta (a\mid s)$ is the behavioral policy that collected the dataset, and $\pi (a\mid s)$ is the online policy output by the algorithm that attempts to maximize the rewards. We will use $p(s,a,s')$ to denote the empirical distribution of transitions in an offline dataset, and $p(s,a)$ and $p(s)$ to denote the corresponding marginal distributions. The behavioral policy is defined as $\beta (a\mid s) = p(a\mid s)$ . + +# 3.2. Examples of Regularization in RL + +While actor and critic regularization methods can be implemented in many ways, we introduce two prototypical examples below to make our discussion more concrete. + +Example of one-step RL: Brandfonbrener et al. (2021). One-step RL first estimates the Q-values of the behavioral policy $(Q^{\beta}(s,a))$ , and then optimizes the policy to maximize the Q-values minus an actor regularizer.
While the actor regularizer can take different forms and the Q-values can be learned via regression, we will use a reverse KL regularizer and a TD-style critic update so that the objective is similar to critic regularization: + +$$ +\max_{\pi} \mathbb{E}_{p(s)\pi(a|s)}\left[ Q^{\beta}(s, a) + \lambda \log \frac{\beta(a \mid s)}{\pi(a \mid s)} \right] \tag {1} +$$ + +where $Q^{\beta}(s,a) = \lim_{t\to \infty}Q_t(s,a)$ and + +$$ +Q_{t+1} \leftarrow \underset{Q}{\arg \min}\ \mathbb{E}_{p(s, a)}\left[ \left(Q(s, a) - y^{\beta, Q_{t}}(s, a)\right)^{2} \right] +$$ + +$$ +y^{\beta ,Q_{t}}(s,a)\triangleq r(s,a) + \gamma \mathbb{E}_{\substack{p(s^{\prime}|s,a)\\ \beta (a^{\prime}|s^{\prime})}}[Q_{t}(s^{\prime},a^{\prime})]. +$$ + +The scalar $\lambda$ is the regularization coefficient and $\beta(a \mid s)$ is an estimate of the behavioral policy, typically learned via behavioral cloning. Like most TD methods (Haarnoja et al., 2018; Mnih et al., 2013; Fujimoto et al., 2018), the TD targets $y$ are not considered learnable. In practice, most methods do not solve the critic to convergence at each step, instead taking just a few gradient steps before updating the TD targets. This one-step critic loss is different from the multi-step critic losses used in other RL methods (e.g., TD3, SVG(0)) because it uses the TD target $y^{\beta,Q}(s,a)$ (corresponding to a fixed policy) rather than $y^{\pi,Q}(s,a)$ (corresponding to a sequence of learned policies). One-step RL amounts to performing one step of policy iteration, rather than full policy optimization. While truncating the iterations of policy iteration can be suboptimal, it can also be interpreted as a form of early-stopping regularization. + +Example of critic regularization: Kumar et al. (2020). CQL (Kumar et al., 2020) modifies the standard Bellman loss to include an additional term that decreases the values predicted for unseen actions.
The actor objective is to maximize Q-values; some CQL implementations also regularize the actor loss (Hoffman et al., 2020; Kumar et al., 2020). The objectives can then be written as + +$$ +\max _ {\pi} \mathbb {E} _ {p (s) \pi (a | s)} [ Q ^ {\pi} (s, a) ] \tag {2} +$$ + +where $Q^{\pi}(s,a) = \lim_{t\to \infty}Q_t(s,a)$ and + +$$ +\begin{array}{l} Q _ {t + 1} = \underset {Q} {\arg \min } \mathbb {E} _ {p (s, a)} \left[ \left(Q (s, a) - y ^ {\pi , Q _ {t}} (s, a)\right) ^ {2} \right] \\ + \lambda \left(\mathbb {E} _ {p (s) \pi (a | s)} [ Q (s, a) ] - \mathbb {E} _ {p (s) \beta (a | s)} [ Q (s, a) ]\right). \\ \end{array} +$$ + +The second term decreases the Q-values for unseen actions (those sampled from $\pi(a \mid s)$ ) while the third term increases the values predicted for seen actions (those sampled from the behavioral policy $\beta (a\mid s)$ ). Unlike standard temporal difference methods, the CQL updates resemble a competitive game between the actor and the critic. In practice, this cyclic dependency can create unstable learning (Kumar et al., 2020; Hoffman et al., 2020). + +# 3.3. How are these methods connected? + +Prior work has observed that one-step methods and critic regularization methods perform similarly on many (Fujimoto & Gu, 2021; Emmons et al., 2021) (but not all (Kostrikov et al., 2021)) tasks. Despite the differences in objectives and implementations of these two methods (and, more broadly, the actor/critic regularization methods for which they are prototypes), are there deeper, unifying connections between the methods? + +In the next section, we introduce a different actor-critic method that will allow us to draw a connection between one-step RL and critic regularization. We experimentally validate this equivalence in Sec. 5.1. Despite its difference from practically-used methods, such as one-step RL and CQL, we will show that it makes accurate predictions about the behavior of these practical methods (Sec. 5.2 and 5.3). + +# 3.4. 
Classifier Actor Critic + +To support our analysis, we will introduce a new actor-critic algorithm. This algorithm is similar to prior work, but trains the critic using a cross-entropy loss instead of an MSE loss. We introduce this algorithm not because we expect it to perform better than existing actor-critic methods, but rather because it allows us to make precise a connection between actor and critic regularization. This method treats the value function like a classifier, so we will call it classifier actor critic. We will then introduce actor-regularized and critic-regularized versions of this method. The subsequent section (Sec. 4) will show that these two regularized methods learn the same policy. + +The key to our analysis will be to treat Q-values like probabilities, so we define the critic loss in terms of a cross-entropy loss, similar to prior work (Kalashnikov et al., 2018; Eysenbach et al., 2021). Recalling that Q-values are positive (Sec. 3.1), we transform the Q-values to have the correct range by using $\frac{Q}{Q + 1} \in [0,1)$ . We will minimize the cross-entropy loss applied to the transformed Q-values: + +$$ +\begin{array}{l} \mathbb {E} _ {p (s, a)} \left[ \mathcal {C E} \left(\frac {Q (s , a)}{Q (s , a) + 1}; \frac {y ^ {\pi , Q _ {t}} (s , a)}{y ^ {\pi , Q _ {t}} (s , a) + 1}\right) \right] \\ = - \mathbb {E} _ {p (s, a)} \left[ \frac {y ^ {\pi , Q _ {t}} (s , a)}{y ^ {\pi , Q _ {t}} (s , a) + 1} \log \frac {Q (s , a)}{Q (s , a) + 1} \right. \\ \left. + \frac {1}{y ^ {\pi , Q _ {t}} (s , a) + 1} \log \frac {1}{Q (s , a) + 1} \right] \\ \end{array} +$$ + +$$ +\begin{array}{l} \stackrel {\text {const.}} {=} - \mathbb {E} _ {p (s, a)} \left[ y ^ {\pi , Q _ {t}} (s, a) \log \frac {Q (s , a)}{Q (s , a) + 1} \right. \\ \underbrace {\left. 
+ \log \frac {1}{Q (s , a) + 1} \right] .} _ {\triangleq \mathcal {L} _ {\text {critic}} (Q, y ^ {\pi , Q _ {t}})} \tag {3} \\ \end{array} +$$ + +In the last line we scale both the positive and negative terms by $y^{\pi, Q_t}(s, a) + 1$ , a choice that does not change the optimal classifier but reduces notational clutter. When the TD target can be computed exactly, solving this optimization problem results in performing one SARSA update: $Q(s, a) \gets r(s, a) + \gamma Q(s', a')$ (see Lemma 4.1). Thus, by solving this optimization problem many times, each time using the previous Q-value to compute the TD targets, we will converge to the correct Q-values (see Lemma 4.1). The actor objective is to maximize the expected log of the Q-values: + +$$ +\max _ {\pi} \mathcal {L} _ {\text {actor}} (\pi) \triangleq \mathbb {E} _ {p (s) \pi (a | s)} [ \log (Q ^ {\pi} (s, a)) ] \tag {4} +$$ + +where $Q^{\pi}(s,a) = \lim_{t\to \infty}Q_{t}(s,a)$ and + +$$ +Q_{t + 1} = \operatorname *{arg min}_{Q}\mathcal{L}_{\text{critic}}(Q,y^{\pi ,Q_{t}}). +$$ + +While most actor-critic methods do not use the logarithm transformation, prior work on conditional behavioral cloning (e.g., (Savinov et al., 2018; Ding et al., 2019; Sun et al., 2019; Ghosh et al., 2020; Srivastava et al., 2019)) implicitly includes this transformation (Eysenbach et al., 2022). In the absence of additional regularization, the optimal policy $\pi(a|s) = \mathbb{1}(a = \arg \max_{a'} Q(s, a'))$ is the same as the optimal policy for the standard actor objective (without the logarithm). We next introduce a one-step version of this method, as well as a critic regularization variant that resembles CQL. While we will implicitly use a regularization coefficient of 1 below, Appendix B.1 discusses versions of classifier actor critic with varying degrees of regularization. + +One-step RL. 
To make classifier actor critic resemble one-step RL (Brandfonbrener et al., 2021), we make two changes: estimating the value of the behavioral policy and adding a regularization term to the actor objective. To estimate the value of the behavioral policy, we modify the critic loss to sample the next action $a^\prime$ from the behavioral policy (i.e., we use $y^{\beta ,Q_t}(s,a)$ rather than $y^{\pi ,Q_t}(s,a)$ ). We also regularize the policy by adding a relative entropy term to the actor loss, analogous to the reverse KL penalty used in one-step RL: + +$$ +\max _ {\pi} \mathbb {E} _ {p (s) \pi (a | s)} \left[ \log Q ^ {\beta} (s, a) + \log \beta (a \mid s) - \log \pi (a \mid s) \right] \tag {5} +$$ + +where $Q^{\beta}(s,a) = \lim_{t\to \infty}Q_{t}(s,a)$ and + +$$ +Q_{t + 1} = \operatorname *{arg min}_{Q}\mathcal{L}_{\text{critic}}(Q,y^{\beta ,Q_{t}}). +$$ + +In tabular settings, this critic objective estimates the Q-values for $\beta (a\mid s)$ (Lemma 4.1). + +Critic regularization. To emulate CQL, we modify the critic loss (Eq. 3) by adding a penalty term that decreases the values for unseen actions. Whereas CQL applies this penalty to the Q-values directly, we will apply it to the logarithm of the Q-values: + +$$ +\max _ {\pi} \mathbb {E} _ {p (s) \pi (a | s)} \left[ \log Q _ {r} ^ {\pi} (s, a) \right] \tag {6} +$$ + +where $Q_r^\pi(s, a) = \lim_{t \to \infty} Q_t(s, a)$ and $Q_{t+1}(s, a) = \arg \min_{Q} \mathcal{L}_{\text{critic}}^r(Q, y^{\pi, Q_t})$ : + +$$ +\begin{array}{l} \mathcal {L} _ {\text {critic}} ^ {r} (Q, y ^ {\pi , Q _ {t}}) \triangleq \mathcal {L} _ {\text {critic}} (Q, y ^ {\pi , Q _ {t}}) \\ + \lambda \Bigl(\mathop{\mathbb{E}}_{\substack{p(s)\\ \pi (a|s)}}[\log (Q(s,a) + 1)] - \mathop{\mathbb{E}}_{\substack{p(s)\\ \beta (a|s)}}[\log (Q(s,a) + 1)]\Bigr). \\ \end{array} +$$ + +# 4. 
A Connection between One-Step RL and Critic Regularization + +This section provides our main result, which is that actor and critic regularization yield the same policy under some settings. The key to proving this connection will be to analyze the Q-values learned by critic regularization. While we mainly focus on the single-task setting, Sec. 4.1 describes how similar results also apply to other settings, including goal-conditioned RL, example-based control, and settings with smaller degrees of regularization. All proofs are in Appendix A. + +To relate one-step RL to critic regularization, we start by analyzing the Q-values learned by both methods. We first show that the classifier critic converges to the correct Q-values: + +Lemma 4.1. Assume that states and actions are tabular (discrete and finite), that rewards are positive, and that TD targets can be computed exactly (without sampling). Incrementally update the critic by solving a sequence of optimization problems: + +$$ +Q _ {t + 1} \leftarrow \underset {Q} {\arg \min } \mathcal {L} _ {\text {critic}} (Q, y ^ {\pi , Q _ {t}}). +$$ + +This sequence of $Q$ -functions will converge to $Q^{\pi}$ : + +$$ +\lim _ {t \rightarrow \infty} Q _ {t} (s, a) = Q ^ {\pi} (s, a) +$$ + +for all states $s$ and actions $a$ . + +Because one-step RL trains the critic using $\mathcal{L}_{\mathrm{critic}}(Q, y^{\beta, Q})$ , it learns Q-values corresponding to $Q^{\beta}(s, a)$ . When regularization is added to the critic updates, it learns different Q-values. Perhaps surprisingly, this regularization means that our estimates for the value of policy $\pi(a \mid s)$ look like the value of the original behavioral policy: + +Lemma 4.2. Assume that states and actions are tabular (discrete and finite), that rewards are positive, and that TD targets can be computed exactly (without sampling). 
Incrementally update the critic by minimizing a sequence of regularized critic losses using policy $\pi$ and hyperparameter $\lambda = 1$ : + +$$ +Q _ {t + 1} \leftarrow \underset {Q} {\arg \min } \mathcal {L} _ {\text {critic}} ^ {r} (Q, y ^ {\pi , Q _ {t}}). +$$ + +In the limit, this sequence of $Q$ -functions will converge to the $Q$ -values for the behavioral policy $(\beta (a\mid s))$ , weighted by the ratio of the behavioral and online policies: + +$$ +\lim _ {t \rightarrow \infty} Q _ {t} (s, a) = \frac {Q ^ {\beta} (s , a) \beta (a \mid s)}{\pi (a \mid s)} +$$ + +for all states $s$ and actions $a$ . + +Proof sketch. The ratio $\frac{\beta(a|s)}{\pi(a|s)}$ above is an importance weight. Ordinarily, a TD backup for policy $\pi(a|s)$ would entail sampling an action $a \sim \pi(a|s)$ . However, this importance weight means that the TD backup is effectively performed by sampling an action $a \sim \beta(a|s)$ . Such a TD backup resembles the TD backup for $\beta(a|s)$ . The full proof is in Appendix A. + +Intuitively, this result says that critic regularization reweights the Q-values to assign higher values to in-distribution actions, where $\beta (a\mid s)$ is large. An unexpected part of this result is that the Q-values correspond to the behavioral policy. In other words, critic regularization added to a multi-step RL method (one using $y^{\pi ,Q_t}(s,a)$ ) yields the same critic as a one-step RL method (one using $y^{\beta ,Q_t}(s,a)$ ). Our main result is a direct corollary of this Lemma: + +Theorem 4.3. Let a behavioral policy $\beta(a \mid s)$ be given and let $Q^{\beta}(s, a)$ be the corresponding value function. Let $\pi(a \mid s)$ be an arbitrary policy (typically learned) with support constrained to $\beta(a \mid s)$ (i.e., $\pi(a \mid s) > 0 \Rightarrow \beta(a \mid s) > 0$ ). Let $Q_r^\pi(s, a)$ be the critic obtained by applying the regularized critic update (Eq. 6) to this policy with $\lambda = 1$ . 
Then critic regularization results in the same policy as one-step RL: + +$$ +\mathbb {E} _ {\pi (a \mid s)} \left[ \log Q _ {r} ^ {\pi} (s, a) \right] = \mathbb {E} _ {\pi (a \mid s)} \left[ \log Q ^ {\beta} (s, a) + \log \frac {\beta (a \mid s)}{\pi (a \mid s)} \right] +$$ + +for all states $s$ . + +Since both forms of regularization result in the same objective for the actor, they must produce the same policy in the end. While prior work has mentioned that critic regularization implicitly regularizes the policy (Yu et al., 2021), this result shows that under the assumptions stated above, the implicit regularization of critic regularization results in the exact same policy learning objective as one-step RL. This equivalence holds when $\lambda = 1$ , and not necessarily for other regularization coefficients. Appendix B.1 shows how a variant of this result that includes an additional regularization mechanism does apply to different regularization coefficients. + +![](images/d33ef69eaea7cabcaa847044f3dda73a7279b1ddd5b45baaadf8e4a93d179f3d.jpg) +MDPs where regularization decreases returns. + +![](images/63a136923acaed23c59b3eb6795b3ef8a92cb5f60030f2e5fe534703f92ec00a.jpg) +Figure 2: Actor and critic regularization produce identical policies. Across three tabular settings, we plot the action probabilities $\pi(a \mid s)$ for the policies produced by one-step RL and critic-regularized classifier actor-critic ( $R^2 \geq 0.999$ ). We also plot the action probabilities for a policy learned without regularization to confirm that the equivalence between one-step RL and critic regularization is not a coincidence. + +![](images/7ca5d595e349dfb5ea5dc2b88a7aa847ad38cb2e5fe387066242d48f46ce2ad6.jpg) +MDP where regularization increases returns. + +This connection between one-step RL and critic regularization concerns their objective functions, not the procedures used to optimize those objective functions. 
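Lemma 4.2 can be checked numerically. In the tabular case, minimizing the regularized critic loss with $\lambda = 1$ separately for each state-action pair gives the update $Q \leftarrow \beta(a \mid s)\, y^{\pi, Q_t}(s, a) / \pi(a \mid s)$ (a closed form obtained by setting the derivative of the per-pair loss to zero). The sketch below iterates this update on a hypothetical one-state MDP with made-up rewards and policies, and compares the limit against the predicted fixed point $Q^{\beta} \beta / \pi$:

```python
gamma = 0.9
rewards = [1.0, 2.0]    # positive rewards, one self-looping state, two actions
beta = [0.5, 0.5]       # behavioral policy beta(a | s)
pi = [0.8, 0.2]         # learned policy pi(a | s), support inside beta's

# Reference: behavioral Q-values. In this one-state MDP,
# V_beta = E_beta[r] / (1 - gamma) and Q_beta(a) = r(a) + gamma * V_beta.
v_beta = sum(b * r for b, r in zip(beta, rewards)) / (1 - gamma)
q_beta = [r + gamma * v_beta for r in rewards]

# Regularized classifier-critic iteration with lambda = 1: per state-action,
# the loss is minimized by Q <- beta * y / pi, where the TD target is
# y = r + gamma * E_{pi(a'|s')}[Q(s', a')].
Q = [1.0, 1.0]
for _ in range(2000):
    v_pi = sum(p * q for p, q in zip(pi, Q))
    Q = [b * (r + gamma * v_pi) / p for b, p, r in zip(beta, pi, rewards)]

# Lemma 4.2 predicts convergence to Q_beta(a) * beta(a|s) / pi(a|s).
predicted = [qb * b / p for qb, b, p in zip(q_beta, beta, pi)]
```

At the fixed point, $\mathbb{E}_{\pi}[Q]$ equals $V^{\beta}$, which is exactly the importance-weight cancellation used in the proof sketch.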
Indeed, because practical offline RL algorithms sometimes use different optimization procedures (e.g., TD vs. MC estimates of $Q^{\beta}(s,a)$ ), they will incur errors in estimating $Q^{\beta}(s,a)$ , violating Theorem 4.3's assumption that these Q-values are estimated exactly. + +Limitations. Our theoretical analysis makes assumptions that may not always hold in practice. For example, our results use a critic loss based on the cross-entropy loss, while most (but not all (Kalashnikov et al., 2018; Eysenbach et al., 2020b)) practical methods use the MSE. Our analysis assumes that critic regularization arrives at an equilibrium, and ignores errors introduced by function approximation and sampling. Nonetheless, our theoretical results will make accurate predictions about prior offline RL methods. + +# 4.1. Extensions of the Analysis + +We extend this analysis in three ways. First, we show that a similar connection can be established for lesser degrees of regularization ( $\lambda < 1$ ) (see Appendix B.1). Second, we show that a similar connection holds for RL problems defined via success examples (Pinto & Gupta, 2016; Tung et al., 2018; Kalashnikov et al., 2021; Singh et al., 2019; Zolna et al., 2020; Calandra et al., 2017; Eysenbach et al., 2021). These results use an existing actor-critic method, rather than classifier actor critic (see Appendix C). Third, we extend our analysis to multi-task settings by looking at goal-conditioned RL problems. Taken together, these extensions show that the connection between actor and critic regularization extends to other commonly-studied problem settings. + +# 5. Numerical Simulations + +Our numerical simulations study whether the theoretical connection between actor and critic regularization holds empirically. The first experiments (Sec. 5.1) will use classifier actor-critic, and we will expect the equivalence to hold exactly in this setting. 
We then study whether this connection still holds for practical prior methods (one-step RL and CQL), which violate our assumptions. We study these commonly-used methods in both tabular settings (Sec. 5.2) and on a benchmark offline RL task with continuous states and actions (Sec. 5.3). We do not expect these methods to always be the same (see, e.g., Kostrikov et al. (2021, Table 1)), and we will focus our experiments on critic regularization with moderate regularization coefficients. See Appendix F for details and hyperparameters for the experiments. Code for the tabular experiments is available online. $^{2}$ + +# 5.1. Exact Equivalence for Classifier Actor Critic + +Our first experiment aims to validate our theoretical result under the required assumptions: when using classifier actor-critic as the RL algorithm, and when using a tabular environment. We use a $5 \times 5$ deterministic gridworld with 5 actions (up/down/left/right/nothing). We describe the reward function and other details in Appendix F. To ensure that critic regularization converges to a fixed point and to avoid oscillatory learning dynamics, we update the policy using an exponential moving average. We also include (unregularized) classifier actor-critic to confirm that regularization is important in some settings. + +We compare these three methods in three environments. The first setting (Fig. 2 (left)) checks our theory that one-step RL and critic regularization should obtain the same policy. The second setting (Fig. 2 (center)) shows that one-step RL and critic regularization learn the same (suboptimal) policy in settings where using the Q-values for the behavioral policy leads to a suboptimal policy. The final setting is designed so that regularization increases the expected returns. 
The dataset is a single trajectory from the initial state to the goal. With such limited data, unregularized classifier actor critic overestimates the Q-values at unseen actions, learning a policy that mistakenly takes these actions. In contrast, the regularized approaches learn to imitate the expert trajectory. Fig. 2 (right) shows that both forms of regularization produce the optimal policy. In summary, these tabular experiments validate our theoretical results, including in settings where regularization is useful and harmful. These experiments also demonstrate that the actor-critic method introduced in Sec. 3.4 does converge (Lemma 4.1). + +![](images/01592766c3f8e0cc796d02f01e8095f88cb6fe38bc0981bb52918be758752189.jpg) +(a) reward function + +![](images/609bac80efb8001d2f40d19849a11b7878d23092958cd7ee8d3783c6a161c40c.jpg) +(b) Q-learning + +![](images/6184f71a5b6bd241fbb5640e1428f3ce38a6c9cadcc5f86775258b06bdf590d8.jpg) +(c) One-step RL + +![](images/6edef3509f63f9c29e3b7bcd83186f81796ff1f64bcef8dd56b40f4c8575375a.jpg) +(d) $\mathrm{CQL}(\lambda = 10)$ + +![](images/055abe2e032395ef501a47b1a01976f25d6c0e5e1aaebabf88c736dd315dbc2b.jpg) +(e) CQL $(\lambda = 0.1)$ + +Figure 3: CQL can behave like one-step RL. We design a gridworld $(a)$ so that one-step RL $(c)$ learns a suboptimal policy. For the three cells highlighted in blue, the optimal policy $(b)$ navigates towards the high-reward state (green) while the one-step RL policy $(c)$ navigates away from the high-reward state. $(d)$ CQL with a large regularization coefficient exhibits the same suboptimal behavior as one-step RL, taking actions that lead away from the high-reward states. $(e)$ CQL with a small regularization coefficient behaves like Q-learning. For clarity, we only show the argmax action in each state; we omit the arrow when the argmax action is "do nothing". + +# 5.2. 
Predictions about Prior Methods: Tabular Setting + +Based on our theoretical analysis, we predict that practical implementations of one-step RL and critic regularization will exhibit similar behavior, for a certain critic regularization coefficient. This section studies the tabular setting, and the following section will use a continuous control benchmark. For critic regularization, we use CQL (Kumar et al., 2020) together with soft value iteration; following Brandfonbrener et al. (2021), we implement one-step RL (reverse KL) using Q-learning. + +We designed a deterministic gridworld so one-step RL would fail to learn the optimal policy (see Fig. 3 (left)). If CQL interpolates between the behavioral policy (random) and the optimal policy, then the argmax action would always be the same as the action for $\pi^{*}$ . Based on our analysis, we make a different prediction: that CQL will learn a policy similar to the one-step RL policy. We show results in Fig. 3, just showing the argmax action for visual clarity. The CQL policy takes actions away from both the high-reward state and the low-reward state, similar to the one-step RL policy but different from both the behavioral policy and the optimal policy. This experiment suggests that CQL can exhibit behavior similar to one-step RL. Of course, this effect is mediated by the regularization strength: a larger regularization coefficient would cause CQL to learn a random policy, and a coefficient of 0 would make CQL identical to Q-learning. We extend Fig. 3 to include classifier actor critic and regularized variants in Appendix Fig. 9. + +![](images/76f6bfb85751c41819600f85f6d6443468b6ce5c8603c9f44ca5726e96f29b26.jpg) +Figure 4: CQL and one-step RL take similar actions on most MDPs that resemble Fig. 3. + +How often does one-step RL approximate CQL? To show that the results in Fig. 3 are not cherry-picked, we repeated this experiment using 100 MDPs that are structurally similar to that in Fig. 
3, but where the locations of the high-reward and low-reward states are randomized. In each randomly generated MDP, we determine whether CQL exhibits behavior similar to one-step RL by looking at the states where CQL takes actions that differ from the reward-maximizing actions (as determined by running Q-learning with unlimited data). Since there are five total actions, a random policy would have a similarity score of $20\%$ . As shown in Fig. 4, the similarity score is significantly higher than chance for the vast majority of MDPs, showing that one-step RL and $\mathrm{CQL}(\lambda = 10)$ produce similar policies on most such gridworlds. + +When does one-step RL approximate CQL? Because one-step RL is highly regularized (policy iteration is truncated after just one step), one might imagine that it would be most similar to CQL with a very large regularization coefficient. To study this, we use the same environment (Fig. 3) and measure the fraction of states where one-step RL and CQL choose the same argmax action. As shown in Fig. 5, one-step RL is most similar to CQL with moderate regularization $(\lambda = 10)$ , and is less similar to CQL with very strong regularization. + +![](images/04adc8be3cf70f1bd882b77dd0efb37765d818bf25c23ef693d4f988a95c2362.jpg) +Figure 5: One-step RL is most similar to CQL with a moderate regularization coefficient. + +# 5.3. Predictions about Prior Methods: Continuous Control Setting + +Our final set of experiments studies whether our theoretical results can make accurate testable predictions about practically-used regularization methods in a setting where they are commonly used: offline RL benchmarks with continuous states and actions. For these experiments, we will use well-tuned implementations of CQL and one-step RL from Hoffman et al. (2020), using the default hyperparameters without modification. 
While our theoretical results do not apply directly to these practical methods, which violate the assumptions in our analysis, they nonetheless raise the question of whether these methods perform similarly in practice. We made one change to the one-step RL implementation to make the comparison more fair: because CQL learns two Q functions and takes the minimum (a trick introduced in Fujimoto et al. (2018)), we applied this same parametrization to the one-step RL implementation. Since offline RL methods can perform differently on datasets of varying quality (Wang et al., 2020; Fujimoto & Gu, 2021; Paine et al., 2020; Wang et al., 2021; Fujimoto et al., 2019), we will repeat our experiments on four datasets from the D4RL benchmark (Fu et al., 2020). + +Lower bounds on Q-values. One oft-cited benefit of critic regularization is that it has guarantees about value estimation (Kumar et al., 2020): under appropriate assumptions, the learned value function will underestimate the discounted expected returns of the policy. Because our analysis shows a connection between one-step RL and critic regularization, it raises the question of whether one-step RL methods have similar value-estimation properties. Taken at face value, this hypothesis seems obvious: the behavioral critic estimates the value of the behavioral policy, so it should underestimate the value of any policy that is better than the behavioral policy. Despite this, the lower bound property of methods like one-step RL is rarely discussed, suggesting that it has yet to be widely appreciated. + +![](images/2a05c1f9ebd5731e08cc9c6e9e9796187c58da361e6980d180d7b8ee631d9257.jpg) +Figure 6: Q-value under/over-estimation. (Top) Experiments on benchmark datasets of varying quality show that one-step RL underestimates the Q-values. (Bottom) Despite the theoretical guarantees about critic regularization (CQL) yielding underestimates, in practice we observe that the values learned via critic regularization can sometimes overestimate the actual returns. We plot the mean and standard deviation across five random seeds. Note that the Q-values are equivalent to the value function $V^{\pi}(s)$ . + +Fig. 6 shows both the predicted and actual (discounted) returns throughout the course of training. The results for one-step RL confirm our theoretical prediction on $4/4$ datasets: the Q-values from one-step RL underestimate the actual returns. In contrast, we observe that critic regularization overestimates the true returns on $2/4$ environments, perhaps because the regularization coefficients used to achieve good returns in practice are too weak to guarantee the lower bound property, and perhaps because the theoretical guarantees only hold at convergence. In total, these experiments confirm our theoretical predictions that one-step RL will result in Q-values that are underestimates, while also questioning the claim that critic regularization methods are always preferable for ensuring underestimation. + +Critic regularization causes actor regularization. Our analysis in Sec. 4 not only suggests that one-step RL methods might inherit properties of critic regularization (as studied in the previous section), but also suggests that critic regularization methods may behave like one-step methods. In particular, while critic regularization methods such as CQL do not explicitly regularize their actor, we hypothesize that they implicitly regularize the actor (Lemma 4.2), similar to how one-step RL methods explicitly regularize the actor. + +We measure the MSE between the action in the dataset and the action predicted by the learned policy. Fig. 7 shows the results. Some CQL implementations, including ours, "warm-start" the actor by applying a behavioral cloning loss for 50,000 iterations; we omit these initial gradient steps from our plots so that any effect is caused solely by the critic regularization. 
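This implicit-regularization metric is simply the mean squared error between dataset actions and the policy's actions at the same states; a minimal sketch, with made-up arrays standing in for the real continuous-action data:

```python
# Hypothetical arrays standing in for dataset actions and the actions
# predicted by the learned policy at the same states (2-D action space).
dataset_actions = [[0.1, -0.2], [0.4, 0.0], [-0.3, 0.5]]
policy_actions = [[0.0, -0.1], [0.5, 0.1], [-0.2, 0.4]]

def action_mse(pred, data):
    """MSE between predicted and dataset actions, averaged over action
    dimensions and then over the dataset."""
    per_step = [sum((p - d) ** 2 for p, d in zip(pa, da)) / len(da)
                for pa, da in zip(pred, data)]
    return sum(per_step) / len(per_step)

mse = action_mse(policy_actions, dataset_actions)
```

A decreasing value of this quantity over training is the signature of actor regularization, whether imposed explicitly (one-step RL) or implicitly (CQL).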
On $4/4$ datasets, we observe that the MSE between the CQL policy's actions and the actions in the datasets decreases throughout training. Perhaps the one exception is the medium-replay dataset, where the MSE eventually starts to increase after 5e5 gradient steps. While directly regularizing the actor leads to MSE errors that are $\sim 3\times$ smaller, this plot nevertheless provides evidence that critic regularization indirectly regularizes the actor. + +![](images/76fdf6d0c2f4ad5ae827cbb02b9fcb08625bd8365a0460ed404ca1dac0d06380.jpg) +Figure 7: Critic regularization causes actor regularization. Performing critic regularization via CQL implicitly results in actor regularization, similar to one-step RL: the MSE between the predicted actions and the dataset actions decreases. We plot the mean and standard deviation across five random seeds. + +# 6. Conclusion + +In this paper, we drew a connection between two seemingly distinct RL regularization methods: one-step RL and critic regularization. While our analysis made assumptions that are typically violated in practice, it nonetheless made accurate, testable predictions about practical methods with commonly-used hyperparameters: critic regularization methods can behave like one-step methods, and vice versa. + +Acknowledgements. We thank Aviral Kumar, George Tucker, David Brandfonbrener, and Scott Fujimoto for discussions and feedback on drafts of the paper. We thank Young Geng and Ilya Kostrikov for sharing code. We thank Chongyi Zheng for spotting a bug that prompted this project. This material is supported by the Fannie and John Hertz Foundation, NSF GRFP (DGE1745016), ONR N000142312368, DARPA/AFRL FA87502321015, and the Office of Naval Research. + +# References + +Abdolmaleki, A., Springenberg, J. T., Tassa, Y., Munos, R., Heess, N., and Riedmiller, M. Maximum a posteriori policy optimisation. In International Conference on Learning Representations, 2018. +Abramovich, S. and Persson, L.-E. 
Some new estimates of the 'Jensen gap'. Journal of Inequalities and Applications, 2016(1):1-9, 2016. +Agarwal, A., Jiang, N., Kakade, S. M., and Sun, W. Reinforcement learning: Theory and algorithms. CS Dept., UW Seattle, Seattle, WA, USA, Tech. Rep, 2019. + +An, G., Moon, S., Kim, J.-H., and Song, H. O. Uncertainty-based offline reinforcement learning with diversified Q-ensemble. Advances in neural information processing systems, 34:7436-7447, 2021. +Bai, C., Wang, L., Yang, Z., Deng, Z.-H., Garg, A., Liu, P., and Wang, Z. Pessimistic bootstrapping for uncertainty-driven offline reinforcement learning. In International Conference on Learning Representations. +Bakushinskii, A. B. A general method of constructing regularizing algorithms for a linear incorrect equation in hilbert space. Zhurnal Vychislitel'noi Matematiki i Matematicheskoi Fiziki, 7(3):672-677, 1967. +Bauer, F., Pereverzev, S., and Rosasco, L. On regularization algorithms in learning theory. Journal of complexity, 23 (1):52-72, 2007. +Brandfonbrener, D., Whitney, W., Ranganath, R., and Bruna, J. Offline RL without off-policy evaluation. Advances in Neural Information Processing Systems, 34:4933-4946, 2021. +Buckman, J., Gelada, C., and Bellemare, M. G. The importance of pessimism in fixed-dataset policy optimization. In International Conference on Learning Representations, 2021. +Calandra, R., Owens, A., Upadhyaya, M., Yuan, W., Lin, J., Adelson, E. H., and Levine, S. The feeling of success: Does touch sensing help predict grasp outcomes? In Conference on Robot Learning, pp. 314-323. PMLR, 2017. +Chebotar, Y., Hausman, K., Lu, Y., Xiao, T., Kalashnikov, D., Varley, J., Irpan, A., Eysenbach, B., Julian, R. C., Finn, C., et al. Actionable models: Unsupervised offline reinforcement learning of robotic skills. In International Conference on Machine Learning, pp. 1518-1528. PMLR, 2021. +Chen, L., Lu, K., Rajeswaran, A., Lee, K., Grover, A., Laskin, M., Abbeel, P., Srinivas, A., and Mordatch, I. 
Decision transformer: Reinforcement learning via sequence modeling. Advances in neural information processing systems, 34:15084-15097, 2021. +Ding, Y., Florensa, C., Abbeel, P., and Phielipp, M. Goal-conditioned imitation learning. Advances in neural information processing systems, 32, 2019. +Emmons, S., Eysenbach, B., Kostrikov, I., and Levine, S. Rvs: What is essential for offline rl via supervised learning? In International Conference on Learning Representations, 2021. + +Eysenbach, B., Geng, X., Levine, S., and Salakhutdinov, R. Rewriting history with inverse rl: Hindsight inference for policy improvement. In Advances in Neural Information Processing Systems, 2020a. +Eysenbach, B., Salakhutdinov, R., and Levine, S. C-learning: Learning to achieve goals via recursive classification. In International Conference on Learning Representations, 2020b. +Eysenbach, B., Levine, S., and Salakhutdinov, R. Replacing rewards with examples: Example-based policy search via recursive classification. Advances in Neural Information Processing Systems, 34, 2021. +Eysenbach, B., Udatha, S., Salakhutdinov, R. R., and Levine, S. Imitating past successes can be very suboptimal. Advances in Neural Information Processing Systems, 35: 6047-6059, 2022. +Finn, C., Levine, S., and Abbeel, P. Guided cost learning: Deep inverse optimal control via policy optimization. In International conference on machine learning, pp. 49-58. PMLR, 2016. +Fleming, H. E. Equivalence of regularization and truncated iteration in the solution of ill-posed image reconstruction problems. Linear Algebra and its applications, 130:133-150, 1990. +Fu, J., Singh, A., Ghosh, D., Yang, L., and Levine, S. Variational inverse control with events: A general framework for data-driven reward definition. Advances in neural information processing systems, 31, 2018. +Fu, J., Kumar, A., Nachum, O., Tucker, G., and Levine, S. D4rl: Datasets for deep data-driven reinforcement learning. arXiv preprint arXiv:2004.07219, 2020. 
Fujimoto, S. and Gu, S. S. A minimalist approach to offline reinforcement learning. Advances in Neural Information Processing Systems, 34:20132-20145, 2021.

Fujimoto, S., Hoof, H., and Meger, D. Addressing function approximation error in actor-critic methods. In International Conference on Machine Learning, pp. 1587-1596. PMLR, 2018.

Fujimoto, S., Meger, D., and Precup, D. Off-policy deep reinforcement learning without exploration. In International Conference on Machine Learning, pp. 2052-2062. PMLR, 2019.

Gao, X., Sitharam, M., and Roitberg, A. E. Bounds on the Jensen gap, and implications for mean-concentrated distributions. arXiv preprint arXiv:1712.05267, 2017.

Geist, M., Scherrer, B., and Pietquin, O. A theory of regularized Markov decision processes. In International Conference on Machine Learning, pp. 2160-2169. PMLR, 2019.

Ghosh, D., Gupta, A., Reddy, A., Fu, J., Devin, C. M., Eysenbach, B., and Levine, S. Learning to reach goals via iterated supervised learning. In International Conference on Learning Representations, 2020.

Gülçehre, C., Wang, Z., Novikov, A., Le Paine, T., Colmenarejo, S. G., Zolna, K., Agarwal, R., Merel, J., Mankowitz, D. J., Paduraru, C., et al. RL Unplugged: Benchmarks for offline reinforcement learning. 2020.

Haarnoja, T., Zhou, A., Abbeel, P., and Levine, S. Soft actor-critic: Off-policy maximum entropy deep reinforcement learning with a stochastic actor. In International Conference on Machine Learning, pp. 1861-1870. PMLR, 2018.

Hatch, K., Yu, T., Rafailov, R., and Finn, C. Example-based offline reinforcement learning without rewards. Proceedings of Machine Learning Research, 144:1-17, 2022.

Hazan, E., Kakade, S., Singh, K., and Van Soest, A. Provably efficient maximum entropy exploration. In International Conference on Machine Learning, pp. 2681-2691. PMLR, 2019.
Hoffman, M., Shahriari, B., Aslanides, J., Barth-Maron, G., Behbahani, F., Norman, T., Abdolmaleki, A., Cassirer, A., Yang, F., Baumli, K., Henderson, S., Novikov, A., Colmenarejo, S. G., Cabi, S., Gulcehre, C., Paine, T. L., Cowie, A., Wang, Z., Piot, B., and de Freitas, N. Acme: A research framework for distributed reinforcement learning. arXiv preprint arXiv:2006.00979, 2020.

Huntley, H. E. Dimensional Analysis. Dover Publications, 1967.

Jaques, N., Ghandeharioun, A., Shen, J. H., Ferguson, C., Lapedriza, A., Jones, N., Gu, S., and Picard, R. Way off-policy batch deep reinforcement learning of implicit human preferences in dialog. arXiv preprint arXiv:1907.00456, 2019.

Kalashnikov, D., Irpan, A., Pastor, P., Ibarz, J., Herzog, A., Jang, E., Quillen, D., Holly, E., Kalakrishnan, M., Vanhoucke, V., et al. Scalable deep reinforcement learning for vision-based robotic manipulation. In Conference on Robot Learning, pp. 651-673. PMLR, 2018.

Kalashnikov, D., Varley, J., Chebotar, Y., Swanson, B., Jonschkowski, R., Finn, C., Levine, S., and Hausman, K. Scaling up multi-task robotic reinforcement learning. In 5th Annual Conference on Robot Learning, 2021.

Kostrikov, I., Nair, A., and Levine, S. Offline reinforcement learning with implicit Q-learning. In International Conference on Learning Representations, 2021.

Kumar, A., Fu, J., Soh, M., Tucker, G., and Levine, S. Stabilizing off-policy Q-learning via bootstrapping error reduction. Advances in Neural Information Processing Systems, 32, 2019a.

Kumar, A., Peng, X. B., and Levine, S. Reward-conditioned policies. arXiv preprint arXiv:1912.13465, 2019b.

Kumar, A., Zhou, A., Tucker, G., and Levine, S. Conservative Q-learning for offline reinforcement learning. Advances in Neural Information Processing Systems, 33:1179-1191, 2020.

Lange, S., Gabel, T., and Riedmiller, M. Batch reinforcement learning. In Reinforcement Learning, pp. 45-73. Springer, 2012.

Levine, S., Kumar, A., Tucker, G., and Fu, J.
Offline reinforcement learning: Tutorial, review, and perspectives on open problems. arXiv preprint arXiv:2005.01643, 2020.

Li, A., Pinto, L., and Abbeel, P. Generalized hindsight for reinforcement learning. Advances in Neural Information Processing Systems, 33:7754-7767, 2020.

Lu, C., Ball, P., Parker-Holder, J., Osborne, M., and Roberts, S. J. Revisiting design choices in offline model-based reinforcement learning. In International Conference on Learning Representations, 2021.

Lynch, C. and Sermanet, P. Language conditioned imitation learning over unstructured data. Proceedings of Robotics: Science and Systems, 2021.

Lyu, J., Ma, X., Yan, J., and Li, X. Efficient continuous control with double actors and regularized critics. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 36, pp. 7655-7663, 2022.

Mnih, V., Kavukcuoglu, K., Silver, D., Graves, A., Antonoglou, I., Wierstra, D., and Riedmiller, M. A. Playing Atari with deep reinforcement learning. arXiv preprint arXiv:1312.5602, 2013.

Nachum, O., Dai, B., Kostrikov, I., Chow, Y., Li, L., and Schuurmans, D. AlgaeDICE: Policy gradient from arbitrary experience. arXiv preprint arXiv:1912.02074, 2019.

Nash, J. Non-cooperative games. Annals of Mathematics, pp. 286-295, 1951.

Neu, G., Jonsson, A., and Gomez, V. A unified view of entropy-regularized Markov decision processes. arXiv preprint arXiv:1705.07798, 2017.

Paine, T. L., Paduraru, C., Michi, A., Gulcehre, C., Zolna, K., Novikov, A., Wang, Z., and de Freitas, N. Hyperparameter selection for offline reinforcement learning. arXiv preprint arXiv:2007.09055, 2020.

Paster, K., McIlraith, S. A., and Ba, J. Planning from pixels using inverse dynamics models. In International Conference on Learning Representations, 2021.

Peng, X. B., Kumar, A., Zhang, G., and Levine, S. Advantage-weighted regression: Simple and scalable off-policy reinforcement learning. arXiv preprint arXiv:1910.00177, 2019.

Peters, J. and Schaal, S.
Reinforcement learning by reward-weighted regression for operational space control. In Proceedings of the 24th International Conference on Machine Learning, pp. 745-750, 2007.

Peters, J., Mulling, K., and Altun, Y. Relative entropy policy search. In Twenty-Fourth AAAI Conference on Artificial Intelligence, 2010.

Pinto, L. and Gupta, A. Supersizing self-supervision: Learning to grasp from 50k tries and 700 robot hours. In 2016 IEEE International Conference on Robotics and Automation (ICRA), pp. 3406-3413. IEEE, 2016.

Rezaeifar, S., Dadashi, R., Vieillard, N., Hussenot, L., Bachem, O., Pietquin, O., and Geist, M. Offline reinforcement learning as anti-exploration. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 36, pp. 8106-8114, 2022.

Santos, R. J. Equivalence of regularization and truncated iteration for general ill-posed problems. Linear Algebra and Its Applications, 236:25-33, 1996.

Savinov, N., Dosovitskiy, A., and Koltun, V. Semiparametric topological memory for navigation. In International Conference on Learning Representations, 2018.

Scherrer, B. Improved and generalized upper bounds on the complexity of policy iteration. Advances in Neural Information Processing Systems, 26, 2013.

Siegel, N., Springenberg, J. T., Berkenkamp, F., Abdolmaleki, A., Neunert, M., Lampe, T., Hafner, R., Heess, N., and Riedmiller, M. Keep doing what worked: Behavior modelling priors for offline reinforcement learning. In International Conference on Learning Representations, 2020.

Singh, A., Yang, L., Hartikainen, K., Finn, C., and Levine, S. End-to-end robotic reinforcement learning without reward engineering. arXiv preprint arXiv:1904.07854, 2019.

Srivastava, R. K., Shyam, P., Mutz, F., Jaskowski, W., and Schmidhuber, J. Training agents using upside-down reinforcement learning. arXiv preprint arXiv:1912.02877, 2019.

Sun, H., Li, Z., Liu, X., Zhou, B., and Lin, D. Policy continuation with hindsight inverse dynamics.
Advances in Neural Information Processing Systems, 32, 2019.

Sutton, R. S. and Barto, A. G. Reinforcement Learning: An Introduction. MIT Press, 2018.

Tung, H.-Y., Harley, A. W., Huang, L.-K., and Fragkiadaki, K. Reward learning from narrated demonstrations. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 7004-7013, 2018.

Vieillard, N., Kozuno, T., Scherrer, B., Pietquin, O., Munos, R., and Geist, M. Leverage the average: An analysis of KL regularization in reinforcement learning. Advances in Neural Information Processing Systems, 33:12163-12174, 2020.

Villaflor, A., Dolan, J., and Schneider, J. Fine-tuning offline reinforcement learning with model-based policy optimization. Offline Reinforcement Learning Workshop at Neural Information Processing Systems, 2020.

Wahba, G. Three topics in ill-posed problems. In Inverse and Ill-Posed Problems, pp. 37-51. Elsevier, 1987.

Wang, Q., Xiong, J., Han, L., Liu, H., Zhang, T., et al. Exponentially weighted imitation learning for batched historical data. Advances in Neural Information Processing Systems, 31, 2018.

Wang, R., Wu, Y., Salakhutdinov, R., and Kakade, S. Instabilities of offline RL with pre-trained neural representation. In International Conference on Machine Learning, pp. 10948-10960. PMLR, 2021.

Wang, Z., Novikov, A., Zolna, K., Merel, J. S., Springenberg, J. T., Reed, S. E., Shahriari, B., Siegel, N., Gulcehre, C., Heess, N., et al. Critic regularized regression. Advances in Neural Information Processing Systems, 33:7768-7778, 2020.

Wen, J., Kumar, S., Gummadi, R., and Schuurmans, D. Characterizing the gap between actor-critic and policy gradient. In International Conference on Machine Learning, pp. 11101-11111. PMLR, 2021.

Wu, Y., Tucker, G., and Nachum, O. Behavior regularized offline reinforcement learning. arXiv preprint arXiv:1911.11361, 2019.

Yang, R., Fang, M., Han, L., Du, Y., Luo, F., and Li, X. MHER: Model-based hindsight experience replay.
arXiv preprint arXiv:2107.00306, 2021.

Yu, T., Kumar, A., Rafailov, R., Rajeswaran, A., Levine, S., and Finn, C. COMBO: Conservative offline model-based policy optimization. Advances in Neural Information Processing Systems, 34:28954-28967, 2021.

Zhou, W., Bajracharya, S., and Held, D. PLAS: Latent action space for offline reinforcement learning. In Conference on Robot Learning, pp. 1719-1735. PMLR, 2021.

Ziebart, B. D. Modeling Purposeful Adaptive Behavior with the Principle of Maximum Causal Entropy. Carnegie Mellon University, 2010.

Zolna, K., Novikov, A., Konyushkova, K., Gulcehre, C., Wang, Z., Aytar, Y., Denil, M., de Freitas, N., and Reed, S. Offline learning from demonstrations and unlabeled experience. arXiv preprint arXiv:2011.13885, 2020.

# Appendices

In the Appendices, we provide proofs of the theoretical results (Appendix A), extend the analysis to other RL settings (Appendices C-D), and then provide details of the experiments (Appendix F).

# A. Proofs

# A.1. Proof of Lemma 4.1

Proof sketch. Lemma 4.1 shows that classifier actor-critic converges. The key idea of the proof is to show that the incremental updates for classifier actor-critic are exactly the same as the incremental updates for Q-learning. Q-learning converges, so an algorithm that performs the same incremental updates as Q-learning must also converge.

Proof. As the cross-entropy loss is minimized when the predictions equal the labels, updates for $\mathcal{L}_{\mathrm{critic}}(Q,\pi)$ can be written as $\frac{Q(s,a)}{Q(s,a)+1} \gets \frac{y^{\pi,Q_t}(s,a)}{y^{\pi,Q_t}(s,a)+1}$. If the updates are performed by averaging over all possible next states (e.g., in the tabular setting), these updates are equivalent to directly updating $Q(s,a) \gets y^{\pi,Q_t}(s,a) = r(s,a) + \gamma \mathbb{E}_{p(s'|s,a)\pi(a'|s')} [Q_t(s',a')]$, which is the standard policy evaluation update for policy $\pi(a|s)$.
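The claim that the cross-entropy update $\frac{Q}{Q+1} \gets \frac{y}{y+1}$ performs exactly the same incremental update as tabular policy evaluation can be checked numerically. Below is a minimal sketch on a synthetic MDP (all constants and names are our own, not from the paper); positive rewards keep $Q > 0$ so the "classifier" value $\frac{Q}{Q+1}$ stays in $(0,1)$:

```python
import numpy as np

# Synthetic tabular MDP (invented for illustration).
rng = np.random.default_rng(0)
S, A, gamma = 3, 2, 0.9
P = rng.dirichlet(np.ones(S), size=(S, A))   # P[s, a, s']: transition probabilities
r = rng.uniform(0.1, 1.0, size=(S, A))       # reward r(s, a), positive so Q > 0
pi = rng.dirichlet(np.ones(A), size=S)       # policy pi[s, a]

def td_target(Q):
    """y(s, a) = r(s, a) + gamma * E_{s' ~ p, a' ~ pi}[Q(s', a')]."""
    v = (pi * Q).sum(axis=1)                 # V(s') under pi
    return r + gamma * P @ v

Q_direct = np.zeros((S, A))                  # standard policy evaluation
Q_ce = np.zeros((S, A))                      # cross-entropy ("classifier") form
for _ in range(500):
    Q_direct = td_target(Q_direct)
    # Cross-entropy optimum sets C = Q/(Q+1) to y/(y+1) ...
    C = td_target(Q_ce)
    C = C / (C + 1.0)
    # ... and inverting Q/(Q+1) = C recovers exactly the same update.
    Q_ce = C / (1.0 - C)

assert np.allclose(Q_direct, Q_ce)           # identical iterates
```

Because the map $Q \mapsto \frac{Q}{Q+1}$ is invertible on $Q > 0$, the two update sequences coincide step by step, not just at the fixed point.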
Thus, we can invoke the standard result that policy evaluation converges to $Q^{\pi}$ (Agarwal et al., 2019, Theorem 1.14) to argue that updates for $\mathcal{L}_{\mathrm{critic}}$ likewise converge to $Q^{\pi}$.

In this proof, the TD targets were the expectation over the next state and next action. If Eq. 3 were optimized using a single-sample estimate of this expectation, $\mathbf{y} = r(s,a) + \gamma Q_t(s',a')$, then the updates would be biased:

$$
\frac{Q(s,a)}{Q(s,a)+1} \leftarrow \mathbb{E}\left[\frac{\mathbf{y}}{\mathbf{y}+1}\right] \leq \frac{\mathbb{E}[\mathbf{y}]}{\mathbb{E}[\mathbf{y}]+1} = \frac{y^{\pi,Q_t}(s,a)}{y^{\pi,Q_t}(s,a)+1}.
$$

In settings with stochastic transitions or policies, these updates would result in estimating a lower bound on $Q^{\pi}(s,a)$.

# A.2. Proof of Lemma 4.2 and Theorem 4.3

Proof. Our proof proceeds in three steps. First, we derive the update equations for the regularized critic update. That is, if we maintained a table of Q-values, what would the new value for $Q(s,a)$ be? Second, we show that these updates are equivalent to performing policy evaluation on a re-parametrized critic $\tilde{Q}(s,a) = Q(s,a)\frac{\pi(a|s)}{\beta(a|s)}$. We invoke the standard results for policy evaluation to prove that $\tilde{Q}(s,a)$ converges. Finally, we undo the reparametrization to obtain convergence results for $Q(s,a)$.

Step 0.
We start by rearranging the regularized critic objective:

$$
\begin{array}{l} \mathcal{L}_{\text{critic}}^{r}(Q, y^{\pi,Q_t}) \triangleq \mathcal{L}_{\text{critic}}(Q, y^{\pi,Q_t}) + \left(\mathbb{E}_{p(s)\pi(a|s)}[\log(Q(s,a)+1)] - \mathbb{E}_{p(s)\beta(a|s)}[\log(Q(s,a)+1)]\right) \\ = -\mathbb{E}_{p(s,a)}\left[y^{\pi,Q_t}(s,a)\log\frac{Q(s,a)}{Q(s,a)+1} + \log\frac{1}{Q(s,a)+1}\right] \\ \quad + \left(\mathbb{E}_{p(s)\pi(a|s)}\left[\log(Q(s,a)+1)\right] - \mathbb{E}_{p(s)\beta(a|s)}\left[\log(Q(s,a)+1)\right]\right) \\ = -\mathbb{E}_{p(s,a)}\left[y^{\pi,Q_t}(s,a)\log\frac{Q(s,a)}{Q(s,a)+1} + \log\frac{1}{Q(s,a)+1}\right] \\ \quad - \left(\mathbb{E}_{p(s)\pi(a|s)}\left[\log\frac{1}{Q(s,a)+1}\right] - \mathbb{E}_{p(s)\beta(a|s)}\left[\log\frac{1}{Q(s,a)+1}\right]\right) \\ = -\mathbb{E}_{p(s,a)}\left[y^{\pi,Q_t}(s,a)\log\frac{Q(s,a)}{Q(s,a)+1}\right] - \mathbb{E}_{p(s)\pi(a|s)}\left[\log\frac{1}{Q(s,a)+1}\right]. \\ \end{array}
$$

For the final cancellation, we used the fact that $p(s,a) = p(s)\beta(a \mid s)$: the $\log\frac{1}{Q(s,a)+1}$ term in the first expectation cancels with the $\mathbb{E}_{p(s)\beta(a|s)}$ term of the regularizer.

Step 1. To start, note that the regularized critic update is equivalent to a weighted classification loss: positive examples are sampled $(s,a) \sim p(s)\beta(a \mid s)$ and receive weight $\frac{y^{\pi,Q_t}(s,a)}{y^{\pi,Q_t}(s,a)+1}$, and negative examples are sampled $(s,a) \sim p(s)\pi(a \mid s)$ and receive weight $\frac{1}{y^{\pi,Q_t}(s,a)+1}$.
The Bayes' optimal classifier is given by

$$
\frac{Q(s,a)}{Q(s,a)+1} = \frac{\frac{y^{\pi,Q_t}(s,a)}{y^{\pi,Q_t}(s,a)+1} p(s)\beta(a \mid s)}{\frac{y^{\pi,Q_t}(s,a)}{y^{\pi,Q_t}(s,a)+1} p(s)\beta(a \mid s) + \frac{1}{y^{\pi,Q_t}(s,a)+1} p(s)\pi(a \mid s)} = \frac{y^{\pi,Q_t}(s,a)\beta(a \mid s)}{y^{\pi,Q_t}(s,a)\beta(a \mid s) + \pi(a \mid s)}.
$$

Solving for $Q(s,a)$ on the left-hand side, the optimal value for $Q(s,a)$ is given by

$$
Q(s,a) = y^{\pi,Q_t}(s,a)\frac{\beta(a \mid s)}{\pi(a \mid s)} = \left(r(s,a) + \gamma \mathbb{E}_{p(s'|s,a)\pi(a'|s')}\left[Q_t(s',a')\right]\right)\frac{\beta(a \mid s)}{\pi(a \mid s)}. \tag{7}
$$

This equation tells us what each update for the regularized critic loss does.

Step 2. To analyze these updates, we define $\tilde{Q}(s,a) \triangleq Q_t(s,a)\frac{\pi(a|s)}{\beta(a|s)}$. Then these updates can be written using $\tilde{Q}(s,a)$ as

$$
\tilde{Q}(s,a)\frac{\beta(a \mid s)}{\pi(a \mid s)} = \left(r(s,a) + \gamma \mathbb{E}_{p(s'|s,a)\pi(a'|s')}\left[\tilde{Q}(s',a')\frac{\beta(a' \mid s')}{\pi(a' \mid s')}\right]\right)\frac{\beta(a \mid s)}{\pi(a \mid s)}, \tag{8}
$$

which can be simplified to

$$
\tilde{Q}(s,a) = r(s,a) + \gamma \mathbb{E}_{p(s'|s,a)\beta(a'|s')}\left[\tilde{Q}(s',a')\right].
\tag{9}
$$

Note that the ratio $\frac{\beta(a'|s')}{\pi(a'|s')}$ inside the expectation acts like an importance weight, so that the expectation over $\pi(a' \mid s')$ becomes an expectation over $\beta(a' \mid s')$. Thus, the regularized critic updates are equivalent to performing policy evaluation on $\tilde{Q}(s,a)$. An immediate consequence is that the regularized critic updates converge, and they converge to $\tilde{Q}^*(s,a) = Q^{\beta}(s,a)$.

Step 3. Finally, we translate these convergence results for $\tilde{Q}(s,a)$ into convergence results for $Q(s,a)$. Written in terms of the original Q-values, we see that the optimal critic for the regularized critic update is

$$
Q^*(s,a) = \tilde{Q}^*(s,a)\frac{\beta(a \mid s)}{\pi(a \mid s)} = Q^{\beta}(s,a)\frac{\beta(a \mid s)}{\pi(a \mid s)}. \tag{10}
$$

This completes the proof of Lemma 4.2.

We now prove Theorem 4.3 by applying a logarithm:

Proof.

$$
\log Q^*(s,a) = \log\left(Q^{\beta}(s,a)\frac{\beta(a \mid s)}{\pi(a \mid s)}\right) = \log Q^{\beta}(s,a) + \log\beta(a \mid s) - \log\pi(a \mid s).
$$

We note that our proof does not account for stochastic and function approximation errors. However, if we assume that the TD updates are deterministic (e.g., as they are in deterministic MDPs), then the updates for classifier actor-critic are identical to those of Q-learning (Lemma 4.1). Thus, it immediately inherits any theoretical results regarding the propagation of errors for Q-learning.

While Theorem 4.3 shows that one-step RL and critic regularization have the same fixed point, it does not say how many transitions or gradient updates are required to reach those fixed points.

# A.3. Why use the cross-entropy loss?

Our proof of Theorem 4.3 helps explain why classifier actor-critic uses the cross-entropy loss for the critic, rather than the MSE loss.
Precisely, our analysis requires that the optimal Q-function be a ratio, $\tilde{Q}(s,a) = \frac{Q(s,a)\pi(a|s)}{\beta(a|s)}$. The cross-entropy loss can readily estimate ratios. For example, the optimal classifier for data drawn from $p(x)$ and $q(x)$ is $C(x) = \frac{p(x)}{p(x)+q(x)}$, so the ratio can be expressed as $\frac{C(x)}{1-C(x)} = \frac{p(x)}{q(x)}$. However, fitting a density model $C(x)$ to data drawn from (say) a 1:1 mixture of $p(x)$ and $q(x)$ would result in $C(x) = \frac{1}{2}p(x) + \frac{1}{2}q(x)$, which we cannot transform to express the ratio $\frac{p(x)}{q(x)}$ as a function of $C(x)$.

# A.4. Validating the Theory

Our theoretical results suggest that one-step RL and critic regularization should be most similar when critic regularization is applied with a regularization coefficient of $\lambda = 1$. To test this hypothesis, we took the task from Fig. 2 (Left) and measured the similarity between one-step RL and critic-regularized classifier actor-critic for varying values of the critic regularization parameter. We measured the similarity of the policies obtained by the two methods by counting the fraction of states where the two methods choose the same (argmax) action. The results, shown in Fig. 8, validate our theoretical prediction that these methods should be most similar with $\lambda = 1$.

# A.5. What about using the policy gradient?

![](images/d705f3fc1088e90743a0772e2d83ae4602504b80f0a9728e2a11997f057eec47.jpg)

Figure 8: Under the assumptions of Theorem 4.3, one-step RL is most similar to critic regularization with a coefficient of $\lambda = 1$.

Our analysis fundamentally requires using TD learning: the key step is that doing TD backups with one policy is equivalent to doing (modified) TD backups with a different policy. However, the actor updates for both methods could be implemented using a policy gradient or natural gradient, rather than a straight-through gradient estimator.
Indeed, much of the work on one-step RL methods (Peng et al., 2019; Siegel et al., 2020) uses an actor update that resembles a policy gradient or natural policy gradient (e.g., 1-step RL with a reverse KL penalty (Brandfonbrener et al., 2021)).

# B. Varying the regularization coefficient

While our main analysis (Theorem 4.3) showed that actor regularization and critic regularization yield the same policy when these regularizers are applied with a certain strength, in practice the strength of regularization is controlled by a hyperparameter. This hyperparameter raises a question: does the connection between one-step RL and critic regularization hold for different values of this hyperparameter?

In this section, we show that there remains a precise connection between actor and critic regularization, even for different values of this hyperparameter. This result suggests that the connection is stronger than the main result initially indicated; proving it also helps highlight how many regularization methods can be cast from a similar mold.

# B.1. A Regularization Coefficient.

We start by modifying the actor regularizer and critic regularizer introduced in Sec. 3.4 to include an additional hyperparameter.

Mixture policy. Both the actor and critic losses will make use of a mixture policy, $(1-\lambda)\pi(a \mid s) + \lambda\beta(a \mid s)$, where $\lambda \in [0,1]$ is a hyperparameter. Larger values of $\lambda$ yield a mixture policy that is closer to the behavioral policy; this will correspond to higher degrees of regularization. Mixtures of policies are commonly used in practice (Kumar et al., 2020, Appendix F), (Villaflor et al., 2020, Eq. 11), (Finn et al., 2016, Sec. 4.3), (Lyu et al., 2022), (Hazan et al., 2019, Eq. 2.5), even though they rarely appear in the theoretical offline RL literature.
Indeed, because critic regularization resembles a two-player zero-sum game, mixture policies might even be required to find a (Nash) equilibrium of the critic regularizer (Nash, 1951).

$\lambda$-weighted critic loss. With this concept of a mixture policy, we define the $\lambda$-weighted actor and critic regularizers. For the $\lambda$-weighted critic loss, we will change how the TD targets are computed. Instead of sampling the next action from $\pi$ or $\beta$, we will sample the next action from a $\lambda_{\mathrm{TD}}$-weighted combination of these two policies, reminiscent of how prior work has regularized the actions sampled for the TD backup (Fujimoto et al., 2019; Zhou et al., 2021):

$$
y^{\lambda_{\mathrm{TD}}}(s,a) \triangleq y^{(1-\lambda_{\mathrm{TD}})\pi + \lambda_{\mathrm{TD}}\beta}(s,a) = r(s,a) + \gamma \mathbb{E}_{\substack{p(s'|s,a) \\ a' \sim (1-\lambda_{\mathrm{TD}})\pi(\cdot|s') + \lambda_{\mathrm{TD}}\beta(\cdot|s')}}[Q(s',a')].
$$

When introducing one-step RL in Sec. 3.4, we used $\lambda_{\mathrm{TD}} = 1$.

Using this TD target, the $\lambda$-weighted critic loss can now be written as a combination of the un-regularized objective (Eq. 3) plus the regularized objective (Eq.
6):

$$
\begin{array}{l} \mathcal{L}_{\mathrm{critic}}^{r}(Q, \lambda_{\mathrm{critic}}) \triangleq (1-\lambda_{\mathrm{critic}})\left(-\mathbb{E}_{p(s,a)}\left[\frac{y^{\lambda_{\mathrm{TD}}}(s,a)}{y^{\lambda_{\mathrm{TD}}}(s,a)+1}\log\frac{Q(s,a)}{Q(s,a)+1} + \frac{1}{y^{\lambda_{\mathrm{TD}}}(s,a)+1}\log\frac{1}{Q(s,a)+1}\right]\right) \\ \quad + \lambda_{\mathrm{critic}}\left(-\mathbb{E}_{\substack{p(s,a) \\ a^{-}\sim\pi(\cdot|s)}}\left[\frac{y^{\lambda_{\mathrm{TD}}}(s,a)}{y^{\lambda_{\mathrm{TD}}}(s,a)+1}\log\frac{Q(s,a)}{Q(s,a)+1} + \frac{1}{y^{\lambda_{\mathrm{TD}}}(s,a)+1}\log\frac{1}{Q(s,a^{-})+1}\right]\right) \\ = -\mathbb{E}_{\substack{p(s,a) \\ a^{-}\sim(1-\lambda_{\mathrm{critic}})\pi(\cdot|s)+\lambda_{\mathrm{critic}}\beta(\cdot|s)}}\left[\frac{y^{\lambda_{\mathrm{TD}}}(s,a)}{y^{\lambda_{\mathrm{TD}}}(s,a)+1}\log\frac{Q(s,a)}{Q(s,a)+1} + \frac{1}{y^{\lambda_{\mathrm{TD}}}(s,a)+1}\log\frac{1}{Q(s,a^{-})+1}\right]. \tag{11} \\ \end{array}
$$

The second line rewrites this objective: the first term looks the same as the original "positive" term in the critic objective, while the "negative" term uses actions sampled from a mixture of the current policy and the behavioral policy. When $\lambda_{\mathrm{critic}} = 1$, we recover the regularized critic loss introduced in Sec. 3.4.

$\lambda$-weighted actor loss. Finally, the strength of the actor regularizer can be controlled by changing the reverse KL penalty. While it may seem like changing the reward scale would vary the strength of the actor loss, this is not the case for classifier actor-critic because of the $\log(\cdot)$ in the actor loss.
Instead, we will relax the reverse KL penalty between the learned policy $\pi(a \mid s)$ and the behavioral policy $\beta(a \mid s)$, so that only the mixture policy needs to be close to the behavioral policy:

$$
\mathcal{L}_{\text{actor}}^{r}(\pi, \lambda_{\mathrm{KL}}) \triangleq \mathbb{E}_{p(s)\pi(a|s)}[\log Q(s,a) + \log\beta(a \mid s) - \log((1-\lambda_{\mathrm{KL}})\pi(a \mid s) + \lambda_{\mathrm{KL}}\beta(a \mid s))]. \tag{12}
$$

Replacing $\beta(a \mid s)$ with the mixture policy has an effect similar to that of decreasing the weight applied to the KL penalty; the gap between the two is determined by the Jensen gap (Abramovich & Persson, 2016; Gao et al., 2017). When introducing one-step RL in Sec. 3.4, we used $\lambda_{\mathrm{KL}} = 1$, together with $\lambda_{\mathrm{TD}} = 1$.

In summary, the strength of the actor and critic regularizers can be controlled through additional hyperparameters $(\lambda_{\mathrm{critic}}, \lambda_{\mathrm{TD}}, \lambda_{\mathrm{KL}})$. Indeed, it is typical for offline RL methods to require many hyperparameters (Brandfonbrener et al., 2021; Lu et al., 2021; Paine et al., 2020; Wu et al., 2019), and performance is sensitive to their settings. However, the close connection that we have shown between actor and critic regularizers allows us to decrease the number of hyperparameters.

# B.2. Analysis

In our main result (Thm. 4.3), we showed that one-step RL and critic regularization are equivalent when $\lambda_{\mathrm{critic}} = \lambda_{\mathrm{TD}} = \lambda_{\mathrm{KL}} = 1$. This is a large value for the regularization strength, and we now consider what happens for smaller degrees of regularization: is there still a connection between one-step RL and critic regularization?

The following theorem will prove that this is the case.
In particular, applying critic regularization with coefficient $\lambda_{\mathrm{critic}}$ yields the same policy as applying one-step RL with $\lambda_{\mathrm{TD}} = \lambda_{\mathrm{KL}} = \lambda_{\mathrm{critic}}$. That is, there is a very simple recipe for converting the hyperparameters for critic regularization into the hyperparameters for one-step RL.

Theorem B.1. Let policy $\pi(a|s)$ be given, let $Q^{\beta}(s,a)$ be the Q-function of the behavioral policy, and let $Q_r^{\lambda_{TD}}(s,a,\lambda_{\text{critic}})$ be the critic obtained by the $\lambda_{\text{critic}}$-weighted regularized critic update (Eq. 11) using TD targets $y^{\lambda_{TD}}(s,a)$. If $\lambda_{\text{critic}} = \lambda_{TD} = \lambda_{KL}$, then the $\lambda_{KL}$-weighted actor loss (Eq. 12) is equivalent to the un-regularized policy objective using the regularized critic:

$$
\begin{array}{l} \mathbb{E}_{p(s)\pi(a|s)}\left[\log Q(s,a) + \log\beta(a \mid s) - \log\left((1-\lambda_{KL})\pi(a \mid s) + \lambda_{KL}\beta(a \mid s)\right)\right] \\ = \mathbb{E}_{\pi(a|s)}\left[\log Q_r^{\lambda_{TD}}(s,a,\lambda_{\text{critic}})\right] \quad \text{for all } s. \\ \end{array}
$$

While we used the cross-entropy loss for this result, it turns out that the result also holds for the more standard MSE loss (we omit the proof for brevity).

Limitations. Before presenting the proof in Sec. B.3, we discuss a few limitations of this result. Like the rest of the analysis in this paper, the form of the critic regularizer is different from that often used in practice. Additionally, our analysis ignores many sources of error (e.g., sampling, function approximation), and assumes that each objective is optimized exactly.

# B.3. Proof of Theorem B.1

Proof. We start by defining the fixed point of the $\lambda$-weighted regularized critic loss.
Like in the single-task setting, this loss resembles a weighted classification problem, so we can write down the Bayes' optimal classifier as

$$
\begin{array}{l} \frac{Q(s,a)}{Q(s,a)+1} = \frac{\frac{y^{\lambda_{\mathrm{TD}}}(s,a)}{y^{\lambda_{\mathrm{TD}}}(s,a)+1} p(s)\beta(a \mid s)}{\frac{y^{\lambda_{\mathrm{TD}}}(s,a)}{y^{\lambda_{\mathrm{TD}}}(s,a)+1} p(s)\beta(a \mid s) + \frac{1}{y^{\lambda_{\mathrm{TD}}}(s,a)+1} p(s)\left((1-\lambda_{\mathrm{critic}})\pi(a \mid s) + \lambda_{\mathrm{critic}}\beta(a \mid s)\right)} \\ = \frac{y^{\lambda_{\mathrm{TD}}}(s,a)\beta(a \mid s)}{y^{\lambda_{\mathrm{TD}}}(s,a)\beta(a \mid s) + (1-\lambda_{\mathrm{critic}})\pi(a \mid s) + \lambda_{\mathrm{critic}}\beta(a \mid s)}. \\ \end{array}
$$

Solving for $Q(s,a)$ on the left-hand side, the optimal value for $Q(s,a)$ is given by

$$
\begin{array}{l} Q(s,a) = y^{\lambda_{\mathrm{TD}}}(s,a)\frac{\beta(a \mid s)}{(1-\lambda_{\mathrm{critic}})\pi(a \mid s) + \lambda_{\mathrm{critic}}\beta(a \mid s)} \\ = \left(r(s,a) + \gamma \mathbb{E}_{p(s'|s,a),\, a' \sim (1-\lambda_{\mathrm{TD}})\pi(\cdot|s') + \lambda_{\mathrm{TD}}\beta(\cdot|s')}\left[Q(s',a')\right]\right)\frac{\beta(a \mid s)}{(1-\lambda_{\mathrm{critic}})\pi(a \mid s) + \lambda_{\mathrm{critic}}\beta(a \mid s)}. \tag{13} \\ \end{array}
$$

Note that the next action $a'$ is sampled from a mixture policy defined by $\lambda_{\mathrm{TD}}$. This equation tells us what each update for the $\lambda$-weighted regularized critic loss does.
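The algebra of solving the Bayes-optimal classifier for $Q(s,a)$ can be spot-checked numerically. A minimal sketch with made-up scalar values for $y$, $\beta(a|s)$, $\pi(a|s)$, and $\lambda_{\mathrm{critic}}$ (the marginal $p(s)$ cancels from the numerator and denominator, so it is omitted):

```python
import numpy as np

# Made-up scalars standing in for y(s,a), beta(a|s), pi(a|s), lambda_critic.
y, beta, pi_, lam = 2.5, 0.3, 0.6, 0.4
mix = (1.0 - lam) * pi_ + lam * beta         # mixture policy for negatives

w_pos = y / (y + 1.0)                        # weight on positives (drawn from beta)
w_neg = 1.0 / (y + 1.0)                      # weight on negatives (drawn from mix)
C = (w_pos * beta) / (w_pos * beta + w_neg * mix)  # Bayes-optimal classifier

Q = C / (1.0 - C)                            # invert Q/(Q+1) = C
assert np.isclose(Q, y * beta / mix)         # matches the first line of Eq. 13
```

The same check with `lam = 1.0` reduces `mix` to `beta`, and with `lam = 0.0` reduces it to `pi_`, tracing out the two extremes of the mixture.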
To analyze these updates, we define

$$
\tilde{Q}(s,a) \triangleq Q(s,a)\frac{(1-\lambda_{\text{critic}})\pi(a \mid s) + \lambda_{\text{critic}}\beta(a \mid s)}{\beta(a \mid s)}.
$$

Like before, the ratio $\frac{\beta(a'|s')}{(1-\lambda_{\mathrm{TD}})\pi(a'|s') + \lambda_{\mathrm{TD}}\beta(a'|s')}$ can act like an importance weight. When $\lambda_{\mathrm{TD}} = \lambda_{\mathrm{critic}}$, this importance weight cancels with the sampling distribution, providing the following identity:

$$
\begin{array}{l} \mathbb{E}_{p(s'|s,a),\, a' \sim (1-\lambda_{\mathrm{TD}})\pi(\cdot|s') + \lambda_{\mathrm{TD}}\beta(\cdot|s')}\left[Q(s',a')\right] \\ = \mathbb{E}_{p(s'|s,a),\, a' \sim (1-\lambda_{\mathrm{TD}})\pi(\cdot|s') + \lambda_{\mathrm{TD}}\beta(\cdot|s')}\left[\tilde{Q}(s',a')\frac{\beta(a' \mid s')}{(1-\lambda_{\mathrm{critic}})\pi(a' \mid s') + \lambda_{\mathrm{critic}}\beta(a' \mid s')}\right] \\ = \mathbb{E}_{p(s'|s,a),\, a' \sim \beta(\cdot|s')}\left[\tilde{Q}(s',a')\right]. \\ \end{array}
$$

Substituting this identity in Eq.
13, we can write the updates using $\tilde{Q}(s,a)$:

$$
\begin{aligned}
&\tilde{Q}(s, a)\, \frac{\beta(a \mid s)}{(1 - \lambda_{\mathrm{critic}}) \pi(a \mid s) + \lambda_{\mathrm{critic}} \beta(a \mid s)} \\
&= \Big(r(s, a) + \mathbb{E}_{p(s' \mid s, a),\, a' \sim \beta(\cdot \mid s')} \big[\tilde{Q}(s', a')\big]\Big)\, \frac{\beta(a \mid s)}{(1 - \lambda_{\mathrm{critic}}) \pi(a \mid s) + \lambda_{\mathrm{critic}} \beta(a \mid s)},
\end{aligned}
$$

which can be simplified to

$$
\tilde{Q}(s, a) = r(s, a) + \mathbb{E}_{p(s' \mid s, a),\, a' \sim \beta(\cdot \mid s')} \big[\tilde{Q}(s', a')\big].
$$

We then translate these convergence results for $\tilde{Q}(s, a)$ into convergence results for $Q(s, a)$. Written in terms of the original Q-values, we see that the optimal critic for the regularized critic update is

$$
Q^*(s, a) = Q^{\beta}(s, a)\, \frac{\beta(a \mid s)}{(1 - \lambda_{\mathrm{critic}}) \pi(a \mid s) + \lambda_{\mathrm{critic}} \beta(a \mid s)}. \tag{14}
$$

Note that this holds for any value of $\lambda_{\mathrm{critic}} = \lambda_{\mathrm{TD}} \in [0,1]$. This result suggests that two common forms of regularization, decreasing the values predicted at unseen actions and regularizing the actions used in the TD backup, can produce the same effect: a critic that estimates the Q-values of the behavioral policy (multiplied by some importance weight).
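Eq. 14 can be made concrete with a toy tabular experiment. The sketch below is our own illustration, not the paper's code; in particular, we insert a discount factor $\gamma$ into the backup so that the iteration converges, whereas the derivation above leaves it implicit. With $\lambda_{\mathrm{TD}} = \lambda_{\mathrm{critic}}$, iterating the regularized update on a random MDP recovers $Q^\beta$ times the importance weight.

```python
# Toy tabular check (an illustrative sketch, not the paper's code): iterate the
# regularized update with lambda_TD = lambda_critic and a discount factor gamma
# added for convergence; confirm the fixed point matches Eq. 14:
# Q^beta(s, a) * beta(a|s) / ((1 - lam) * pi(a|s) + lam * beta(a|s)).
import numpy as np

rng = np.random.default_rng(0)
S, A, gamma, lam = 4, 3, 0.9, 0.5
P = rng.random((S, A, S)); P /= P.sum(-1, keepdims=True)       # dynamics p(s'|s,a)
r = rng.random((S, A))                                          # rewards r(s, a)
beta = rng.random((S, A)); beta /= beta.sum(-1, keepdims=True)  # behavioral policy
pi = rng.random((S, A)); pi /= pi.sum(-1, keepdims=True)        # learned policy
mix = (1 - lam) * pi + lam * beta
w = beta / mix                                                  # importance weight

# Q^beta via policy evaluation: Q = r + gamma * E_{s'|s,a, a'~beta}[Q(s', a')]
Q_beta = np.zeros((S, A))
for _ in range(2000):
    Q_beta = r + gamma * P @ (beta * Q_beta).sum(-1)

# Regularized update: Q <- (r + gamma * E_{s'|s,a, a'~mix}[Q(s', a')]) * w
Q = np.zeros((S, A))
for _ in range(2000):
    Q = (r + gamma * P @ (mix * Q).sum(-1)) * w

assert np.allclose(Q, Q_beta * w, atol=1e-6)
```

Inside the backup, the mixture sampling density and the importance weight cancel exactly (`mix * w == beta`), which is why the iteration reduces to policy evaluation of $\beta$ up to the outer weight.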
Finally, substituting this Q-function into the un-regularized actor loss, we see that the result is equivalent to the $\lambda$-weighted actor loss:

$$
\mathbb{E}_{p(s) \pi(a \mid s)} \left[\log Q^*(s, a)\right] = \mathbb{E}_{p(s) \pi(a \mid s)} \Big[\log Q^{\beta}(s, a) + \underbrace{\log \beta(a \mid s) - \log\big((1 - \lambda_{\mathrm{KL}}) \pi(a \mid s) + \lambda_{\mathrm{KL}} \beta(a \mid s)\big)}_{\lambda\text{-weighted actor regularizer}}\Big].
$$

# C. Regularization for Goal-Conditioned Problems

Like single-task RL problems, goal-conditioned RL problems have also been approached with both one-step methods (Ghosh et al., 2020; Ding et al., 2019; Sun et al., 2019) and critic regularization (Chebotar et al., 2021). In these problems, the aim is to learn a goal-conditioned policy $\pi(a \mid s, s_g)$ that maximizes the expected discounted sum of goal-conditioned rewards $r_g(s, a)$, where goals are sampled $s_g \sim p_g(s_g)$:

$$
\max_{\pi} \mathbb{E}_{p_g(s_g)} \mathbb{E}_{\pi(\tau \mid s_g)} \left[\sum_{t=0}^{\infty} \gamma^t r_g(s_t, a_t)\right].
$$

We will use the goal-conditioned reward function $r_g(s, a) = p(s' = s_g \mid s, a)$, which is defined in terms of the environment dynamics. In settings with discrete states, maximizing this reward function is equivalent to maximizing the sparse indicator reward function $(r_g(s, a) = \mathbb{1}(s_g = s))$.

In this section, we show that one-step RL and critic regularization are equivalent for a certain goal-conditioned actor-critic method. Unlike our analysis in the single-task setting, the analysis here uses an existing method, C-learning (Eysenbach et al., 2020b).
C-learning is a TD method that already makes use of the cross-entropy loss for training the critic:

$$
\begin{aligned}
\max_{Q}\; &(1 - \gamma)\, \mathbb{E}_{p(s, a, s')} \left[\log \frac{Q(s, a, s_g = s')}{Q(s, a, s_g = s') + 1}\right] + \gamma\, \mathbb{E}_{p(s, a)\, p_g(s_g)} \left[y^{\pi, Q_t}(s, a, s_g) \log \frac{Q(s, a, s_g)}{Q(s, a, s_g) + 1}\right] \\
&+ \mathbb{E}_{p(s, a)\, p_g(s_g)} \left[\log \frac{1}{Q(s, a, s_g) + 1}\right],
\end{aligned}
$$

where $y^{\pi, Q_t}(s, a, s_g) = \mathbb{E}_{p(s' \mid s, a)\, \pi(a' \mid s', s_g)}[Q_t(s', a', s_g)]$ serves the role of the TD target.

The first two terms increase the Q-values while the last term decreases the Q-values. The actor is updated to maximize the Q-values. While this objective for the actor can be written in many ways, we will write it as maximizing a log ratio because it will allow us to draw a precise equivalence between actor and critic regularization:

$$
\max_{\pi} \mathbb{E}_{p_g(s_g)\, p(s)\, \pi(a \mid s, s_g)} \left[\log Q(s, a, s_g)\right].
$$

We will now consider variants of C-learning that incorporate actor and critic regularization.

One-step RL. We will consider a variant of C-learning that resembles one-step RL (Brandfonbrener et al., 2021).
The critic update will be similar to before, but the next actions sampled for the TD updates will be sampled from the marginal behavioral policy:

$$
\begin{aligned}
\max_{Q}\; &(1 - \gamma)\, \mathbb{E}_{p(s, a, s')} \left[\log \frac{Q(s, a, s_g = s')}{Q(s, a, s_g = s') + 1}\right] + \gamma\, \mathbb{E}_{p(s, a)\, p_g(s_g)} \left[y^{\beta, Q_t}(s, a, s_g) \log \frac{Q(s, a, s_g)}{Q(s, a, s_g) + 1}\right] \\
&+ \mathbb{E}_{p(s, a)\, p_g(s_g)} \left[\log \frac{1}{Q(s, a, s_g) + 1}\right],
\end{aligned}
$$

where $y^{\beta, Q_t}(s, a, s_g) = \mathbb{E}_{p(s' \mid s, a)\, \beta(a' \mid s')}[Q_t(s', a', s_g)]$. The actor update will be modified to include a reverse KL divergence:

$$
\max_{\pi} \mathbb{E}_{p(s)\, p_g(s_g)\, \pi(a \mid s, s_g)} \left[\log Q(s, a, s_g) + \log \beta(a \mid s) - \log \pi(a \mid s, s_g)\right]. \tag{15}
$$

Note that we are regularizing the policy to be similar to the average behavioral policy, $\beta(a \mid s)$. Compared to regularization towards a goal-conditioned behavioral policy $\beta(a \mid s, s_g)$, this choice gives the policy additional flexibility: when trying to reach goal $s_g$, it is allowed to take actions that were not taken by $\beta(a \mid s, s_g)$, as long as they were taken by the behavioral policy when trying to reach some other goal $s_g'$.

Critic regularization.
To regularize the critic, we will modify the "negative" term in the C-learning objective to use actions sampled from the policy:

$$
\begin{array}{ll}
\max_{Q}\; (1 - \gamma)\, \mathbb{E}_{p(s, a, s')} \left[\log \dfrac{Q(s, a, s_g = s')}{Q(s, a, s_g = s') + 1}\right] & (16) \\[2ex]
\quad + \gamma\, \mathbb{E}_{p(s, a)\, p_g(s_g)} \left[y^{\pi, Q_t}(s, a, s_g) \log \dfrac{Q(s, a, s_g)}{Q(s, a, s_g) + 1}\right] & (17) \\[2ex]
\quad + \mathbb{E}_{p(s)\, p_g(s_g),\, a \sim \pi(\cdot \mid s, s_g)} \left[\log \dfrac{1}{Q(s, a, s_g) + 1}\right]. & (18)
\end{array}
$$

# C.1. Analysis for Goal-Conditioned Problems

Like in the single-task setting, these two forms of regularization yield the same fixed points:

Theorem C.1. Let policy $\pi(a \mid s, s_g)$ be given, let $Q^{\beta}(s, a, s_g)$ be the $Q$-values for the marginal behavioral policy $\beta(a \mid s)$, and let $Q_r^\pi(s, a, s_g)$ be the critic obtained by the regularized critic update (Eq. 18). Then performing regularized policy updates (Eq. 15) using the behavioral critic is equivalent to the un-regularized policy objective using the regularized critic:

$$
\mathbb{E}_{\pi(a \mid s, s_g)} \left[\log Q^{\beta}(s, a, s_g) + \log \beta(a \mid s) - \log \pi(a \mid s, s_g)\right] = \mathbb{E}_{\pi(a \mid s, s_g)} \left[\log Q_r^{\pi}(s, a, s_g)\right]
$$

for all states $s$ and goals $s_g$.

Proof. We start by determining the fixed point of critic-regularized C-learning.
Like in the single-task setting, the C-learning objective resembles a weighted classification problem, so we can write down the Bayes' optimal classifier as

$$
\frac{Q(s, a, s_g)}{Q(s, a, s_g) + 1} = \frac{\left((1 - \gamma)\, p(s' = s_g \mid s, a) + \gamma\, p(s = s_g)\, y^{\pi, Q_t}(s, a, s_g)\right) \beta(a \mid s)}{\left((1 - \gamma)\, p(s' = s_g \mid s, a) + \gamma\, p(s = s_g)\, y^{\pi, Q_t}(s, a, s_g)\right) \beta(a \mid s) + p(s_g)\, \pi(a \mid s, s_g)}.
$$

Solving for $Q(s, a, s_g)$ on the left-hand side, the optimal value for $Q(s, a, s_g)$ is given by

$$
Q(s, a, s_g) = \left((1 - \gamma)\, p(s' = s_g \mid s, a) + \gamma\, p(s = s_g)\, y^{\pi, Q_t}(s, a, s_g)\right) \frac{\beta(a \mid s)}{\pi(a \mid s, s_g)}.
$$

This tells us what each critic-regularized C-learning update does.

To analyze these updates, we define $\tilde{Q}(s, a, s_g) \triangleq Q(s, a, s_g)\, \frac{\pi(a \mid s, s_g)}{\beta(a \mid s)}$. Then these updates can be written using $\tilde{Q}(s, a, s_g)$ as

$$
\tilde{Q}(s, a, s_g)\, \frac{\beta(a \mid s)}{\pi(a \mid s, s_g)} = \left((1 - \gamma)\, p(s' = s_g \mid s, a) + \gamma\, \mathbb{E}_{p(s' \mid s, a)\, \pi(a' \mid s', s_g)} \left[\tilde{Q}(s', a', s_g)\, \frac{\beta(a' \mid s')}{\pi(a' \mid s', s_g)}\right]\right) \frac{\beta(a \mid s)}{\pi(a \mid s, s_g)}.
$$

These updates can be simplified to

$$
\tilde{Q}(s, a, s_g) = (1 - \gamma)\, p(s' = s_g \mid s, a) + \gamma\, \mathbb{E}_{p(s' \mid s, a)\, \beta(a' \mid s')} \left[\tilde{Q}(s', a', s_g)\right].
$$

Like before, the ratio $\frac{\beta(a'|s')}{\pi(a'|s',s_g)}$ inside the expectation acts like an importance weight.
Thus, the regularized critic updates are equivalent to performing policy evaluation on $\tilde{Q}(s,a,s_g)$. Note that this is estimating the probability that the average behavioral policy $\beta(a \mid s)$ reaches goal $s_g$; it is not the probability that a goal-directed behavioral policy $\beta(a \mid s, s_g)$ reaches the goal.

Finally, we translate these convergence results for $\tilde{Q}(s,a,s_g)$ into convergence results for $Q(s,a,s_g)$. Written in terms of the original Q-values, we see that the optimal critic for the regularized critic update is

$$
Q^*(s, a, s_g) = \tilde{Q}^*(s, a, s_g)\, \frac{\beta(a \mid s)}{\pi(a \mid s, s_g)} = Q^{\beta(\cdot \mid \cdot)}(s, a, s_g)\, \frac{\beta(a \mid s)}{\pi(a \mid s, s_g)}.
$$

Thus, critic regularization implicitly regularizes the actor objective so that it is the same objective as one-step RL:

$$
\begin{aligned}
&\mathbb{E}_{p(s),\, s_g \sim p(s),\, \pi(a \mid s, s_g)} \left[\log Q^*(s, a, s_g)\right] \\
&= \mathbb{E}_{p(s),\, s_g \sim p(s),\, \pi(a \mid s, s_g)} \left[\log Q^{\beta(\cdot \mid \cdot)}(s, a, s_g) + \log \beta(a \mid s) - \log \pi(a \mid s, s_g)\right].
\end{aligned}
$$

# D. Regularization for Example-based Control Problems

While specifying tasks in terms of reward functions is standard for MDPs, it can be difficult for real-world applications of RL. So, prior work has looked at specifying tasks by goal states (as in the previous section) or sets of states representing good outcomes (Pinto & Gupta, 2016; Tung et al., 2018; Fu et al., 2018). In addition to requiring more flexible and user-friendly forms of task specification, these algorithms targeted at real-world applications often demand regularization.
In the same way that prior goal-conditioned RL algorithms have employed critic regularization, so too have prior example-based control algorithms (Singh et al., 2019; Hatch et al., 2022). In this section, we extend our analysis to regularization of an example-based control algorithm. Again, we will show that a certain form of critic regularization is equivalent to regularizing the actor.

We first define the problem of example-based control (Fu et al., 2018). In these problems, the agent is given a small collection of states $s \sim p_{e}(s)$, which are examples of successful outcomes. The aim is to learn a policy $\pi(a \mid s)$ that maximizes the probability of reaching a success state:

$$
\max_{\pi} \mathbb{E}_{\pi(\tau)} \left[\sum_{t=0}^{\infty} \gamma^t p_e(s_t)\right].
$$

Note that this objective function is exactly equivalent to a reward-maximization problem, with a reward function $r(s, a) = p_e(s)$.

In this section, we show that one-step RL and critic regularization are equivalent for a certain example-based control algorithm. Unlike our analysis in the single-task setting, the analysis here uses an existing method, RCE (Eysenbach et al., 2021). RCE is a TD method that already makes use of the cross-entropy loss for training the critic:

$$
\max_{Q}\; (1 - \gamma)\, \mathbb{E}_{p_e(s)\, \beta(a \mid s)} \left[\log \frac{Q(s, a)}{Q(s, a) + 1}\right] + \mathbb{E}_{p(s, a)} \left[\gamma\, y^{\pi, Q_t}(s, a) \log \frac{Q(s, a)}{Q(s, a) + 1} + \log \frac{1}{Q(s, a) + 1}\right],
$$

where $y^{\pi, Q_t}(s, a) = \mathbb{E}_{p(s' \mid s, a)\, \pi(a' \mid s')}[Q_t(s', a')]$ serves the role of the TD target. The first two terms increase the Q-values while the last term decreases the Q-values. The actor is updated to maximize the Q-values.
While this objective for the actor can be written in many ways, we will write it as maximizing a log ratio because it will allow us to draw a precise equivalence between actor and critic regularization:

$$
\max_{\pi} \mathbb{E}_{p(s)\, \pi(a \mid s)} \left[\log Q(s, a)\right].
$$

We will now consider variants of RCE that incorporate actor and critic regularization.

One-step RL. We will consider a variant of RCE that resembles one-step RL (Brandfonbrener et al., 2021). The critic update will be similar to before, but the next actions sampled for the TD updates will be sampled from the behavioral policy:

$$
\max_{Q}\; (1 - \gamma)\, \mathbb{E}_{p_e(s)\, \beta(a \mid s)} \left[\log \frac{Q(s, a)}{Q(s, a) + 1}\right] + \mathbb{E}_{p(s, a)} \left[\gamma\, y^{\beta, Q_t}(s, a) \log \frac{Q(s, a)}{Q(s, a) + 1} + \log \frac{1}{Q(s, a) + 1}\right],
$$

where $y^{\beta, Q_t}(s, a) = \mathbb{E}_{p(s' \mid s, a)\, \beta(a' \mid s')}[Q_t(s', a')]$. The actor update will be modified to include a reverse KL divergence:

$$
\max_{\pi} \mathbb{E}_{p(s)\, \pi(a \mid s)} \left[\log Q(s, a) + \log \beta(a \mid s) - \log \pi(a \mid s)\right]. \tag{19}
$$

Critic regularization. To regularize the critic, we will modify the "negative" term in the RCE objective to use actions sampled from the policy:

$$
(1 - \gamma)\, \mathbb{E}_{p_e(s)\, \beta(a \mid s)} \left[\log \frac{Q(s, a)}{Q(s, a) + 1}\right] + \mathbb{E}_{p(s, a),\, a^- \sim \pi(\cdot \mid s)} \left[\gamma\, y^{\pi, Q_t}(s, a) \log \frac{Q(s, a)}{Q(s, a) + 1} + \log \frac{1}{Q(s, a^-) + 1}\right]. \tag{20}
$$

# D.1. Analysis for Example-based Control Problems

Like in the single-task setting, these two forms of regularization yield the same fixed points:

Theorem D.1.
Let policy $\pi(a \mid s)$ be given, let $Q^{\beta}(s, a)$ be the $Q$-values for the behavioral policy $\beta(a \mid s)$, and let $Q_r^\pi(s, a)$ be the critic obtained by the regularized critic update (Eq. 20). Then performing regularized policy updates (Eq. 19) using the behavioral critic is equivalent to the un-regularized policy objective using the regularized critic:

$$
\mathbb{E}_{\pi(a \mid s)} \left[\log Q^{\beta}(s, a) + \log \beta(a \mid s) - \log \pi(a \mid s)\right] = \mathbb{E}_{\pi(a \mid s)} \left[\log Q_r^{\pi}(s, a)\right]
$$

for all states $s$.

Proof. We start by determining the fixed point of critic-regularized RCE. Like in the single-task setting, the RCE objective resembles a weighted classification problem, so we can write down the Bayes' optimal classifier as

$$
\frac{Q(s, a)}{Q(s, a) + 1} = \frac{\left((1 - \gamma)\, p_e(s) + \gamma\, y^{\pi, Q_t}(s, a)\right) \beta(a \mid s)}{\left((1 - \gamma)\, p_e(s) + \gamma\, y^{\pi, Q_t}(s, a)\right) \beta(a \mid s) + \pi(a \mid s)}.
$$

Solving for $Q(s, a)$ on the left-hand side, the optimal value for $Q(s, a)$ is given by

$$
Q(s, a) = \left((1 - \gamma)\, p_e(s) + \gamma\, y^{\pi, Q_t}(s, a)\right) \frac{\beta(a \mid s)}{\pi(a \mid s)}.
$$

This tells us what each critic-regularized RCE update does.

To analyze these updates, we define $\tilde{Q}(s, a) \triangleq Q(s, a)\, \frac{\pi(a \mid s)}{\beta(a \mid s)}$. Then these updates can be written using $\tilde{Q}(s, a)$ as

$$
\tilde{Q}(s, a)\, \frac{\beta(a \mid s)}{\pi(a \mid s)} = \left((1 - \gamma)\, p_e(s) + \gamma\, \mathbb{E}_{p(s' \mid s, a)\, \pi(a' \mid s')} \left[\tilde{Q}(s', a')\, \frac{\beta(a' \mid s')}{\pi(a' \mid s')}\right]\right) \frac{\beta(a \mid s)}{\pi(a \mid s)}.
$$

These updates can be simplified to

$$
\tilde{Q}(s, a) = (1 - \gamma)\, p_e(s) + \gamma\, \mathbb{E}_{p(s' \mid s, a)\, \beta(a' \mid s')} \left[\tilde{Q}(s', a')\right].
$$

Like before, the ratio $\frac{\beta(a' | s')}{\pi(a' | s')}$ inside the expectation acts like an importance weight. Thus, the regularized critic updates are equivalent to performing policy evaluation on $\tilde{Q}(s, a)$.

Finally, we translate these convergence results for $\tilde{Q}(s,a)$ into convergence results for $Q(s,a)$. Written in terms of the original Q-values, we see that the optimal critic for the regularized critic update is

$$
Q^*(s, a) = \tilde{Q}^*(s, a)\, \frac{\beta(a \mid s)}{\pi(a \mid s)} = Q^{\beta}(s, a)\, \frac{\beta(a \mid s)}{\pi(a \mid s)}.
$$

Thus, critic regularization implicitly regularizes the actor objective so that it is the same objective as one-step RL:

$$
\mathbb{E}_{p(s)\, \pi(a \mid s)} \left[\log Q^*(s, a)\right] = \mathbb{E}_{p(s)\, \pi(a \mid s)} \left[\log Q^{\beta}(s, a) + \log \beta(a \mid s) - \log \pi(a \mid s)\right].
$$

# E. Additional Experiments

# E.1. Is classifier actor critic a good model of practically-used RL methods?

To study whether classifier actor critic is an accurate model for practically-used RL methods, we extended Fig. 3 by adding three additional methods: classifier actor critic, classifier actor critic with actor regularization (Equations 4 and 3), and classifier actor critic with critic regularization (Eq. 6). Comparing Q-learning with classifier actor critic, we see that both yield reward-maximizing policies (in line with Lemma 4.1), though these policies are different (i.e., they perform symmetry breaking in different ways).
Comparing one-step RL with actor-regularized classifier actor critic, we observe that both methods take the same three actions at the states within the blue + +![](images/5fc5d80e41445e69693859028128a976522ebebda62cbe1fc2132cd243129a4f.jpg) + +![](images/caae64b054815114ab8fd2eea61c2bdd409639a1385dcd5a55863217a3bc00fa.jpg) + +![](images/cf68552b978f59208e6e7e805ec1bf83ab9d3e63e2795fa7b9ef03d0148c3e5b.jpg) + +![](images/dbec32173e8bbe472c87817f5a74f888e11892aeeb1ab3e72eae73444b839cd9.jpg) +(a) Q-learning +(d) classifier actor critic (no regularization) +Figure 9: Extension of Fig. 3 with additional figures for classifier actor critic and regularized variants. As expected, classifier actor critic produces a reward-maximizing policy (Lemma 4.1), as does Q-learning. In line with Theorem 4.3, critic-regularized and actor-regularized classifier actor critic product the same policies. Comparing (b) one-step RL (e) actor-regularized classifier actor critic, we see that the methods produce the same three actions within the blue box (states where we expect regularization to be important), but can produce different actions in states outside the blue box. The comparisons between (c) CQL and (d) critic-regularized classifier actor critic are similar, supporting the claim that classifier actor critic is only an approximate (no a perfect) model of practically-used offline RL methods. + +![](images/c0de547f15a62c1daa0b93416cbbc43b0a702ec01085acc760d0a29e1318574f.jpg) +(b) One-step RL +(e) classifier actor critic (actor regularization) + +![](images/0236200e16c29ffcb4aa225fd6cda9fa2196d6daaef30944ddf706886e9b47df.jpg) +(c) $\mathrm{CQL}(\lambda = 10)$ +(f) classifier actor critic (critic regularization) + +box, states where we expect regularization to have a large effect. Outside this blue box, these two methods occasionally take different actions. 
Similarly, comparing CQL to critic-regularized classifier actor critic, we observe that the methods take the same actions within the blue box, but occasionally take different actions outside the blue box. In line with Theorem 4.3, classifier actor critic with critic regularization produces the exact same policy as classifier actor critic with actor regularization. Taken together, these results provide empirical backing for our theoretical results, while also showing that classifier actor critic is only an approximate model of practical algorithms, not a perfect model.

# F. Experimental Details

# F.1. Tabular experiments

Implementing critic regularization for classifier actor critic. The objective for critic regularization in classifier actor critic (Eq. 6) is nontrivial to optimize because of the cyclic dependency between the policy and the critic: simply alternating between optimizing the actor and the critic does not converge. In our experiments, we update the critic using an exponential moving average of the policy, as proposed in Wen et al. (2021). We found that this decision was sufficient for ensuring convergence. When applying CQL in the tabular setting (Figures 3 and 4), we did not do this because soft value iteration represents the policy implicitly in terms of the value function.

Fig. 2 (left) The initial state and goal state are located in opposite corners. The reward function is $+1$ for reaching the goal and 0 otherwise. We use a dataset of 20 trajectories, 50 steps each, collected by a random policy. We use $\gamma = 0.95$ and train for 20k full-batch updates, using a learning rate of 1e-2. The Q table is randomly initialized using a standard normal distribution.

Fig. 2 (center) The initial state and goal state are located in adjacent corners. The goal state has a reward of $+3.5$, the states between the initial state and goal state have a reward of $+1$, and all other states (including the initial state) have a reward of $+2$.
We use a dataset of 20 trajectories, 50 steps each, collected by a random policy. We use $\gamma = 0.95$ and train for 20k full-batch updates, using a learning rate of 1e-2. The Q table is randomly initialized using a standard normal distribution.

Fig. 2 (right) The initial state and goal state are located in adjacent corners. The reward is $+0.01$ at the goal state and 0 otherwise. We use a dataset of 1 trajectory with 10 steps, which traces the following path:

$$
[(0, 0), (1, 0), (1, 1), (1, 2), (1, 3), (1, 4), (0, 4), (0, 4), (0, 4), (0, 4)].
$$

We use $\gamma = 0.95$ and train for 10k full-batch updates, using a learning rate of 1e-2. The Q table is randomly initialized using a standard normal distribution.

Fig. 3 There is a bad state (reward of $-10$) next to the optimal state (reward of $+1$), so the behavioral policy navigates away from the optimal state. We generate 10 trajectories of length 100 from a uniform random policy. We use $\gamma = 0.95$ and train each method for 10k full-batch updates. The Q table is randomly initialized using a standard normal distribution. One-step RL performs SARSA updates while CQL performs soft value iteration (as suggested in the CQL paper).

Fig. 4 We generate 100 random variants of Fig. 3 by randomly sampling the high-reward state and low-reward state (without replacement). The datasets are generated in the same way.

Fig. 5 We use the same environment and dataset as in Fig. 3, but train the CQL agent with varying values of $\lambda$, each with 5 random seeds. We train the one-step RL agent for 5 random seeds. For each point on the X axis of Fig. 5, we compute $5 \times 5$ pairwise comparisons and report the mean and standard deviation.

# F.2. Continuous control experiments

For the experiments in Figures 6 and 7, we used the implementation of one-step RL (reverse KL) and CQL provided by Hoffman et al. (2020).
We choose this implementation because it is well tuned and uses similar hyperparameters for the two methods. As mentioned in the main text, the only change we made to the implementation was adding the twin-Q trick to one-step RL, such that it matched the critic architecture used by CQL. We did not change any of the other hyperparameters, including hyperparameters controlling the regularization strength. \ No newline at end of file diff --git a/aconnectionbetweenonesteprlandcriticregularizationinreinforcementlearning/images.zip b/aconnectionbetweenonesteprlandcriticregularizationinreinforcementlearning/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..1e3741998bc40d6bc2c0b70ceb91a15e441838a5 --- /dev/null +++ b/aconnectionbetweenonesteprlandcriticregularizationinreinforcementlearning/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:b35d2cea99ba842541e0af7674a5d8c27c3dc52827cc0fc7d89c8607191ba3d5 +size 1034461 diff --git a/aconnectionbetweenonesteprlandcriticregularizationinreinforcementlearning/layout.json b/aconnectionbetweenonesteprlandcriticregularizationinreinforcementlearning/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..628ec38a07ed61e6dd17d97535ba7be6ea1279f0 --- /dev/null +++ b/aconnectionbetweenonesteprlandcriticregularizationinreinforcementlearning/layout.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:c8f5952de6adeaf0430e1321959a3799c78e877d6504ac3dd895473e46898183 +size 899529 diff --git a/acoupledflowapproachtoimitationlearning/865a27f8-93e6-494a-8b17-c293ff834e3b_content_list.json b/acoupledflowapproachtoimitationlearning/865a27f8-93e6-494a-8b17-c293ff834e3b_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..c02f31b5e43be1853d400aab25c20d4eaaa741b8 --- /dev/null +++ b/acoupledflowapproachtoimitationlearning/865a27f8-93e6-494a-8b17-c293ff834e3b_content_list.json @@ -0,0 +1,3 @@ +version 
https://git-lfs.github.com/spec/v1 +oid sha256:43d664f62fc3d118f521c5a1b574581e7f8041677431849a41dec2488c48c557 +size 101427 diff --git a/acoupledflowapproachtoimitationlearning/865a27f8-93e6-494a-8b17-c293ff834e3b_model.json b/acoupledflowapproachtoimitationlearning/865a27f8-93e6-494a-8b17-c293ff834e3b_model.json new file mode 100644 index 0000000000000000000000000000000000000000..d25b3ec86456c513c0da5d2042e466213cc51dda --- /dev/null +++ b/acoupledflowapproachtoimitationlearning/865a27f8-93e6-494a-8b17-c293ff834e3b_model.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:0f5e121c42ecf7f83fb34a8fd109062780a20fcb811b19f2c0acbc6f519cf2ae +size 125974 diff --git a/acoupledflowapproachtoimitationlearning/865a27f8-93e6-494a-8b17-c293ff834e3b_origin.pdf b/acoupledflowapproachtoimitationlearning/865a27f8-93e6-494a-8b17-c293ff834e3b_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..dd40d19430f348a94754dbc774d5b20a4cc11282 --- /dev/null +++ b/acoupledflowapproachtoimitationlearning/865a27f8-93e6-494a-8b17-c293ff834e3b_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:eda430bea2e0da2871abab0848f7540a360ceb8bd5ff00f9ec770ea2fe0175bc +size 1161441 diff --git a/acoupledflowapproachtoimitationlearning/full.md b/acoupledflowapproachtoimitationlearning/full.md new file mode 100644 index 0000000000000000000000000000000000000000..7d7a93baa4ebb09c5a0db8cac86d53dcc07662e1 --- /dev/null +++ b/acoupledflowapproachtoimitationlearning/full.md @@ -0,0 +1,427 @@ +# A Coupled Flow Approach to Imitation Learning + +Gideon Freund1 Elad Sarafian1 Sarit Kraus1 + +# Abstract + +In reinforcement learning and imitation learning, an object of central importance is the state distribution induced by the policy. It plays a crucial role in the policy gradient theorem, and references to it—along with the related state-action distribution—can be found all across the literature. 
Despite its importance, the state distribution is mostly discussed indirectly and theoretically, rather than being modeled explicitly, owing to an absence of appropriate density estimation tools. In this work, we investigate applications of a normalizing flow-based model for the aforementioned distributions. In particular, we use a pair of flows coupled through the optimality point of the Donsker-Varadhan representation of the Kullback-Leibler (KL) divergence, for distribution matching based imitation learning. Our algorithm, Coupled Flow Imitation Learning (CFIL), achieves state-of-the-art performance on benchmark tasks with a single expert trajectory and extends naturally to a variety of other settings, including the subsampled and state-only regimes.

# 1. Introduction

Reinforcement learning (RL) (Sutton & Barto, 2018) concerns the optimization of an agent's behavior in an environment. Its characterizing difficulties of exploration vs. exploitation and credit assignment stem from its typical incarnations, where the agent must learn from sparse feedback. In order to provoke desired behavior, one may need to craft a sophisticated reward function or provide demonstrations for the agent to imitate. Imitation Learning (IL) deals precisely with the latter: learning from expert demonstrations. Although RL and IL share the same ultimate goal of producing a good policy, they differ fundamentally in that RL is guided by the environment's feedback, while IL is guided

$^{1}$ Department of Computer Science, Bar-Ilan University, Israel. Correspondence to: Gideon Freund .

Proceedings of the $40^{th}$ International Conference on Machine Learning, Honolulu, Hawaii, USA. PMLR 202, 2023. Copyright 2023 by the author(s).

only by the ability of the agent to reproduce the expert's behavior.

Central to both RL and IL are the state and state-action distributions induced by the policy.
Their importance cannot be overstated, with the state distribution forming the basis of policy gradient methods through the policy gradient theorem (Sutton et al., 2000), and the state-action distribution being core to the common distribution matching formulation of IL (Ke et al., 2020; Ghasemipour et al., 2020). They are also foundational to other applications, like curiosity-based exploration (Pathak et al., 2017), constrained RL (Qin et al., 2021) and Batch RL (Fujimoto et al., 2019), some of which have recently been unified under the umbrella of convex RL (Zhang et al., 2020; Zahavy et al., 2021; Mutti et al., 2022), which studies objectives that are convex functions of the state distribution.

Despite their ubiquity across the literature, explicit modeling of the distributions is scarce. Instead, they mostly find use as a theoretical tool for derivations. This is, of course, barring some approaches that do attempt to model them (Hazan et al., 2019; Qin et al., 2021; Lee et al., 2019; Kim et al., 2021b) or their ratios (Nachum et al., 2019; Liu et al., 2018; Gangwani et al., 2020), further discussion of which is delegated to the related work. This lack of modeling is due to the difficulty of density estimation, especially in the case of complex and high-dimensional distributions. This is where normalizing flows (NF) (Dinh et al., 2014; 2016; Papamakarios et al., 2017), a recent approach to density estimation, will be of service.

We believe there is no shortage of applications for such modeling, but we focus on imitation learning, a quite natural and suitable place, given its modern formulation as state-action distribution matching (Ghasemipour et al., 2020).

Many approaches to distribution matching based imitation have been presented (Ho & Ermon, 2016; Fu et al., 2017; Kostrikov et al., 2018; Ke et al., 2020; Ghasemipour et al., 2020; Kostrikov et al., 2019; Sun et al., 2021; Kim et al., 2021b; Dadashi et al., 2020; Schroecker, 2020).
The common theme for such methods begins with the selection of a divergence, followed by the development of a unique approach. This may involve a direct attack on the objective by reformulating it (Kostrikov et al., 2019), or by derivation of a surrogate objective (Zhu et al., 2020; Dadashi et al., 2020; Kim et al., 2021b), with some utilizing mechanisms such as an inverse action model (Zhu et al., 2020) and focusing on learning from states alone (a setting our approach naturally lends itself to). Other methods first derive estimates for the gradient of the state distribution with respect to the policy's parameters (Schroecker, 2020), while some devise unifying algorithms and frameworks encompassing previous approaches (Ke et al., 2020; Ghasemipour et al., 2020).

The most popular divergence of choice is the reverse KL, which some favor due to its mode-seeking behavior (Ghasemipour et al., 2020). Others attempt to get the best of both worlds, combining both mode-seeking and mode-covering elements (Zhu et al., 2020). A priori, it is difficult to say which choice of divergence is advantageous; what matters more is the ensuing approach to its minimization.

In this work, we propose a unique approach to distribution matching based imitation, by coupling a pair of flows through the optimality point of the Donsker-Varadhan (Donsker & Varadhan, 1976) representation of the KL. More specifically, we note that this point occurs at the log distribution ratio, while the IL objective with the reverse KL can be seen as an RL problem whose reward is the inverted ratio. We propose modeling the point of optimality as the difference of two normalizing flows, then training in an alternating fashion akin to other adversarial IL methods (Ho & Ermon, 2016; Kostrikov et al., 2018; Ghasemipour et al., 2020; Kostrikov et al., 2019; Sun et al., 2021). This method proves far more accurate than estimating the log distribution ratio by naively training a pair of flows independently.
We show this in part by analyzing their respective BC graphs: a simple tool we present for gauging how well a proposed estimator captures the expert's behavior. While most IL works neglect analysis of their learned reward function, we think this can be a potential guiding tool for future IL researchers.

Our resulting algorithm, Coupled Flow Imitation Learning (CFIL), shows strong performance on standard benchmark tasks, while extending naturally to the subsampled and state-only regimes. In the state-only regime in particular, CFIL exhibits a significant advantage over prior state-of-the-art work, despite the competition being specifically designed for that domain. This work also aims to inspire more research incorporating explicit modeling of the state-action distribution.

# 2. Background

# 2.1. Markov Decision Process

Environments in RL are typically expressed as a Markov Decision Process (MDP). An MDP is a tuple $(S, A, P, R, p_0, \gamma)$, where $S$ is a set of states termed the state space, $A$ is a set of actions termed the action space, $P: S \times A \times S \to [0, \infty)$ is a transition density function describing the environment's Markovian dynamics, $R: S \times A \times S \to \mathbb{R}$ is a reward function, $p_0$ is an initial state distribution, and $\gamma \in [0,1)$ is a discount factor.

A policy $\pi : S \to A$ dictates an agent's actions when roaming the environment, and the goal in an MDP is to find the optimal one, denoted $\pi^{*}$. The standard optimality criterion for a policy is the expected discounted reward: $J(\pi, r) = \mathbb{E}_{\tau \sim \pi}[\sum_{t=0}^{\infty} \gamma^{t} r_{t}]$, where $\tau \sim \pi$ symbolizes the trajectory distribution induced by the policy $\pi$, and $r_{t}$ is the reward at time $t$ along a trajectory.
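As a small illustrative aside (ours, not the paper's), the discounted-return criterion $J(\pi, r)$ for a single sampled trajectory of rewards can be accumulated back-to-front:

```python
def discounted_return(rewards, gamma=0.99):
    """sum_t gamma^t * r_t for one trajectory, accumulated backwards:
    G_t = r_t + gamma * G_{t+1}, so the final G is the discounted return."""
    g = 0.0
    for r in reversed(rewards):
        g = r + gamma * g
    return g

# Three unit rewards with gamma = 0.5: 1 + 0.5 + 0.25 = 1.75.
print(discounted_return([1.0, 1.0, 1.0], gamma=0.5))  # prints 1.75
```

In practice the expectation over trajectories is estimated by averaging this quantity over many rollouts.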
Each policy has an associated value function and Q-function: $V^{\pi}(s) = \mathbb{E}_{\pi}[\sum_{t=0}^{\infty} \gamma^{t} r_{t}|s_{0} = s]$ and $Q^{\pi}(s, a) = \mathbb{E}_{\pi}[\sum_{t=0}^{\infty} \gamma^{t} r_{t}|s_{0} = s, a_{0} = a]$ , where the value function represents the expected discounted reward of following $\pi$ from state $s$ , while the Q-function represents the expected discounted reward of following $\pi$ after taking action $a$ from state $s$ . + +Another object induced by the policy is the improper discounted state distribution $d_{\pi}^{\gamma}(s) = \sum_{t=0}^{\infty} \gamma^{t} Pr(s_{t} = s | s_{0} \sim p_{0})$ , as well as its undiscounted counterpart, $d_{\pi}(s) = \sum_{t=0}^{\infty} Pr(s_{t} = s | s_{0} \sim p_{0})$ , in which we take greater interest for reasons described in Appendix C. $d_{\pi}(s)$ is called the state distribution and is of central interest to this work. Closely related to $d_{\pi}(s)$ is the state-action distribution $p_{\pi}(s, a) = \pi(a | s) d_{\pi}(s)$ , which is also of interest to this work. + +# 2.2. Normalizing Flows + +Normalizing Flows (NF) (Dinh et al., 2014; 2016) are exact-likelihood generative models that use a simple base distribution $p_Z(z)$ and a sequence of invertible and differentiable transformations $f_{\theta} = f_{k}\circ f_{k - 1}\circ \dots \circ f_{1}$ to model an arbitrary target distribution $p_X(x)$ as $z = f_{\theta}(x)$ . This means the target log-likelihood can be written using the change of variables formula: + +$$ +\log p _ {X} (x) = \log p _ {Z} (z) + \sum_ {i = 1} ^ {k} \log \left| \det \frac {d f _ {i}}{d f _ {i - 1}} \right| \tag {1} +$$ + +allowing for parameters $\theta$ to be trained using maximum likelihood. With a trained flow at hand, generation is performed by sampling from the base distribution and applying $f_{\theta}^{-1}$ , and density estimation can be done by evaluating the RHS of (1). 
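To make Equation 1 concrete, consider a minimal one-layer flow (our toy example, not an architecture from the paper): an elementwise affine map $z_i = (x_i - \mu_i)/s_i$ with a standard-normal base, for which the change-of-variables log-density reduces to a diagonal-Gaussian log-likelihood:

```python
import math

def affine_flow_logprob(x, mu, scale):
    """log p_X(x) = log p_Z(z) + log|det dz/dx| for the affine flow
    z_i = (x_i - mu_i) / scale_i with a standard-normal base.
    The Jacobian is diagonal, so log|det dz/dx| = -sum_i log scale_i."""
    logp = 0.0
    for xi, mi, si in zip(x, mu, scale):
        z = (xi - mi) / si
        logp += -0.5 * (z * z + math.log(2.0 * math.pi))  # base log-density per dim
        logp += -math.log(si)                             # log-det term per dim
    return logp
```

By construction this equals the log-density of a diagonal Gaussian, which makes the bookkeeping easy to sanity-check; stacking expressive invertible layers (as RealNVP and MAF do) only adds further log-det terms to the same sum.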
RealNVP (Dinh et al., 2016), which is of particular interest to us, uses coupling layers (not to be confused with our coupled approach) to construct $f_{\theta}$. Masked Autoregressive Flow (MAF) (Papamakarios et al., 2017) is a generalization of RealNVP which substitutes its coupling layers with autoregressive ones. While improving expressiveness, this impedes single-pass sampling, but still allows efficient density evaluation—our main concern—using masked autoencoders (Germain et al., 2015). RealNVP and MAF are reviewed more thoroughly in Appendix D.

Although much research (Kingma & Dhariwal, 2018; Huang et al., 2018; Durkan et al., 2019; Ho et al., 2019) has gone into improving upon MAF's capacity, its simplicity, efficiency and adequacy for our task make it the architecture used in this work to model the state and state-action distributions $d_{\pi}(s)$ and $p_{\pi}(s,a)$.

# 2.3. Imitation Learning

Imitation Learning (IL) concerns the optimization of an agent's behavior in an environment, given expert demonstrations. Perhaps the simplest approach to imitation, behavioral cloning (BC), performs supervised regression or maximum likelihood on given expert state-action pairs $\{(s_t,a_t)\}_{t = 1}^N$:

$$
\min _ {\pi} \sum_ {t = 1} ^ {N} \mathrm {Loss} (\pi (s _ {t}), a _ {t}) \quad \text {or} \quad \max _ {\pi} \sum_ {t = 1} ^ {N} \log \pi (a _ {t} | s _ {t}) \tag {2}
$$

for deterministic and stochastic policies, respectively. BC suffers from compounding errors and distributional shift, wherein unfamiliar states cause the policy to misstep, leading to even less familiar states and an eventual complete departure from the expert trajectory (Ross et al., 2011).

Recent distribution matching (DM) approaches (Ho & Ermon, 2016; Kostrikov et al., 2018; 2019; Kim et al., 2021b) successfully overcome these issues.
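As a toy illustration of the regression form of Equation 2 (our sketch; the data and linear policy class are hypothetical), behavioral cloning with a deterministic policy reduces to gradient descent on a squared-error loss over expert pairs:

```python
import numpy as np

def bc_step(W, states, actions, lr=0.1):
    """One behavioral-cloning step: minimize mean ||pi(s) - a||^2 over
    expert pairs for the linear policy pi(s) = W s."""
    pred = states @ W.T                                     # (N, act_dim)
    grad = 2.0 * (pred - actions).T @ states / len(states)  # d(mean sq loss)/dW
    return W - lr * grad

rng = np.random.default_rng(0)
W_expert = rng.normal(size=(2, 3))   # hypothetical expert policy
S = rng.normal(size=(64, 3))         # expert states
A = S @ W_expert.T                   # corresponding expert actions
W = np.zeros((2, 3))
for _ in range(200):
    W = bc_step(W, S, A)
print(np.abs(W - W_expert).max())    # should be close to 0
```

Of course, this succeeds only on states resembling the training data; the compounding-error discussion above is precisely about what happens once the learned policy drifts off that support.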
One common formulation (Ke et al., 2020; Ghasemipour et al., 2020), which encompasses most of these methods, views DM as an attempt to match the agent's state-action distribution $p_{\pi}$ with the expert's $p_e$ by minimizing some f-divergence $D_f$:

$$
\underset {\pi} {\arg \min } D _ {f} \left(p _ {\pi} \mid \mid p _ {e}\right). \tag {3}
$$

These methods hinge on the one-to-one relationship between a policy and its state-action distribution and have shown significant improvement over BC, particularly when few expert trajectories are available (Ghasemipour et al., 2020) or expert trajectories are subsampled (Li et al., 2022).

# 3. Related Work

Attempts at modeling the state distribution have been made for various applications, and countless distribution matching approaches to imitation learning exist, with varying degrees of relevance to our own. The most pertinent works—those intersecting both the former and the latter—are reviewed in the most detail towards the end of this section.

Some general attempts to model the state distribution include (Hazan et al., 2019), who use discretization and kernel density estimates (KDEs) for curiosity-based exploration, while (Lee et al., 2019) use variational autoencoders (VAEs) in the same context. VAEs have also been used by (Islam et al., 2019) to model $d_{\pi}(s)$ for constraining the distribution shift in off-policy RL algorithms, and KDEs have also been used by (Qin et al., 2021) for density-constrained reinforcement learning.

Approaches to distribution matching-based imitation include GAIL (Ho & Ermon, 2016), which uses a GAN-like (Goodfellow et al., 2014) objective to minimize the JS divergence.
Originally born out of max-entropy inverse reinforcement learning (Ziebart et al., 2008), GAIL paved the way for later works such as AIRL (Fu et al., 2017), which modified GAIL's objective to allow reward recovery; DAC (Kostrikov et al., 2018), an off-policy extension of GAIL also addressing reward bias; and general f-divergence approaches like (Ke et al., 2020; Ghasemipour et al., 2020). Other works include the DICE (Nachum et al., 2019) family, comprised of ValueDICE (Kostrikov et al., 2019) and its successors SoftDICE (Sun et al., 2021), SparseDICE (Camacho et al., 2021) and DemoDICE (Kim et al., 2021a). Using the Donsker-Varadhan representation along with a change of variables, ValueDICE derives a fully off-policy objective from the reverse KL, completely drubbing BC in the offline regime. Its successors are of tangential relevance here, each augmenting it for a different domain: SoftDICE makes various amendments, notably using the Wasserstein metric (also used by PWIL (Dadashi et al., 2020)); SparseDICE adds a host of regularizers to extend ValueDICE to subsampled trajectories; and DemoDICE focuses on imitation with supplementary imperfect demonstrations. We note the apparent advantages of the DICE family over BC in the fully offline regime have recently been questioned (Li et al., 2022).

Our exploitation of Donsker-Varadhan was certainly inspired by ValueDICE and is somewhat reminiscent of MINE (Belghazi et al., 2018), which utilizes it to estimate mutual information with a regular neural estimator. Their context and precise method, however, differ substantially. Another related work is the often overlooked (Schroecker, 2020), which includes three approaches: SAIL, GPRIL and VDI (Schroecker & Isbell, 2017; Schroecker et al., 2019; Schroecker & Isbell, 2020). All three attack the IL problem via the forward KL.
Essentially, the forward KL is broken into a behavioral cloning term as well as a state distribution term, and optimizing the objective then requires an estimate of $\nabla_{\theta}\log d_{\pi_{\theta}}(s)$. SAIL, GPRIL and VDI each propose a unique way of estimating this gradient, with GPRIL and VDI incorporating flows. (Chang et al., 2022) also utilize flows, proposing a state-only IL approach that models the state–next-state transition using conditional flows, but require dozens of expert trajectories to find success.

Finally, the most similar work is NDI (Kim et al., 2021b), which rewrites the reverse KL as $-D_{KL}(p_{\pi}||p_e) = \mathbb{E}_{p_{\pi}}[\log p_e - \log p_{\pi}] = J(\pi ,r = \log p_e) + \mathcal{H}(p_{\pi})$. That is, RL with reward $\log p_{e}$, along with a state-action entropy term. They continue by deriving a lower bound on $\mathcal{H}(p_{\pi})$, termed the SAELBO. NDI+MADE is then proposed, where in a first phase $\log p_{e}$ is estimated using flows, followed by a second phase of optimization.

NDI has many limitations and differs dramatically from our approach. NDI claims to be non-adversarial; while true, this is only due to their loose bound. Moreover, this bound was shown to be superfluous in their own ablation, where setting $\lambda_{f} = 0$ caused no reduction in performance. On top of that, we far outperform them; our evaluation is far more comprehensive; and although we both use flows, the method and its employment differ significantly in CFIL, given the coupling of a pair through Donsker-Varadhan.

Criticism aside, each of the methods above has strong merits and heavily inspired our own: we adopted the use of MAF over RealNVP from NDI+MADE (our work began in ignorance of NDI), and it was studying ValueDICE that inspired our direction to exploit the Donsker-Varadhan representation of the KL. The way in which we do so, however, is unique, and all the approaches above are distinct from our own.

# 4.
Our Approach

We begin with the reverse KL as our divergence of choice since, noting as others have (Kostrikov et al., 2019; Hazan et al., 2019; Kim et al., 2021b; Camacho et al., 2021), its minimization may be viewed as an RL problem whose rewards are the log distribution ratios:

$$
\begin{array}{l} \underset {\pi} {\arg \min } D _ {K L} (p _ {\pi} | | p _ {e}) = \underset {\pi} {\arg \max } \mathbb {E} _ {p _ {\pi} (s, a)} \left[ \log \frac {p _ {e} (s , a)}{p _ {\pi} (s , a)} \right] \\ = \arg \max _ {\pi} J (\pi , r = \log \frac {p _ {e}}{p _ {\pi}}). \tag {4} \\ \end{array}
$$

This permits the use of any RL algorithm to solve the IL objective, provided one has an appropriate estimate of the ratio.

Before continuing, however, we first motivate our coupled approach to such estimation by illustrating the failure of what is perhaps a more natural next step: using two independent density estimators—say flows—for each of the densities $p_e$ and $p_{\pi}$ directly. Practically, this would mean alternating between learning the flows and using their log-ratio as reward in an RL algorithm. The table in Section 6 concisely showcases a clear failure of this approach on all the standard benchmarks we later evaluate with.

The failure can be further understood by analyzing what we term the BC graph corresponding to an estimator of the log distribution ratio. That is, we first train a behavioral cloning agent once on sufficient expert trajectories, while rolling it out every few iterations. We then train the estimator analogously to an RL run, using a single expert trajectory along with the $N$ saved BC rollouts. This quick process yields $N$ estimators corresponding to $N$ intermediate BC agents. The BC graph is then, for all $i$, the $i$'th estimator's evaluation of an $i$'th BC agent's trajectory, scattered against the true environment reward of that same trajectory.

![](images/c2ccf1a2e3d1bd77e3e9c5bc5f2287eb3346d7566ab14ef98e6ea3b23a690d6c.jpg)

![](images/35c4d53d132ea0730e839cdb0e74970b2e2ba4fae1f96a5b1c4e977ff43e94ed.jpg)

Figure 1. Left: The BC graph of an uncoupled flow for the HalfCheetah-v2 environment. Right: The BC graph of a coupled flow for the HalfCheetah-v2 environment. BC graphs for an estimator are generated by updating the estimator analogously to an RL run using $N$ saved BC rollouts. This yields $N$ estimators corresponding to $N$ intermediate BC agents. The BC graph is then, for all $i$, the scatter of the $i$'th estimator's evaluation of an $i$'th BC agent's trajectory against its true environment reward.

Intuitively, a non-increasing graph means that a potential RL learner with an analogously trained reward may struggle to overcome the misleading behaviors ranked high by the synthetic reward, preventing it from reaching expert level. In practice, of course, one would want some form of approximate monotonicity or upward trend. Importantly, though, a BC graph's monotonicity by no means implies the success of an RL learner with the correspondingly constructed reward. This is more than a theoretical idiosyncrasy: many estimators will emit a perfect BC graph while completely failing all attempts in RL (see Appendix F.1). Only the reverse holds: a complete lack of an upward trend will usually imply an agent's failure.

Loosely formalized, given BC's objective in Equation 2 and assuming a stationary (time-independent) reward function $r$, a monotonically increasing BC graph essentially means that for all policies $\pi_1, \pi_2$: $J(\pi_1, r) > J(\pi_2, r)$ if $\mathbb{E}_{p_e}[\log \pi_1(a|s)] > \mathbb{E}_{p_e}[\log \pi_2(a|s)]$. Thus, further assuming continuity, a graph that is not monotonically increasing either monotonically decreases or contains a local maximum. In both cases, an RL learner with objective $\arg \max_{\pi} J(\pi, r)$ may converge on a policy with suboptimal BC loss.
Since BC recovers the expert policy only at optimality, this would guarantee that the agent will not meet its truly intended goal of expert mimicry. Of course, in reality, RL can overcome a certain lack of an upward trend. Moreover, the rewards are neither stationary nor identical between the BC and RL runs, only analogously constructed, so such graphs are only loosely representative. Nonetheless, we find they can be highly insightful.

As Figure 1 suggests, the independently trained flows' BC graph is quite lacking. According to the synthetic reward, an agent would have no incentive to make any progress, which is precisely what occurs, as the table in Section 6 demonstrates. This poor BC graph is due in part to each flow being evaluated on data completely out of its distribution (OOD), which flows are known to struggle with (Kirichenko et al., 2020). Since the two flows' estimates lack true meaning when evaluated on each other's data, we need to tie them together somehow: they must be coupled.

To perform our coupling, we employ the Donsker-Varadhan (Donsker & Varadhan, 1976) form of the KL divergence:

$$
\begin{array}{l} D _ {K L} (p _ {\pi} | | p _ {e}) = \\ \sup _ {x: S \times A \rightarrow \mathbb {R}} \mathbb {E} _ {p _ {\pi} (s, a)} [ x (s, a) ] - \log \mathbb {E} _ {p _ {e} (s, a)} \left[ e ^ {x (s, a)} \right]. \tag {5} \\ \end{array}
$$

In the above, optimality occurs at $x^{*} = \log \frac{p_{\pi}}{p_{e}} + C$ for any $C \in \mathbb{R}$ (Kostrikov et al., 2019; Gangwani et al., 2020; Belghazi et al., 2018). Thus, after computing $x^{*}$, one recovers the log distribution ratio by simple negation, enabling use of $-x^{*}$ as reward in an RL algorithm to optimize our IL objective. This leads directly to our proposed approach for estimating the log distribution ratio: coupling two flows through $x(s,a)$. That is, instead of training two flows independently, we propose to do so through maximization of Equation 5.
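Concretely, the maximization over $x$ can be carried out over the difference of two flow log-densities. The following is a minimal numerical sketch (our own illustration, not the paper's code), with the flow log-probabilities assumed precomputed as arrays; the default squasher matches the $\sigma = 6\tanh(x/15)$ reported later in the experiments:

```python
import numpy as np

def dv_loss(x_expert, x_agent):
    """Donsker-Varadhan-style minimization objective for the coupled flows:
    J = log (1/M) sum_i exp(x(s_e^i, a_e^i)) - (1/M) sum_i x(s^i, a^i).
    Minimizing J over the flow parameters drives x toward the log ratio
    log(p_pi / p_e) (up to a constant); the RL reward is then r = -x."""
    return np.log(np.mean(np.exp(x_expert))) - np.mean(x_agent)

def coupled_x(logp_psi, logq_phi, squash=lambda v: 6.0 * np.tanh(v / 15.0)):
    """x(s, a) as the squashed difference of two flow log-densities."""
    return squash(logp_psi - logq_phi)

# When agent and expert batches are indistinguishable and x is identically
# zero, the loss is log(1) - 0 = 0, matching the KL lower bound of zero.
print(dv_loss(np.zeros(8), np.zeros(8)))  # prints 0.0
```

In a full run, `dv_loss` would be differentiated with respect to the flow parameters by an autodiff framework, alternating with RL steps that consume `-coupled_x(...)` as reward.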
More specifically, we inject the following inductive bias, modeling $x$ as $x_{\psi,\phi}(s,a) = \log p_{\psi}(s,a) - \log q_{\phi}(s,a)$, where $p_{\psi}$ and $q_{\phi}$ are normalizing flows.

This coupling guarantees more meaningful values when the flows are evaluated on each other's data, since such evaluation has already occurred during the maximization phase, hence sidestepping the OOD issue described earlier. Figure 1 illustrates this advantage. The right panel shows the coupled flows' BC graph, clearly remedying the issue with their uncoupled counterparts: a potential learner will now have the proper incentive to reproduce the expert's behavior.

The drop in synthetic reward (i.e. $-x^{*}$) towards the end of the BC graph may seem daunting, but it actually expresses how well our estimator captures the expert's behavior: the drop occurs precisely beyond expert level, where the agent, while good in the environment, diverges from the true expert's behavior.

Given this improved estimator, our full IL objective can then be written as:

$$
\underset {\pi} {\arg \max } \underset {p _ {\psi}, q _ {\phi}} {\min } \log \mathbb {E} _ {p _ {e} (s, a)} \left[ e ^ {\log \frac {p _ {\psi}}{q _ {\phi}}} \right] - \mathbb {E} _ {p _ {\pi} (s, a)} \left[ \log \frac {p _ {\psi}}{q _ {\phi}} \right]. \tag {6}
$$

As is commonplace in adversarial-like approaches (Goodfellow et al., 2014; Ho & Ermon, 2016; Kostrikov et al., 2019), the max-min objective above is trained in an alternating fashion, switching between learning $x$ and using $-x$ as reward in an RL algorithm. Moreover, we find it useful to apply a squashing function to $x$, avoiding a potentially exploding KL due to a lack of common support (subtly different from the earlier issue of OOD).

Our approach still enables training each flow independently along the way. We call this flow regularization; importantly, our method succeeds without it.
More specifically, for expert and agent batches of size $M$, this regularization involves incorporating the following additional loss function into the minimization step of Equation 6:

$$
\mathcal {L} = - \frac {1}{M} \sum_ {i = 1} ^ {M} \left[ \log q _ {\phi} \left(s _ {e} ^ {i}, a _ {e} ^ {i}\right) + \log p _ {\psi} \left(s ^ {i}, a ^ {i}\right) \right], \tag {7}
$$

with $\mathcal{L}$ weighted by a coefficient $\alpha$.

Noting that training flows is a delicate process, our approach further benefits from—though again does not require—the use of a smoothing akin to the dequantization used when training normalizing flows on discrete data (Papamakarios et al., 2017). More specifically, since our input is unnormalized, we smooth each dimension with uniform noise scaled to its value. That is, if $(s,a)$ is the vector to be smoothed, we sample uniform noise of dimension $\dim((s,a))$, multiply the two element-wise and add the result to the original vector:

$$
(s, a) \gets (s, a) + \beta \cdot (s, a) \odot u, \quad u \sim \mathrm {Uniform} \left(- \frac {1}{2}, \frac {1}{2}\right) ^ {\dim ((s, a))}, \tag {8}
$$

with weight $\beta$ controlling the smoothing level. Note that if regularization is also present, smoothing still applies within the additional loss $\mathcal{L}$.

Finally, combining all the above is our resulting algorithm, Coupled Flow Imitation Learning (CFIL). It is summarized in Algorithm 1, with only the number of batches per density update omitted. As in ValueDICE (Kostrikov et al., 2019),

# Algorithm 1 CFIL

Input: Expert demos $\mathcal{R}_E = \{(s_e,a_e)\}_{t = 1}^N$; parameterized flow pair $p_{\psi},q_{\phi}$; off-policy RL algorithm $\mathcal{A}$; density update rate $k$; squashing function $\sigma$; regularization and smoothing coefficients $\alpha, \beta$.

Define: $x_{\psi ,\phi} = \sigma (\log p_{\psi} - \log q_{\phi})$

1: for timestep $t = 0, 1, \ldots$ do
2: Take a step in $\mathcal{A}$ with reward $r = -x_{\psi, \phi}$, while filling agent buffer $\mathcal{R}_A$ and potentially updating the policy and value networks according to $\mathcal{A}$'s settings.
3: if $t \bmod k = 0$ then
4: Sample expert and agent batches:
5: $\quad \{(s_e^i, a_e^i)\}_{i=1}^{M} \sim \mathcal{R}_E$ and $\{(s^i, a^i)\}_{i=1}^{M} \sim \mathcal{R}_A$
6: if smooth then
7: $\quad (s, a) \gets (s, a) + \beta \cdot (s, a) \odot u$, $u \sim U(-\frac{1}{2}, \frac{1}{2})^{\dim((s, a))}$
8: end if
9: Compute loss:
10: $\quad \mathcal{J} = \log \frac{1}{M} \sum_{i=1}^{M} e^{x(s_e^i, a_e^i)} - \frac{1}{M} \sum_{i=1}^{M} x(s^i, a^i)$
11: if flow reg then
12: Compute regularization loss:
13: $\quad \mathcal{L} = -\frac{1}{M} \sum_{i=1}^{M} \left[\log q_{\phi}(s_e^i, a_e^i) + \log p_{\psi}(s^i, a^i)\right]$
14: $\quad \mathcal{J} = \mathcal{J} + \alpha \mathcal{L}$
15: end if
16: Update $\psi \gets \psi - \eta \nabla_{\psi} \mathcal{J}$
17: Update $\phi \gets \phi - \eta \nabla_{\phi} \mathcal{J}$
18: end if
19: end for

we found that the bias due to the log-exp over the mini-batches did not hurt performance and was therefore left unhandled.

Another setting of interest is learning from observations (LFO) alone (Zhu et al., 2020; Torabi et al., 2018; 2021). That is, attempting to minimize:

$$
\underset {\pi} {\arg \min } D _ {K L} \left(d _ {\pi} \left(s, s ^ {\prime}\right) \| d _ {e} \left(s, s ^ {\prime}\right)\right). 
\tag {9}
$$

While this objective is clearly underspecified in a non-injective and deterministic MDP, in practice, recovering the expert's behavior is highly feasible (Zhu et al., 2020). Seeing as none of our description above is specific to the domain of states and actions, CFIL naturally extends to LFO without modification. This is in stark contrast to previous works in the LFO setting, which have been highly tailored (Zhu et al., 2020). We shall demonstrate CFIL's utility in the LFO setting in the following section, where, remarkably, we even find success when the flows model the single-state distribution $d(s)$.

# 5. Experiments

We evaluate CFIL on the standard Mujoco benchmarks (Todorov et al., 2012), first comparing it to state-of-the-art imitation methods, including ValueDICE (Kostrikov et al., 2019) and their optimized implementation of DAC (Kostrikov et al., 2018), along with a customary behavioral cloning (BC) baseline. We then move to evaluation on a variety of other settings described below. We use ValueDICE's original expert demonstrations, with the exception of the Humanoid environment, for which we train our own expert, since they did not originally evaluate on it. We use ValueDICE's open-source implementation to comfortably run all three baselines. NDI (Kim et al., 2021b) would be the ideal candidate for comparison, given the similarities; however, no code was available. Still, we reference some relevant results described in their paper.

For CFIL, our RL algorithm of choice is SpinningUp's (Achiam, 2018) SAC (Haarnoja et al., 2018). We leave all hyper-parameters unchanged, only reducing the start steps to 2000, matching that of the baselines above. Our choice for both flows is a single-layered MAF (Papamakarios et al., 2017), amending no hyper-parameters from the following open-source implementation (Bliznashki, 2019). This lack of tuning highlights the robustness of our method.
Our density update rate is 10 batches of 100, every 1000 timesteps. We use the Adam optimizer (Kingma & Ba, 2014) with a learning rate of 0.001. For squashing we use $\sigma = 6\tanh\left(\frac{x}{15}\right)$, while the smoothing and regularization coefficients are 0.5 and 1, respectively.

For all algorithms, we run 80 epochs, each consisting of 4000 timesteps, evaluating over 10 episodes after each. We do this across 5 random seeds and plot means and standard deviations. The plots are smoothed by a rolling mean with window length 5. All results use a single expert trajectory. Appendix E shows CFIL's performance with more expert trajectories.

The top of Figure 2 shows a comparison to the baselines in the standard state-action setting. Clearly, CFIL achieves expert-level performance on all tasks with merely a single expert trajectory—something the baselines fail to do. CFIL either outperforms or performs similarly to the competition in terms of asymptotic performance (though slight outperformance is not meaningful once all are at expert level), with a massive advantage in the Ant and Humanoid environments.

In terms of environment interactions required until reaching expert level, our approach slightly lags behind the optimized DAC and largely lags behind ValueDICE. That is, of course, only for the tasks they succeed on. ValueDICE—where it succeeds—clearly wins in terms of interactions needed to reach expert level, though its performance then occasionally degrades, as seen in Figure 2. Being semi-offline (i.e., it can be instantiated fully offline), ValueDICE is difficult to compete with in terms of environment interactions. However, minimizing interactions was not the objective of this work, and still we are on par.

![](images/15c961b3c82468f8d92679d69dea913a4540a387fdd42181811212a16d5a7cab.jpg)
Figure 2. Top: A comparison of CFIL to ValueDICE and DAC on a single expert trajectory in the standard state-action setting.
Bottom: A comparison of two versions of CFIL to OPOLO on a single expert trajectory in the LFO setting, with one version limiting itself only to single states. CFIL uses identical hyperparameters on all environments in all three incarnations, showing outstanding results, particularly in the LFO setting, where it far outperforms the highly tailored competitor OPOLO.

Despite not being our aim, we do note that CFIL far outperforms NDI, which requires an order of magnitude more interactions (see their appendix), which in turn outperforms GAIL (Ho & Ermon, 2016) by another order of magnitude. Clearly, CFIL is still very impressive in terms of environment interactions, and the lag between us and DAC may simply be due to a lack of tuning of CFIL's RL component. We deliberately avoid this tuning, preferring to keep the RL algorithm a black box, since too much tuning may jeopardize robustness and extendability to other settings: we aspire to robust competitiveness rather than a tailored, minor and frail improvement.

We now turn to the state-only and subsampled regimes, settings in which ValueDICE finds no dice: by its very nature, ValueDICE is inapplicable to the state-only scenario (see Appendix 9.8 of (Zhu et al., 2020)), and extensive experiments have already shown its failure when demonstrations are subsampled (Li et al., 2022). SparseDICE (Camacho et al., 2021) attempted to remedy this by adding a series of regularizers. However, they require at least 10 demonstrations, each subsampled at a rate of 0.1, not nearly the sparsity of the settings we next describe.

First, in the state-only regime, we compare against OPOLO (Zhu et al., 2020), a state-of-the-art LFO method. We avoid comparison with on-policy LFO methods, like GAIfO (Torabi et al., 2018), due to the sheer number of interactions they require. We use OPOLO's open-source implementation, with the same setup described previously, once again using only a single expert trajectory.
Importantly, CFIL uses identical hyperparameters to the previous setting.

The bottom of Figure 2 shows two versions of CFIL compared with OPOLO. One estimates the state–next-state distribution ratio, while the other limits itself to the single-state distribution. As can be seen, both incarnations of CFIL once again achieve expert-level performance on all environments. This is in sharp contrast to OPOLO which, despite being highly tailored towards the LFO setting, employing inverse action models, discriminators and a host of practical considerations, struggles immensely when provided with a single expert trajectory (they originally report success with 4 trajectories). All the while, CFIL requires no amendment whatsoever, extending simply and elegantly, while distinctly outdoing OPOLO on every imitation learning metric. CFIL's results in the LFO setting are nothing short of remarkable. We effectively set a new state of the art for learning from observations alone, while our method was not originally designed for it.

We finally turn to the subsampled regime, comparing again to DAC with a similar setup as before. Our comparison here involves four different subsampling rates, sampling every 10th, 20th, 50th and 100th transition. Since we are still working with a single expert trajectory, the final rate implies learning from 10 state-action pairs alone. For this setting, we found the hyperparameters used above did not extend well enough. Specifically, the use of flow regularization seemed to hurt performance in some environments. We therefore drop the regularization to 0, leaving the smoothing at 0.5 and amending the squasher to be $\sigma = 3\tanh\left(\frac{x}{10}\right)$. We use these parameters for all environments in all four new settings, with the exception of HalfCheetah, which, when subsampled,

![](images/33409b1e7158da75d9a60f0e9ad4e3181a582ffa1ffa99eaad6429c7b660cd72.jpg)
Figure 3.
A comparison of CFIL and DAC on four subsampling rates with a single expert trajectory. Subsample $N$ refers to sampling every $N$ 'th transition in the trajectory. + +actually struggles without the flow regularization. + +Figure 3 shows the comparison in the subsampled setting, with CFIL once again demonstrating stellar performance. While DAC has the advantage in the Cheetah environment, where single crashing seeds significantly reduced the apparent performance of CFIL, CFIL still generally outperforms it, remarkably able to recover the expert behavior when only provided with a measly 10 state-action pairs. + +# 6. Ablation + +We now concisely ablate various aspects of our approach. We first put into question the need for our squasher, our coupling and our inductive bias, by comparing to NoSquash; IndFlow and IndFlowNS; and RegularNet. NoSquash is CFIL stripped of the squasher. IndFlow and IndFlowNS refer to learning the log distribution ratio directly using independent flows (as in section 4), with and without a squasher, respectively. RegularNet alters CFIL's inductive bias by setting $x$ to a regular MLP, completely avoiding the use of flows (reminiscent of MINE (Belghazi et al., 2018)). Along with these we also run a numerator only approach termed Numerator, which simply involves direct learning of the expert's distribution with a single flow, then using it alone as reward in an RL algorithm: $J(\pi, r = \log p_e)$ (akin to NDI (Kim et al., 2021b) with $\lambda_f = 0$ ). + +We run these with our usual setup of a single expert trajectory and compute their overall score as the average normalized asymptotic reward over all 25 seeds (5 for each environment). Table 1 summarizes these results, showing all the CFIL alternatives fail, demonstrating the necessity of its components. + +Next we vary CFIL's smoothing and regularization coef- + +Table 1. A comparison of CFIL to some alternatives, ablating its squasher, coupling and inductive bias. 
The score is the average normalized asymptotic reward over 25 seeds (5 for each environment). Note: rows with two values indicate runs with smoothing of 0 (left) and 0.5 (right). + +
| | SCORE |
| --- | --- |
| EXPERT | 1 |
| CFIL | 1.012 |
| NOSQUASH | -0.091 |
| REGULARNET | 0.196 / 0.190 |
| INDFLOW | 0.158 / 0.127 |
| INDFLOWNS | 0.090 / 0.072 |
| NUMERATOR | -0.051 / -0.001 |
+ +![](images/13c85d5abfc2d2f9d54dcc87591343def1e9430acedcb9f950690af37b2e416f.jpg) +Figure 4. Averages (as color) and standard deviations (as size) of normalized asymptotic rewards for CFIL with varied levels of smoothing and regularization. Each point summarizes 25 seeds (5 per environment). The utility of the smoothing and regularization is apparent as well as CFIL's lack of sensitivity to them. + +ficients to test its sensitivity. Specifically, we run CFIL (that of the three settings in Figure 2), but with pairs $\alpha, \beta \in \{0, 0.25, 0.5, 0.75, 1\}$ , along with select others, and compute their averages and standard deviations of normalized asymptotic rewards over all 25 seeds. Figure 4 illustrates the results for these runs, showcasing both the utility of the smoothing and regularization as well as CFIL's robustness to them. + +# 7. Conclusion + +We presented CFIL, a unique approach to imitation learning based on the coupling of a pair of flows. CFIL introduced many novelties including its estimator for the log ratio, its smoothing and regularization and more generally its employment of flows, while we also performed a unique analysis using BC graphs and raised concerns about trends in relevant literature. Crucially, CFIL was empirically shown to outperform state-of-the-art baselines in a variety of settings with only a single expert trajectory. A future work could include coupled flows for general ratio estimation. + +# References + +Achiam, J. Spinning Up in Deep Reinforcement Learning. 2018. +Belghazi, M. I., Baratin, A., Rajeswar, S., Ozair, S., Bengio, Y., Courville, A., and Hjelm, R. D. Mine: mutual information neural estimation. arXiv preprint arXiv:1801.04062, 2018. +Bliznashki, K. https://github.com/kamenbliznashki/normalizing_flow, 2019. +Camacho, A., Gur, I., Moczulski, M. L., Nachum, O., and Faust, A. Sparsedice: Imitation learning for temporally sparse data via regularization. 
In ICML 2021 Workshop on Unsupervised Reinforcement Learning, 2021.
Chang, W.-D., Higuera, J. C. G., Fujimoto, S., Meger, D., and Dudek, G. Il-flow: Imitation learning from observation using normalizing flows. arXiv preprint arXiv:2205.09251, 2022.
Dadashi, R., Hussenot, L., Geist, M., and Pietquin, O. Primal wasserstein imitation learning. arXiv preprint arXiv:2006.04678, 2020.
Dinh, L., Krueger, D., and Bengio, Y. Nice: Non-linear independent components estimation. arXiv preprint arXiv:1410.8516, 2014.
Dinh, L., Sohl-Dickstein, J., and Bengio, S. Density estimation using real nvp. arXiv preprint arXiv:1605.08803, 2016.
Donsker, M. D. and Varadhan, S. S. Asymptotic evaluation of certain Markov process expectations for large time, III. Communications on Pure and Applied Mathematics, 29(4):389-461, 1976.
Durkan, C., Bekasov, A., Murray, I., and Papamakarios, G. Neural spline flows. In Advances in Neural Information Processing Systems, pp. 7511-7522, 2019.
Fu, J., Luo, K., and Levine, S. Learning robust rewards with adversarial inverse reinforcement learning. arXiv preprint arXiv:1710.11248, 2017.
Fujimoto, S., Van Hoof, H., and Meger, D. Addressing function approximation error in actor-critic methods. arXiv preprint arXiv:1802.09477, 2018.
Fujimoto, S., Meger, D., and Precup, D. Off-policy deep reinforcement learning without exploration. In International Conference on Machine Learning, pp. 2052-2062. PMLR, 2019.
Gangwani, T., Peng, J., and Zhou, Y. Harnessing distribution ratio estimators for learning agents with quality and diversity. arXiv preprint arXiv:2011.02614, 2020.
Germain, M., Gregor, K., Murray, I., and Larochelle, H. Made: Masked autoencoder for distribution estimation. In International Conference on Machine Learning, pp. 881-889. PMLR, 2015.
Ghasemipour, S. K. S., Zemel, R., and Gu, S. A divergence minimization perspective on imitation learning methods. In Conference on Robot Learning, pp. 1259-1277. PMLR, 2020.
Goodfellow, I., Pouget-Abadie, J., Mirza, M., Xu, B., Warde-Farley, D., Ozair, S., Courville, A., and Bengio, Y. Generative adversarial nets. Advances in Neural Information Processing Systems, 27, 2014.
Haarnoja, T., Zhou, A., Abbeel, P., and Levine, S. Soft actor-critic: Off-policy maximum entropy deep reinforcement learning with a stochastic actor. arXiv preprint arXiv:1801.01290, 2018.
Hazan, E., Kakade, S., Singh, K., and Van Soest, A. Provably efficient maximum entropy exploration. In International Conference on Machine Learning, pp. 2681-2691. PMLR, 2019.
Ho, J. and Ermon, S. Generative adversarial imitation learning. arXiv preprint arXiv:1606.03476, 2016.
Ho, J., Chen, X., Srinivas, A., Duan, Y., and Abbeel, P. Flow++: Improving flow-based generative models with variational dequantization and architecture design. In International Conference on Machine Learning, pp. 2722-2730. PMLR, 2019.
Huang, C.-W., Krueger, D., Lacoste, A., and Courville, A. Neural autoregressive flows. In International Conference on Machine Learning, pp. 2078-2087. PMLR, 2018.
Islam, R., Teru, K. K., Sharma, D., and Pineau, J. Off-policy policy gradient algorithms by constraining the state distribution shift. arXiv preprint arXiv:1911.06970, 2019.
Ke, L., Choudhury, S., Barnes, M., Sun, W., Lee, G., and Srinivasa, S. Imitation learning as f-divergence minimization. In International Workshop on the Algorithmic Foundations of Robotics, pp. 313-329. Springer, 2020.
Kim, G.-H., Seo, S., Lee, J., Jeon, W., Hwang, H., Yang, H., and Kim, K.-E. Demodice: Offline imitation learning with supplementary imperfect demonstrations. In International Conference on Learning Representations, 2021a.
Kim, K., Jindal, A., Song, Y., Song, J., Sui, Y., and Ermon, S. Imitation with neural density models. Advances in Neural Information Processing Systems, 34:5360-5372, 2021b.
Kingma, D. P. and Ba, J. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.
Kingma, D. P.
and Dhariwal, P. Glow: Generative flow with invertible 1x1 convolutions. In Advances in Neural Information Processing Systems, pp. 10215-10224, 2018.
Kirichenko, P., Izmailov, P., and Wilson, A. G. Why normalizing flows fail to detect out-of-distribution data. arXiv preprint arXiv:2006.08545, 2020.
Kobyzev, I., Prince, S. J., and Brubaker, M. A. Normalizing flows: An introduction and review of current methods. IEEE Transactions on Pattern Analysis and Machine Intelligence, 43(11):3964-3979, 2020.
Kostrikov, I., Agrawal, K. K., Dwibedi, D., Levine, S., and Tompson, J. Discriminator-actor-critic: Addressing sample inefficiency and reward bias in adversarial imitation learning. arXiv preprint arXiv:1809.02925, 2018.
Kostrikov, I., Nachum, O., and Tompson, J. Imitation learning via off-policy distribution matching. arXiv preprint arXiv:1912.05032, 2019.
Lee, L., Eysenbach, B., Parisotto, E., Xing, E., Levine, S., and Salakhutdinov, R. Efficient exploration via state marginal matching. arXiv preprint arXiv:1906.05274, 2019.
Li, Z., Xu, T., Yu, Y., and Luo, Z.-Q. Rethinking valuedice: Does it really improve performance? arXiv preprint arXiv:2202.02468, 2022.
Liu, Q., Li, L., Tang, Z., and Zhou, D. Breaking the curse of horizon: Infinite-horizon off-policy estimation. Advances in Neural Information Processing Systems, 31, 2018.
Mutti, M., De Santi, R., De Bartolomeis, P., and Restelli, M. Challenging common assumptions in convex reinforcement learning. arXiv preprint arXiv:2202.01511, 2022.
Nachum, O., Chow, Y., Dai, B., and Li, L. Dualdice: Behavior-agnostic estimation of discounted stationary distribution corrections. Advances in Neural Information Processing Systems, 32, 2019.
Nota, C. and Thomas, P. S. Is the policy gradient a gradient? arXiv preprint arXiv:1906.07073, 2019.
Papamakarios, G., Pavlakou, T., and Murray, I. Masked autoregressive flow for density estimation. In Advances in Neural Information Processing Systems, pp. 2338-2347, 2017.
Papamakarios, G., Nalisnick, E. T., Rezende, D. J., Mohamed, S., and Lakshminarayanan, B. Normalizing flows for probabilistic modeling and inference. J. Mach. Learn. Res., 22(57):1-64, 2021.
Pathak, D., Agrawal, P., Efros, A. A., and Darrell, T. Curiosity-driven exploration by self-supervised prediction. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 16-17, 2017.
Qin, Z., Chen, Y., and Fan, C. Density constrained reinforcement learning. In International Conference on Machine Learning, pp. 8682-8692. PMLR, 2021.
Ross, S., Gordon, G., and Bagnell, D. A reduction of imitation learning and structured prediction to no-regret online learning. In Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, pp. 627-635. JMLR Workshop and Conference Proceedings, 2011.
Schroecker, Y. and Isbell, C. Universal value density estimation for imitation learning and goal-conditioned reinforcement learning. arXiv preprint arXiv:2002.06473, 2020.
Schroecker, Y. and Isbell, C. L. State aware imitation learning. In Advances in Neural Information Processing Systems, pp. 2911-2920, 2017.
Schroecker, Y., Vecerik, M., and Scholz, J. Generative predecessor models for sample-efficient imitation learning. arXiv preprint arXiv:1904.01139, 2019.
Schroecker, Y. K. D. Manipulating State Space Distributions for Sample-Efficient Imitation-Learning. PhD thesis, Georgia Institute of Technology, 2020.
Schulman, J., Wolski, F., Dhariwal, P., Radford, A., and Klimov, O. Proximal policy optimization algorithms. arXiv preprint arXiv:1707.06347, 2017.
Sun, M., Mahajan, A., Hofmann, K., and Whiteson, S. Softdice for imitation learning: Rethinking off-policy distribution matching. arXiv preprint arXiv:2106.03155, 2021.
Sutton, R. S. and Barto, A. G. Reinforcement learning: An introduction. MIT Press, 2018.
Sutton, R. S., McAllester, D. A., Singh, S. P., and Mansour, Y.
Policy gradient methods for reinforcement learning with function approximation. In Advances in Neural Information Processing Systems, pp. 1057-1063, 2000.
Todorov, E., Erez, T., and Tassa, Y. Mujoco: A physics engine for model-based control. In 2012 IEEE/RSJ International Conference on Intelligent Robots and Systems, pp. 5026-5033. IEEE, 2012.
Torabi, F., Warnell, G., and Stone, P. Generative adversarial imitation from observation. arXiv preprint arXiv:1807.06158, 2018.
Torabi, F., Warnell, G., and Stone, P. Dealio: Data-efficient adversarial learning for imitation from observation. In 2021 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 2391-2397. IEEE, 2021.
Zahavy, T., O'Donoghue, B., Desjardins, G., and Singh, S. Reward is enough for convex mdps. Advances in Neural Information Processing Systems, 34:25746-25759, 2021.
Zhang, J., Koppel, A., Bedi, A. S., Szepesvari, C., and Wang, M. Variational policy gradient method for reinforcement learning with general utilities. Advances in Neural Information Processing Systems, 33:4572-4583, 2020.
Zhu, Z., Lin, K., Dai, B., and Zhou, J. Off-policy imitation learning from observations. Advances in Neural Information Processing Systems, 33:12402-12413, 2020.
Ziebart, B. D., Maas, A. L., Bagnell, J. A., Dey, A. K., et al. Maximum entropy inverse reinforcement learning. 2008.

# A. Reproducibility

Code for reproducibility of CFIL, including a detailed description for reproducing our environment, is available at https://github.com/gfrend123/cfil.

# B. Ablation Results per Environment

Table 2. Extended version of Table 1, showing the ablation results per environment. The table contains averages and standard deviations of normalized asymptotic rewards over 5 seeds.

| | HALFCHEETAH-V2 | WALKER2D-V2 | ANT-V2 | HOPPER-V2 | HUMANOID-V2 |
| --- | --- | --- | --- | --- | --- |
| CFIL | 1.122±0.015 | 0.8542±0.063 | 1.095±0.022 | 0.986±0.011 | 1.004±0.015 |
| NOSQUASH | -0.056±0.074 | -0.002±0.001 | -0.411±0.377 | 0.001±0.000 | 0.010±0.006 |
| REGULARNET (SMOOTH=0) | -0.000±0.000 | 0.149±0.002 | 0.185±0.016 | 0.291±0.009 | 0.357±0.312 |
| REGULARNET (SMOOTH=0.5) | -0.000±0.000 | 0.152±0.001 | 0.169±0.017 | 0.289±0.002 | 0.340±0.155 |
| INDFLOW (SMOOTH=0) | 0.642±0.094 | 0.000±0.002 | 0.000±0.001 | 0.138±0.306 | 0.012±0.000 |
| INDFLOW (SMOOTH=0.5) | 0.611±0.068 | 0.011±0.018 | 0.000±0.001 | 0.001±0.000 | 0.012±0.000 |
| INDFLOWNS (SMOOTH=0) | 0.203±0.183 | 0.082±0.058 | 0.126±0.176 | 0.026±0.031 | 0.017±0.001 |
| INDFLOWNS (SMOOTH=0.5) | 0.290±0.255 | 0.014±0.025 | 0.030±0.029 | 0.010±0.017 | 0.017±0.001 |
| NUMERATOR (SMOOTH=0) | -0.047±0.050 | 0.026±0.034 | -0.270±0.230 | 0.011±0.009 | 0.023±0.015 |
| NUMERATOR (SMOOTH=0.5) | -0.006±0.006 | 0.007±0.015 | -0.037±0.035 | 0.011±0.000 | 0.015±0.001 |

![](images/4f2e0df6870c9fd4036a9774558653163bafbd59386703388d5d1c4d458c40b9.jpg)
Figure 5. Extended version of Figure 4, showing per environment the averages (as color) and standard deviations (as size) of normalized asymptotic rewards for CFIL with varied levels of smoothing and regularization. Each point summarizes 5 seeds. Once again, the importance of the smoothing and regularization is apparent, as is CFIL's lack of sensitivity to them.

# C. Justification of Modeling the Undiscounted State Distribution

With the goal in an MDP being policy optimization, standard RL taxonomy divides between value-based and policy-based methods. The policy gradient theorem (Sutton et al., 2000), the foundation of policy-based methods, provides the following expression for the gradient of $J$ with respect to a parameterized policy $\pi_{\theta}$:

$$
\nabla_{\theta} J(\pi_{\theta}) = \sum_{s} d_{\pi}^{\gamma}(s) \sum_{a} \nabla_{\theta} \pi_{\theta}(a|s) \, Q^{\pi}(s, a) \tag{10}
$$

where $d_{\pi}^{\gamma}(s) = \sum_{t=0}^{\infty} \gamma^{t} Pr(s_{t} = s | s_{0} \sim p_{0})$ is the improper discounted state distribution. Equation (10) enables the construction of policy optimization algorithms such as REINFORCE (Sutton & Barto, 2018), which are based on stochastic gradient descent (SGD). However, despite garnering some criticism (Nota & Thomas, 2019), most modern policy gradient methods (Schulman et al., 2017; Fujimoto et al., 2018; Haarnoja et al., 2018) opt to ignore the discount factor in $d_{\pi}^{\gamma}(s)$, which disqualifies them as SGD methods and thus forfeits the guarantees SGD provides. Instead, they update the policy with an expression similar to (10) in which $d_{\pi}^{\gamma}(s)$ is exchanged for the undiscounted version $d_{\pi}(s) = \sum_{t=0}^{\infty} Pr(s_{t} = s | s_{0} \sim p_{0})$. The reason is stability and reduction of variance (see Appendix A of (Haarnoja et al., 2018)).
In this work we follow suit, modeling the undiscounted $d_{\pi}(s)$.

# D. More on RealNVP and MAF

In pursuit of inspiring more research incorporating flow-based models of the state distribution, and since flows are presumably the topic least familiar to an IL researcher reading the paper, we now provide a brief but thorough review of RealNVP and MAF, along with some additional discussion.

Following the description in Section 2.2: for both training and generation to be efficient, $f_{\theta}$ must be easy to evaluate, easy to invert, and the determinant of its Jacobian must be easy to compute. Moreover, for training to be feasible, $f_{\theta}$ must be expressive enough to capture complex high-dimensional distributions.

RealNVP (Dinh et al., 2016) uses coupling layers (distinct from our coupled approach) to achieve the above requirements. As described in the paper (Dinh et al., 2016), a coupling layer receives a $D$-dimensional input vector $x$ and outputs:

$$
y_{1:d} = x_{1:d} \tag{11}
$$

$$
y_{d+1:D} = x_{d+1:D} \odot \exp(s(x_{1:d})) + t(x_{1:d})
$$

where $d < D$, $s$ and $t$ are functions from $R^d \to R^{D-d}$, and $\odot$ represents an element-wise product. Since coupling layers leave certain variables unchanged, they are composed in an alternating pattern to allow all variables to be transformed. Note that the Jacobian of this transformation is triangular, and its determinant is simply $\exp(\sum_{j} s(x_{1:d})_j)$. Note further that the inverse of this transformation is also a simple computation:

$$
x_{1:d} = y_{1:d} \tag{12}
$$

$$
x_{d+1:D} = (y_{d+1:D} - t(x_{1:d})) \odot \exp(-s(x_{1:d}))
$$

Finally, note that $s$ and $t$ are not required to be invertible, so they can be arbitrary neural networks.
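To make equations (11) and (12) concrete, the following is a minimal NumPy sketch of a single affine coupling layer. The `s` and `t` functions here are illustrative stand-ins (in RealNVP they are arbitrary neural networks), and $d = D/2$ is chosen only so the stand-ins map $R^d \to R^{D-d}$ without extra machinery:

```python
import numpy as np

def s(x1):
    # Stand-in for the scale network s: R^d -> R^(D-d); here d = D - d.
    return np.tanh(x1)

def t(x1):
    # Stand-in for the translation network t.
    return 0.5 * x1

def coupling_forward(x, d):
    # Equation (11): pass x_{1:d} through unchanged, affinely transform the rest.
    y = np.concatenate([x[:d], x[d:] * np.exp(s(x[:d])) + t(x[:d])])
    log_det = np.sum(s(x[:d]))  # log|det J| of the triangular Jacobian
    return y, log_det

def coupling_inverse(y, d):
    # Equation (12): exact inverse; s and t themselves need not be invertible.
    x1 = y[:d]
    return np.concatenate([x1, (y[d:] - t(x1)) * np.exp(-s(x1))])

x = np.random.randn(6)
y, log_det = coupling_forward(x, d=3)
assert np.allclose(coupling_inverse(y, d=3), x)  # exact round trip
```

Composing such layers in an alternating pattern, as described above, lets every coordinate be transformed while keeping both directions and the log-determinant cheap to compute.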
Masked Autoregressive Flow (MAF) (Papamakarios et al., 2017) is a generalization of RealNVP which substitutes (11) with the autoregressive

$$
y_{i} = x_{i} \exp(s_{i}(x_{1:i-1})) + t_{i}(x_{1:i-1}) \tag{13}
$$

As stated in Section 2.2, this improves expressiveness but prevents single-pass sampling. However, density evaluation, our main concern, can still be performed efficiently using masked autoencoders (Germain et al., 2015).

Generally, normalizing flows are sensitive objects; after all, density estimation is a difficult task, particularly with such limited data of such high dimension. Given the delicacy involved in their training, much deliberation went into the potential influence of various design choices, in particular the need for normalization, which is common in other IL works (Kostrikov et al., 2018; 2019; Schroecker & Isbell, 2020). Although we finally settled on the more robust approach of using no normalization or any other special pre-processing, normalization could still be useful and even necessary for other applications incorporating density models of the state distribution. Moreover, despite the existence of higher-capacity flows (Kingma & Dhariwal, 2018; Huang et al., 2018; Durkan et al., 2019; Ho et al., 2019), when learning from so few demonstrations, seemingly more powerful flows may be far more prone to overfitting than their less expressive counterparts. We aim to inspire others to take advantage of modern powerful density estimators for explicit modeling of the state and state-action distributions, and to apply them throughout the reinforcement learning literature.

# E. CFIL With More Trajectories

![](images/8e208a9b30e7d818846f84d505ce955cbf2cc54480e68abaf1227a333c739062.jpg)
Figure 6. CFIL's performance with varying numbers of expert trajectories (1, 5, and 10), demonstrating its consistency. Note, the single-trajectory run is identical to that of Figure 2.

# F. BC Analysis Extras

# F.1. RegularNet's BC Graph

Figure 7 shows the BC graph of RegularNet (see ablation) for the HalfCheetah-v2 environment. This serves as an example of an estimator which fails in RL despite a good BC graph (its failure in RL is illustrated in Table 1). As described in Section 4, a BC graph's utility is mainly one-way: when poor, it is indicative of failure in RL, but the reverse is not necessarily the case. Beyond that, however, it also provides insight into what an estimator captures, and it serves as a quick sanity test of an estimator's quality (it can be generated quickly on saved BC trajectories).

![](images/675b2b7068917b57d76735a4ad9267259148a2378d52d3614d7b6605e70a390a.jpg)
Figure 7. The BC graph of RegularNet (see ablation) for the HalfCheetah-v2 environment.

# F.2. Two-Dimensional BC Analysis

What we have termed the BC graph is the diagonal slice of a broader two-dimensional picture. Recall the generation process described in Section 4: BC graphs show the $i$'th estimator's evaluation of the $i$'th BC trajectory against the true environment reward of that same trajectory. However, one may take interest in evaluating all $N$ estimators on all $N$ trajectories. To that end, out of the $N = 375$ estimator updates, we show every 25'th (along with the first), evaluated on all the BC trajectories, scattered against their true environment rewards. This is illustrated in Figures 8, 9 and 10, corresponding to coupled flows, uncoupled (independent) flows and RegularNet, respectively. The figures provide insight into what individual estimators captured as well as how they morphed over time. For example, for the coupled flows, we can see how the later estimators (synthetic rewards) incentivize the agent not to go beyond expert level, while the early ones do not. This is reasonable, since estimator $i$ is only encountered by policy $\pi_i$ (hence why the BC graph is only the diagonal cut).
For this reason, evaluations of early estimators on much later trajectories are not particularly insightful with regard to what an RL agent with the analogously constructed reward will face. Still, they are interesting in their own right, showing what the estimator had captured at that time.

![](images/2f4d50d95d3c8d9f0f832978a6df560890de09c0e55665829ab4fca7c56555a1.jpg)
Coupled Flow Two-Dimensional BC Analysis (HalfCheetah-v2)
Figure 8. Two-dimensional BC analysis of coupled flows for the HalfCheetah-v2 environment. In contrast to the BC graph, which is the $i$'th estimator's evaluation of the $i$'th BC trajectory vs the true environment reward of that same trajectory, here we see scatters for individual estimators along the way. That is, their evaluations of all BC trajectories scattered against the true environment rewards of the trajectories.

![](images/fc8055c0a705a2e09118f3a0e21bc6457aed580bc8fb09929547c9cb15180e56.jpg)
Uncoupled Flow Two-Dimensional BC Analysis (HalfCheetah-v2)
Figure 9. Two-dimensional BC analysis of uncoupled flows for the HalfCheetah-v2 environment. See Figure 8's caption.

![](images/6df85331df7302adb4dc0f77f3f19fbdd6a91f12f18d7d9c0d81d33ce47d67ce.jpg)
RegularNet Two-Dimensional BC Analysis (HalfCheetah-v2)
Figure 10. Two-dimensional BC analysis of RegularNet for the HalfCheetah-v2 environment. See Figure 8's caption.
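For readers wishing to reproduce this kind of analysis on their own runs, a small sketch of the bookkeeping involved follows. The estimators and trajectories here are random stand-ins; in practice they would be the saved BC-trained estimators and the corresponding BC trajectories with their true environment rewards:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-ins: N saved estimators (synthetic reward functions over states)
# and the N BC trajectories generated alongside them.
N, T, dim = 8, 50, 3
trajectories = [rng.normal(size=(T, dim)) for _ in range(N)]
true_rewards = rng.normal(size=N)  # true environment return of each trajectory
weights = rng.normal(size=(N, dim))
estimators = [(lambda states, w=w: states @ w) for w in weights]

# Two-dimensional analysis: estimator i evaluated on trajectory j.
grid = np.array([[est(traj).sum() for traj in trajectories]
                 for est in estimators])  # shape (N, N)

# The BC graph is the diagonal slice of this grid, scattered against
# the true environment rewards of the corresponding trajectories.
bc_graph = np.stack([np.diag(grid), true_rewards], axis=1)  # (N, 2)
```

Each row of `grid` corresponds to one panel of the two-dimensional figures, while `bc_graph` holds the (estimated, true) pairs plotted in a BC graph.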
# A Critical Revisit of Adversarial Robustness in 3D Point Cloud Recognition with Diffusion-Driven Purification

Jiachen Sun1 Jiongxiao Wang2 Weili Nie3 Zhiding Yu3 Z. Morley Mao1 Chaowei Xiao2,3,4

# Abstract

3D point clouds serve as a crucial data representation in numerous real-world applications such as autonomous driving, robotics, and medical imaging.
While the advancements in deep learning have spurred the utilization of 3D point clouds, deep models are notoriously vulnerable to adversarial attacks. Various defense solutions have been proposed to build robust models against adversarial attacks. In this work, we pinpoint a major limitation of the leading empirical defense, adversarial training, when applied to 3D point cloud models: gradient obfuscation, which significantly hampers robustness against potent attacks. To bridge the gap, we propose PointDP, a purification strategy that leverages diffusion models to defend against 3D adversarial attacks. Since PointDP does not rely on predefined adversarial examples for training, it can defend against a variety of threats. We conduct a comprehensive evaluation of PointDP across six representative 3D point cloud architectures, employing sixteen strong and adaptive attacks to manifest its foundational robustness. Our evaluation shows that PointDP achieves significantly better (i.e., $12.6\% - 40.3\%$) adversarial robustness than state-of-the-art methods under strong attacks bounded by different $\ell_p$ norms.

1University of Michigan, Ann Arbor, MI, USA 2Arizona State University, AZ, USA 3NVIDIA, USA 4University of Wisconsin, Madison, WI, USA. Correspondence to: Jiachen Sun .

Proceedings of the $40^{th}$ International Conference on Machine Learning, Honolulu, Hawaii, USA. PMLR 202, 2023. Copyright 2023 by the author(s).

# 1. Introduction

Point cloud data is emerging as one of the most broadly used representations in 3D computer vision. It is a versatile data format available from various sensors like LiDAR and stereo cameras, as well as from computer-aided design (CAD) models, which depict physical objects by many coordinates in
Many deep learning-based 3D perception models have been proposed (Wang & Posner, 2015; Maturana & Scherer, 2015; Riegler et al., 2017; Wang et al., 2017; Qi et al., 2017a; Choy et al., 2019) and thus realized several safety-critical applications (e.g., autonomous driving) (Yin et al., 2021; Shi et al., 2019; 2020). Although deep learning models (Qi et al., 2017a;b) have exhibited performance boosts on many challenging tasks, extensive studies show that they are notoriously vulnerable to adversarial attacks (Cao et al., 2019; Sun et al., 2020a; Xiang et al., 2019), where attackers manipulate the input in an imperceptible manner, leading to incorrect predictions of the target model. Because of the broad applications of 3D point clouds in safety-critical fields (Hu et al., 2021), it is imperative to study the adversarial robustness of point cloud recognition models. + +The manipulation space for 2D adversarial attacks primarily involves altering pixel-level numeric values in input images. However, the flexible representation of 3D point clouds arguably expands the attack surface. For example, adversaries could shift or detach existing points (Xiang et al., 2019; Zheng et al., 2019), add new points into the pristine point cloud (Sun et al., 2021b), or generate totally new point clouds (Zhou et al., 2020) to launch attacks. Different strategies, including limits of the number of altered points and constraints of the maximal magnitude of shifted points (Sun et al., 2021b) were proposed to make attacks less perceptible. The inherent flexibility of 3D point cloud data formats enables a variety of attacks, complicating the development of a practical and universal defense mechanism. + +Given the safety-critical nature of 3D point cloud applications, numerous studies have aimed to enhance the robustness of 3D point cloud recognition models. 
Pioneering efforts such as DUP-Net (Zhou et al., 2019) and GvG-PointNet++ (Dong et al., 2020) incorporated statistical outlier removal (SOR) modules as pre-processing and in-network blocks, respectively, as mitigation strategies. More recently, Sun et al. (2020b) broke the robustness of DUP-Net and GvG-PointNet++ with specific adaptive attacks. Adversarial training has been acknowledged as the most potent defense, delivering strong empirical robustness for PointNet, DGCNN, and PCT (Sun et al., 2021b). Meanwhile, advanced purification strategies like IF-Defense (Wu et al., 2020) and LPC (Li et al., 2022) leverage more complex modules to cleanse adversarial point clouds. However, given that the point cloud is a sparse and unstructured data format and that there are significant differences between 2D and 3D perception models, we are motivated to re-think whether current adversarial training and purification-based methods are robust enough against stronger adversarial attacks.

Our journey in this work starts with revisiting the prior studies and exploring their actual adversarial robustness. By devising various types of strong adaptive attacks, we, for the first time, demonstrate that standard adversarial training (Madry et al., 2017) suffers from gradient obfuscation in deep point cloud recognition models, as the unstructured data format requires unique architectural designs to digest (§ 3). We also extensively evaluate IF-Defense and LPC to show that their purification strategies are actually vulnerable to stronger attacks and limited to perfectly-structured point clouds (§ 5.3).

Furthermore, we propose PointDP, an adversarial purification method that leverages a diffusion model as a pre-processing module to defend against 3D adversaries. As shown in Figure 2, PointDP consists of two components: (1) an off-the-shelf 3D point cloud diffusion model and (2) a classifier.
Given an input point cloud, PointDP takes three steps: (i) adding noise to the input data gradually via the diffusion process of the diffusion model, (ii) purifying the noised data step by step to get the reversed sample via the reverse process of the diffusion model (§ 4.1), and (iii) feeding the reversed sample to the final classifier. Applying a diffusion denoiser to point clouds is non-trivial as they have fewer semantics than 2D images. Different from DiffPure (Nie et al., 2022), which relies on unconditional diffusion models, we leverage a conditional diffusion model to improve the quality of the purified input. We therefore use an additional supervised contrastive loss term to further improve the end-to-end robustness in the latent feature space during the training of PointDP. Since PointDP does not rely on any type of predefined adversarial examples for training, it can defend against diverse unseen threats.

We rigorously evaluate PointDP with six representative point cloud models and sixteen adversarial attacks, including PGD (Sun et al., 2021b; Madry et al., 2017), C&W (Xiang et al., 2019; Carlini & Wagner, 2017), and point cloud-specific attacks (Zheng et al., 2019; Hamdi et al., 2020) with $\ell_0$ , $\ell_2$ , and $\ell_{\infty}$ norms. PointDP on average achieves $75.9\%$ robust accuracy while maintaining similar clean accuracy to the original models, outperforming existing studies by a significant margin. In a nutshell, our contributions are three-fold:

- We are the first to demonstrate that standard adversarial training (Madry et al., 2017; Sun et al., 2021b), the most longstanding defense in the 2D image recognition task, has a major limitation when applied to 3D point cloud models due to their architecture designs. We launch black-box attacks that degrade adversarially trained models' robust accuracy to merely $\sim 10\%$ , which is no longer useful for 3D point cloud recognition, validating our claim.
- We propose PointDP, which leverages diffusion models to purify adversarial 3D point clouds with a supervised contrastive loss term. PointDP is a general framework that is independent of the diffusion model used. We also formulate rigorous adaptive attacks on PointDP. We conduct an extensive evaluation on six representative models with numerous attacks to comprehensively understand the robustness of PointDP. Our evaluation shows that PointDP outperforms the previous state-of-the-art (SOTA) purification methods, IF-Defense (Wu et al., 2020) and LPC (Li et al., 2022), by $12.6\%$ and $40.3\%$ on average, respectively. PointDP also achieves a $14 - 27\times$ speed-up over SOTA purification methods.
- Based on our extensive exploration and experimentation, we set up a rigorous protocol with diverse attacks for robustness evaluation on 3D point cloud models to benefit future research in assessing the true robustness.

# 2. Related Work

In this section, we review the current progress of deep learning, adversarial attacks, and defenses for 3D point cloud recognition tasks.

# 2.1. Deep Learning on 3D Point Cloud Recognition

2D computer vision has achieved stellar progress on architectural designs of convolutional neural networks (He et al., 2016), followed by vision transformers (Dosovitskiy et al., 2020). However, there is currently no consensus on the architecture of 3D perception models since there is no standard data format for 3D perception (Sun et al., 2022). Early 3D networks used dense voxel grids for perception (Maturana & Scherer, 2015; Song & Xiao, 2016; Tchapmi et al., 2017), discretizing point clouds into voxel cells. PointNet (Qi et al., 2017a) pioneered leveraging global pooling to achieve memory-efficient permutation invariance in an end-to-end manner. PointNet++ (Qi et al., 2017b) and DGCNN (Wang et al., 2019) followed up to add sophisticated local clustering operations to advance the performance.
Sparse tensors are another direction in 3D network design (Graham & van der Maaten, 2017; Choy et al., 2019), using sparse 3D convolutions to improve perception performance. PointCNN and RSCNN reformed the classic pyramid CNN to improve local feature generation (Li et al., 2018; Liu et al., 2019b). PointConv and KPConv designed new convolution operations for point cloud learning (Wu et al., 2019; Thomas et al., 2019). PointTransformer and PCT advanced self-attention blocks in the 3D space and achieved good performance (Zhao et al., 2021; Guo et al., 2020). Various novel local clustering operations (Xiang et al., 2021; Ma et al., 2022) also show enhancements in clean performance. In this work, we focus on PointNet, PointNet++, DGCNN, PCT, CurveNet, and PointMLP as our evaluation backbones since they are representative and achieve state-of-the-art results in point cloud recognition.

# 2.2. Adversarial Attacks and Defenses

Adversarial attacks have become the main obstacle that hinders deep learning models from real-world deployment, especially in safety-critical applications (Eykholt et al., 2018; Sun et al., 2020a; Zhang et al., 2021; 2022c). Many adversarial attacks have been proposed in the 2D space to break various vision models (Carlini & Wagner, 2017; Xiao et al., 2018b; Yang et al., 2020; Xie et al., 2017; Huang et al., 2019; 2020; Sun et al., 2021c). To close the gap between standard and robust accuracy, many mitigation solutions have been studied to improve robustness against adversarial attacks (Yang et al., 2019; Xu et al., 2017; Bafna et al., 2018; Papernot et al., 2016; Meng & Chen, 2017; Zhang et al., 2019a; Xiao et al., 2018a; Zhang et al., 2020; Xiao et al., 2019).
However, most of them, including randomization (Liu et al., 2019a; Dhillon et al., 2018; Dong et al., 2020), model distillation (Papernot et al., 2016), adversarial detection (Meng & Chen, 2017), and input transformation (Yang et al., 2019; Xu et al., 2017; Papernot & McDaniel, 2017; Bafna et al., 2018; Zhou et al., 2019), have been compromised by adaptive attacks (Tramer et al., 2020; Athalye et al., 2018a). Adversarial training (AT) (Madry et al., 2017; Goodfellow et al., 2014; Wong et al., 2020; Shafahi et al., 2019), in contrast, delivered a more longstanding mitigation strategy (Xie et al., 2020a; Xie & Yuille, 2020; Zhang et al., 2019b). Most recently, Nie et al. (2022) proposed DiffPure, which leverages diffusion models to defend against adversarial attacks, and follow-up studies extended it to certified defenses (Carlini et al., 2022).

Adversarial attacks and defenses also extend to 3D point clouds. Xiang et al. (2019) first demonstrated that point cloud recognition models are vulnerable to adversarial attacks. They also introduced different threat models like point shifting and point adding attacks. Wen et al. (2019) enhanced the loss function of the C&W attack to achieve attacks with smaller perturbations, and Hamdi et al. (2020) presented transferable black-box attacks on point cloud recognition. Wicker & Kwiatkowska (2019) pioneered the study of the point dropping attack under both white- and black-box settings. Zhou et al. (2019) and Dong et al. (2020) proposed to purify the adversarial point clouds by input transformation and adversarial detection. However, these methods have been successfully broken by Sun et al. (2020b) through adaptive attacks. Moreover, Liu et al. (2019a) made a preliminary investigation on extending countermeasures in the 2D space to defend against simple attacks like FGSM (Goodfellow et al., 2014) on point cloud data. Sun et al.
(2021b;a) conducted a

```python
1 def knn(x, k):
2     inner = -2 * torch.matmul(x.transpose(2, 1), x)
3     xx = torch.sum(x ** 2, dim=1, keepdim=True)
4     pairwise_distance = -xx - inner - xx.transpose(2, 1)
5     idx = pairwise_distance.topk(k=k, dim=-1)[1]
6     # (batch_size, num_points, k)
7     return idx
8
9 def get_graph_feature(x, k):
10     # x's shape is (batch_size, num_dims, num_points)
11     idx = knn(x, k=k)  # (batch_size, num_points, k)
12     ...  # shape transformation here
13     feature = x.view(batch_size * num_points, -1)[idx, :]
14     # idx is used as an index to select features
15     ...
16     return feature
17
18 # forward function for EdgeConv
19 def forward(self, x):
20     ...
21     x = get_graph_feature(x, k=self.k)
22     x = self.conv1(x)  # convolution
23     ...
```

Figure 1: PyTorch-Style Code Snippet of EdgeConv (Wang et al., 2019) in Point Cloud Recognition Models. Adversarial training fails since the kNN layers leverage the top- $k$ function, whose gradient propagates only through the selected indices, resulting in gradient obfuscation.

more thorough study on the application of self-supervised learning in adversarial training for 3D point cloud recognition. Besides adversarial training, advanced purification methods IF-Defense (Wu et al., 2020) and LPC (Li et al., 2022) were proposed to transform adversarial examples into a clean manifold. In this work, we present PointDP, which utilizes 3D diffusion models to purify adversarial point clouds and delivers SOTA empirical robustness. We also demonstrate that standard adversarial training suffers from strong black-box attacks and that SOTA purification methods (i.e., IF-Defense and LPC) are vulnerable to PGD-styled adversaries.

# 3. Catastrophe of Adversarial Training!

Adversarial training (AT) is well-known as the most longstanding empirical defense for the 2D classification task, which has been applied to PointNet, DGCNN, and PCT with the help of self-supervised learning (Sun et al., 2021b).
However, we find that AT is, in fact, a weak defense for 3D perception models. First, point cloud models (e.g., PointNet++ and CurveNet) often leverage sampling strategies, like furthest point sampling (FPS), to select anchor points. Such sampling involves high randomness: AT either cannot converge when the random seed differs across iterations or overfits to a single random seed. Therefore, AT does not suit these models. Moreover, we discover that kNN layers also cause severe gradient obfuscation in point cloud models. Different from the standard training process, which only needs the gradient of the loss function w.r.t. the model parameters $\frac{\partial\mathcal{L}}{\partial w}$ , AT additionally requires the gradient flow to the input (i.e., the point cloud) $\frac{\partial\mathcal{L}}{\partial x}$ in the inner maximization stage.

![](images/828ac5b9035bdacde0a2abb379c555720778fc4f7aa7d4dc2cdc11590886240a.jpg)
Figure 2: Illustration of PointDP, which serves as a purification module. We leverage a supervised contrastive loss term during the training of the diffusion model. Without purification by PointDP, the adversarial point cloud is incorrectly classified as "toilet" by the recognition model.

Table 1: Robust Accuracy (%) of Adversarial Training with $\ell_{\infty}$ Norm $\epsilon = 0.05$.

| | PointNet | DGCNN | PCT |
|---|---|---|---|
| None | 87.8 | 90.6 | 89.7 |
| PGD | 52.1 | 67.4 | 51.3 |
| AutoAttack | 40.5 | 56.4 | 47.2 |
| SPSA | 56.7 | 7.8 | 11.4 |
| Nattack | 55.1 | 5.4 | 6.5 |

As shown in Line 5 of Figure 1, kNN essentially applies the top- $k$ operation for point selection. Top- $k$ is a generalization of max pooling that has no trainable model parameters, so it does not affect standard training. However, top- $k$ is not differentiable w.r.t. the input $x$ . Therefore, implementations simplify the gradient flow through the top- $k$ function as an indexing function to keep the chain-rule propagation smooth:

$$
\{\boldsymbol{y}\}_{1}^{k} = \operatorname{top-}k\left(\{\boldsymbol{x}\}_{1}^{n}\right), \quad \frac{\partial \boldsymbol{y}}{\partial \boldsymbol{x}_{i}} = \begin{cases} 1 & \text{if } i \in \arg\operatorname{top-}k\left(\{\boldsymbol{x}\}_{1}^{n}\right) \\ 0 & \text{otherwise} \end{cases} \tag{1}
$$

However, such simplification still cannot resolve the differentiability issue of the top- $k$ function w.r.t. the input (Xie et al., 2020b).

Different from 2D models, which usually use at most one max pooling layer, the heavy usage of kNN layers in 3D point cloud models like DGCNN and PCT drastically hinders the actual gradient flow w.r.t. the input. As mentioned in § 5.1, we exploit black-box SPSA and Nattack to validate our findings. Table 1 presents the results of AT. SPSA and Nattack lower the average robust accuracy on DGCNN and PCT to $7.8\%$ , far below the $55.6\%$ achieved under white-box attacks. This phenomenon exactly reveals gradient obfuscation, as white-box attacks rely on the backward-propagated gradient to succeed. The results demonstrate that the approximated gradients from black-box attacks are more accurate than the propagated ones.
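The indicator gradient in Eq. (1) can be made concrete with a few lines of plain Python (an illustrative sketch, not the models' actual implementation): the selected entries receive gradient 1, every other entry receives 0, and the index computation itself is invisible to differentiation.

```python
def top_k_with_grad(x, k):
    """Top-k forward pass plus the simplified gradient of Eq. (1):
    d(sum y)/dx_i = 1 if i is among the selected indices, else 0.
    The argsort that picks the indices contributes no gradient."""
    idx = sorted(range(len(x)), key=lambda i: -x[i])[:k]
    y = [x[i] for i in idx]
    chosen = set(idx)
    grad = [1.0 if i in chosen else 0.0 for i in range(len(x))]
    return y, grad

y, grad = top_k_with_grad([3.0, 1.0, 2.0], k=2)
# y = [3.0, 2.0]; grad = [1.0, 0.0, 1.0]
```

Stacking many such layers leaves a white-box attacker with an almost piecewise-constant gradient signal, which is consistent with the black-box results in Table 1.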
PointNet, in contrast, achieves stronger robustness under black-box attacks because it only + +has one max pooling layer and does not employ kNN layers. The failure of AT demonstrates that adversarial analysis of 3D point cloud models requires extra care. Otherwise, the claimed robustness may fail with adaptive attacks. We further show the failure of other purification methods in § 5.3. All these results highlight the desire for a rigorous robustness evaluation protocol for 3D point cloud models. + +# 4. PointDP: Diffusion-Driven Purification + +We first introduce the preliminaries of diffusion models and then propose PointDP that first introduces noise to the adversarial 3D point clouds, followed by the forward process of diffusion models to get diffused point clouds. Purified point clouds are recovered through the reverse process (§4.2). Next, we follow Nie et al. (2022) to apply the adjoint method to backward propagate through SDE for efficient gradient evaluation with strong adaptive attacks (§4.3). + +# 4.1. Preliminaries + +In this section, we briefly review the background of conditional diffusion models in 3D vision tasks. Following Luo & Hu (2021), we use the discrete-time formulation of the forward and reverse processes. + +Given a clean point cloud sampled from the unknown data distribution $\pmb{x}_0 \sim q(\pmb{x})$ , the forward process of the diffusion model leverages a fixed Markov chain to gradually add Gaussian noise to the clean point cloud $\pmb{x}_0$ over a predefined $N$ time steps, resulting in a number of noisy point clouds $\{\pmb{x}_1, \pmb{x}_2, \dots, \pmb{x}_N\}$ . 
Mathematically, the forward process is defined as:

$$
q\left(\boldsymbol{x}_{1:N} \mid \boldsymbol{x}_{0}\right) := \prod_{n=1}^{N} q\left(\boldsymbol{x}_{n} \mid \boldsymbol{x}_{n-1}\right), \tag{2}
$$

$$
q\left(\boldsymbol{x}_{n} \mid \boldsymbol{x}_{n-1}\right) := \mathcal{N}\left(\boldsymbol{x}_{n}; \sqrt{1-\beta_{n}}\,\boldsymbol{x}_{n-1}, \beta_{n}\mathbf{I}\right)
$$

where $\beta_{n}$ is a scheduling function of the added Gaussian noise, satisfying $0 < \beta_{1},\dots ,\beta_{N} < 1$ .

The reverse process, in contrast, is trained to recover the clean point cloud from the diffused one in an iterative manner. 3D point clouds have less semantics than 2D images due to the lack of texture information. Therefore, point cloud diffusion models leverage a separate encoder $e$ to extract a latent feature $z_{\pmb{x}} = e(\pmb{x})$ as a condition to help recover the clean point cloud:

$$
p_{\boldsymbol{\theta}}\left(\boldsymbol{x}_{0:N} \mid \boldsymbol{z}\right) := p\left(\boldsymbol{x}_{N}\right) \prod_{n=1}^{N} p_{\boldsymbol{\theta}}\left(\boldsymbol{x}_{n-1} \mid \boldsymbol{x}_{n}, \boldsymbol{z}\right), \tag{3}
$$

$$
p_{\boldsymbol{\theta}}\left(\boldsymbol{x}_{n-1} \mid \boldsymbol{x}_{n}, \boldsymbol{z}\right) := \mathcal{N}\left(\boldsymbol{x}_{n-1} \mid \boldsymbol{\mu}_{\boldsymbol{\theta}}\left(\boldsymbol{x}_{n}, n, \boldsymbol{z}\right), \beta_{n}\mathbf{I}\right)
$$

where $\mu_{\theta}$ denotes the approximated mean value parameterized by a neural network. The training objective is to learn the variational bound of the negative log-likelihood (Luo & Hu, 2021). In practice, we jointly train the encoder $e$ with the noise predictor $\epsilon_{\theta}(x_n,n,z)$ .
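Because the Gaussian steps in Eq. (2) compose, $x_n$ can be sampled from $x_0$ in closed form as $x_n = \sqrt{\bar{\alpha}_n}\,x_0 + \sqrt{1-\bar{\alpha}_n}\,\epsilon$ with $\bar{\alpha}_n = \prod_{i \le n}(1-\beta_i)$, the quantity that also appears in Eq. (4) below. A minimal per-coordinate sketch; the linear $\beta$ schedule here is an assumption for illustration, not the paper's setting:

```python
import math
import random

def forward_diffuse(x0, n, betas, rng=random.Random(0)):
    """Closed-form sample from q(x_n | x_0):
    x_n = sqrt(alpha_bar_n) * x_0 + sqrt(1 - alpha_bar_n) * eps, eps ~ N(0, I),
    where alpha_bar_n = prod_{i<=n} (1 - beta_i)."""
    alpha_bar = 1.0
    for beta in betas[:n]:
        alpha_bar *= 1.0 - beta
    return [math.sqrt(alpha_bar) * v
            + math.sqrt(1.0 - alpha_bar) * rng.gauss(0.0, 1.0)
            for v in x0]

# Illustrative linear beta schedule: later steps carry progressively more noise.
betas = [0.0001 + i * (0.02 - 0.0001) / 199 for i in range(200)]
noisy = forward_diffuse([0.5, -0.2, 0.1], n=30, betas=betas)
```

With `n = 0` the function returns the input unchanged, and as `n` grows the output approaches pure Gaussian noise.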
Similar to the DDPM model (Dhariwal & Nichol, 2021), we can conduct the sampling by reparameterizing $\mu_{\theta}$ as

$$
\boldsymbol{\mu}_{\boldsymbol{\theta}}\left(\boldsymbol{x}_{n}, n, \boldsymbol{z}\right) = \frac{1}{\sqrt{1-\beta_{n}}}\left(\boldsymbol{x}_{n} - \frac{\beta_{n}}{\sqrt{1-\bar{\alpha}_{n}}}\,\boldsymbol{\epsilon}_{\boldsymbol{\theta}}\left(\boldsymbol{x}_{n}, n, \boldsymbol{z}\right)\right) \tag{4}
$$

where $\overline{\alpha}_n = \prod_{i=1}^n (1 - \beta_i)$ . It is worth noting that point cloud diffusion models have recently achieved SOTA performance on generating and autoencoding 3D point clouds, which provides us with opportunities for adversarial point cloud purification.

# 4.2. Design of PointDP

Overview. Figure 2 illustrates the pipeline of PointDP. Different from Nie et al. (2022), which uses an unconditional diffusion model to remove the adversarial effect for 2D images, we use a conditional diffusion model as described in § 4.1. Specifically, PointDP first adds pre-quantified Gaussian noise to the input data and then leverages a well-trained diffusion model to purify the noisy point cloud step by step to recover the clean point cloud. The reversed point cloud is finally fed into the recognition model for the classification task. Note that we do not aim at designing new point cloud diffusion models; instead, we propose a novel purification pipeline with rigorous evaluations as our main contribution.

Following Nie et al. (2022), in order to back-propagate through the forward and reverse processes for computing gradients, we first convert the discrete-time formulation defined in Eqs. (2) and (3) to its continuous-time counterpart, i.e., the forward and reverse stochastic differential equations (SDEs) (Song et al., 2021). Let $\pmb{x}_a$ be an adversarial example w.r.t.
the pristine classifier $f$ . We initialize the input of the forward diffusion process as $\pmb{x}_a$ , i.e., $\pmb{x}_0 = \pmb{x}_a$ . Also, let $\pmb{x}\left(\frac{n}{N}\right) := \pmb{x}_n$ , $\beta\left(\frac{n}{N}\right) := \beta_n$ , $\alpha\left(\frac{n}{N}\right) := \overline{\alpha}_n$ , and $t \in \{0, \frac{1}{N}, \dots, \frac{N-1}{N}\}$ . The forward diffusion process from $t = 0$ to $t = t^{*}\in (0,1)$ can be solved by:

$$
\boldsymbol{x}\left(t^{*}\right) = \sqrt{\alpha\left(t^{*}\right)}\,\boldsymbol{x}_{a} + \sqrt{1-\alpha\left(t^{*}\right)}\,\boldsymbol{\epsilon} \tag{5}
$$

where $\epsilon \sim \mathcal{N}(0, \mathbf{I})$ . We leverage Eq. 3 to recover the clean point clouds. Equivalently, the truncated reverse process can also be solved by the SDE solver in (Nie et al., 2022) (denoted as sdeint):

$$
\hat{\boldsymbol{x}}(0) = \operatorname{sdeint}\left(\boldsymbol{x}\left(t^{*}\right), \boldsymbol{f}_{\text{rev}}, g_{\text{rev}}, \boldsymbol{w}, t^{*}, 0\right) \tag{6}
$$

where the six inputs are the initial value, drift coefficient, diffusion coefficient, Wiener process, initial time, and end time (Nie et al., 2022), with the definitions:

$$
\boldsymbol{f}_{\text{rev}}(\boldsymbol{x}, t, \boldsymbol{z}) = -\frac{1}{2}\beta(t)\left[\boldsymbol{x} + 2\boldsymbol{s}_{\boldsymbol{\theta}}(\boldsymbol{x}, t, \boldsymbol{z})\right], \quad g_{\text{rev}}(t) = \sqrt{\beta(t)} \tag{7}
$$

and the score function $s_\theta$ is derived from $\epsilon_\theta(x_n, n, z)$ in Eq. (4) as follows:

$$
\boldsymbol{s}_{\boldsymbol{\theta}}(\boldsymbol{x}, t, \boldsymbol{z}) = -\frac{1}{\sqrt{1-\alpha(t)}}\,\boldsymbol{\epsilon}_{\boldsymbol{\theta}}(\boldsymbol{x}(t), tN, \boldsymbol{z}) \tag{8}
$$

Note that the hyper-parameters $t^*$ and $N$ trade off denoising performance and efficiency.
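Putting Eqs. (4) and (5) together, the discrete-time purification loop first diffuses the input to step $n^*$ and then iterates the learned reverse update back to step 0. A per-coordinate sketch under stated assumptions: `eps_theta` is a hypothetical placeholder for the trained conditional noise predictor, and `z` stands in for the encoder's latent condition.

```python
import math
import random

def purify(x_adv, n_star, betas, eps_theta, z, rng=random.Random(0)):
    """PointDP purification sketch (per coordinate, illustrative only).
    `eps_theta(x, n, z)` is a placeholder for the trained conditional
    noise predictor; `z` is the latent condition from the encoder."""
    # Precompute alpha_bar_n = prod_{i<=n} (1 - beta_i)
    alpha_bars, a = [], 1.0
    for beta in betas:
        a *= 1.0 - beta
        alpha_bars.append(a)
    # (i) forward-diffuse the input to step n* (Eq. 5)
    a_star = alpha_bars[n_star - 1]
    x = [math.sqrt(a_star) * v + math.sqrt(1.0 - a_star) * rng.gauss(0.0, 1.0)
         for v in x_adv]
    # (ii) reverse the chain step by step using the posterior mean of Eq. (4)
    for n in range(n_star, 0, -1):
        beta_n, a_n = betas[n - 1], alpha_bars[n - 1]
        eps = eps_theta(x, n, z)
        mean = [(xi - beta_n / math.sqrt(1.0 - a_n) * ei) / math.sqrt(1.0 - beta_n)
                for xi, ei in zip(x, eps)]
        noise = [math.sqrt(beta_n) * rng.gauss(0.0, 1.0) if n > 1 else 0.0
                 for _ in x]
        x = [m + s for m, s in zip(mean, noise)]
    return x  # (iii) feed into the classifier
```

The final noise term is dropped at `n == 1`, matching the usual DDPM sampling convention.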
We empirically choose $t^* = 0.15$ and $N = 200$ in our study, which has shown satisfactory results in our evaluation (§ 5). We also conduct ablation studies on $t^*$ in § 5.2.

Since we leverage conditional diffusion models, we add a contrastive loss term (Khosla et al., 2020) to further improve the robustness of the latent feature during training:

$$
\mathcal{L}_{\operatorname{SupCon}} = \sum_{i \in I} \frac{-1}{|P(i)|} \sum_{p \in P(i)} \log \frac{\exp\left(\boldsymbol{z}_{i} \cdot \boldsymbol{z}_{p}\right)}{\sum_{a \in A(i)} \exp\left(\boldsymbol{z}_{i} \cdot \boldsymbol{z}_{a}\right)} \tag{9}
$$

where $A(i)$ denotes the mini-batch $I$ except for $i$ itself and $P(i) := \{p \in A(i) : y_p = y_i\}$ . The intuition is that $\mathcal{L}_{\mathrm{SupCon}}$ could further enforce the latent features of different classes to be clustered distantly.

# 4.3. Adaptive Attacks on PointDP

PointDP is a pre-processing module that purifies adversarial perturbations. Athalye et al. (2018a) have shown that input transformation-based methods can be broken by specifically designed attacks. Therefore, it is essential to model the adaptive attacks on PointDP to demonstrate its lower-bound adversarial robustness. We thus formulate two types of adaptive attacks on PointDP.

Attack on the Latent Feature. As PointDP utilizes conditional diffusion models for adversarial purification, the latent feature $z$ is a good candidate for adversaries to launch attacks. Concretely, adversaries can set the goal to maximize some distance metric $\mathcal{D}$ between the latent feature of the optimized adversarial examples and the oracle latent feature of clean inputs $z_{\mathrm{oracle}}$ .
Without loss of generality, the adaptive attacks can be formulated as:

$$
\boldsymbol{x}_{s+1} = \operatorname{Proj}_{\boldsymbol{x}+S}\left(\boldsymbol{x}_{s} + \alpha \cdot \operatorname{norm}\left(\nabla_{\boldsymbol{x}_{s}} \mathcal{D}\left(e\left(\boldsymbol{x}_{s}\right), \boldsymbol{z}_{\text{oracle}}\right)\right)\right), \tag{10}
$$

where $x_{s}$ denotes the adversarial example at the $s$ -th step, Proj is the function that projects the adversarial examples into the pre-defined space $S$ , and $\alpha$ is the attack step size. We choose two distance metrics in our study: the first is the KL divergence (Goldberger et al., 2003) and the other is the $\ell_1$ norm distance. In our evaluation (§ 5), we report the lowest accuracy achieved under attacks with the two distance metrics.

Attack on the Reverse Diffusion Process. We follow Nie et al. (2022) to formulate the adaptive attack as an augmented SDE process. We re-state the attack formulation below. For the SDE in Equation 6, the augmented SDE that computes the gradient $\frac{\partial\mathcal{L}}{\partial\boldsymbol{x}(t^{*})}$ of back-propagating through it is given by:

$$
\begin{pmatrix} \boldsymbol{x}\left(t^{*}\right) \\ \frac{\partial\mathcal{L}}{\partial\boldsymbol{x}\left(t^{*}\right)} \end{pmatrix} = \operatorname{sdeint}\left(\begin{pmatrix} \hat{\boldsymbol{x}}(0) \\ \frac{\partial\mathcal{L}}{\partial\hat{\boldsymbol{x}}(0)} \end{pmatrix}, \tilde{\boldsymbol{f}}, \tilde{\boldsymbol{g}}, \tilde{\boldsymbol{w}}, 0, t^{*}\right) \tag{11}
$$

where $\frac{\partial\mathcal{L}}{\partial\hat{x}(0)}$ is the gradient of the objective $\mathcal{L}$ w.r.t.
the output $\hat{x} (0)$ of the SDE in Equation 6, and

$$
\tilde{\boldsymbol{f}}([\boldsymbol{x}; \boldsymbol{z}], t) = \begin{pmatrix} \boldsymbol{f}_{\text{rev}}(\boldsymbol{x}, t) \\ \frac{\partial \boldsymbol{f}_{\text{rev}}(\boldsymbol{x}, t)}{\partial \boldsymbol{x}} \boldsymbol{z} \end{pmatrix},
$$

$$
\tilde{\boldsymbol{g}}(t) = \begin{pmatrix} -g_{\mathrm{rev}}(t)\mathbf{1} \\ \mathbf{0} \end{pmatrix}, \quad \tilde{\boldsymbol{w}}(t) = \begin{pmatrix} -\boldsymbol{w}(1-t) \\ -\boldsymbol{w}(1-t) \end{pmatrix}
$$

where $\mathbf{1}$ and $\mathbf{0}$ denote the vectors of all ones and all zeros, respectively. Nie et al. (2022) have demonstrated that such approximation aligns well with the actual gradient value. Therefore, we leverage this adaptive attack formulation for our evaluation.

# 5. Experiments and Results

In this section, we first introduce our experimental setups (§ 5.1). We then present the standard robustness evaluation of PointDP (§ 5.2). We next show how the SOTA adversarial training and adversarial purification methods fail under various strong attacks (§ 5.3). We finally conduct a stress test on PointDP to show its actual robustness under various stronger adaptive attacks (§ 5.4).

# 5.1. Experimental Setups

Datasets and Network Architectures. We conduct all the main experiments on the widely used ModelNet40 point cloud classification benchmark (Wu et al., 2015), consisting of 12,311 CAD models from 40 artificial object categories.

Table 2: Robust Accuracy (%) of Plain Models under PA and PD on ModelNet40. Models under other attacks mostly have $0.0\%$ accuracy, as detailed in Appendix A.
| | PointNet | PointNet++ | DGCNN | PCT | CurveNet | PointMLP |
|---|---|---|---|---|---|---|
| None | 90.1 | 92.8 | 92.5 | 92.8 | 93.2 | 93.5 |
| PA | 44.1 | 19.9 | 35.1 | 20.8 | 48.9 | 7.2 |
| PD | 33.3 | 69.8 | 64.5 | 53.0 | 72.6 | 71.1 |

In addition, we leverage ScanObjectNN (Uy et al., 2019) to further demonstrate the superiority of PointDP. ScanObjectNN is a real-world dataset consisting of 2,902 point clouds within 15 classes. For ModelNet40, we adopt the official split with 9,843 samples for training and 2,468 for testing. We also uniformly sample 1024 points from the surface of each object and normalize them into an edge-length-2 cube, following most of the prior arts (Qi et al., 2017a). For the ScanObjectNN dataset, we adhere to the original configuration of 2048 points and maintain experimental setups consistent with those employed for ModelNet40. As mentioned before, there are various backbones for 3D point cloud recognition in the literature. To demonstrate the universality of PointDP, we select six representative model architectures including PointNet (Qi et al., 2017a), PointNet++ (Qi et al., 2017b), DGCNN (Wang et al., 2019), PCT (Guo et al., 2020), CurveNet (Xiang et al., 2021), and PointMLP (Ma et al., 2022). These backbones either have representative designs (e.g., Transformer and MLP) or achieve SOTA performance on the ModelNet40 benchmark (e.g., CurveNet and PointMLP).

Adversarial Attacks. As briefly described in § 2.2, adversarial attacks can be roughly categorized into C&W- and PGD-styled attacks. C&W attacks incorporate the perturbation magnitude into the objective term of the optimization procedure via a Lagrange multiplier, while PGD attacks impose the perturbation magnitude as a hard constraint in the optimization procedure. Moreover, adversarial attacks typically use an $\ell_p$ norm as the distance metric for the perturbation. Although a number of attacks measure Chamfer and Hausdorff "distances" in 3D point clouds (Xiang et al., 2019), these are not formal distance metrics as they do not satisfy the triangle inequality. Therefore, we still leverage the $\ell_2$ and $\ell_{\infty}$ norms, following most defense studies in both 2D and 3D vision tasks (Carlini & Wagner, 2017; Sun et al., 2021b).
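For reference, one iteration of such a PGD-styled $\ell_\infty$ attack ascends along the sign of the gradient and projects back into the $\epsilon$-ball around the clean input. A framework-agnostic, per-coordinate sketch; the gradient is supplied by the caller, and this is not any specific implementation from the literature:

```python
def pgd_linf_step(x_adv, x_clean, grad, alpha=0.01, eps=0.05):
    """One PGD step under the l_inf constraint: ascend along sign(grad),
    then clip each coordinate back into [x_clean - eps, x_clean + eps]."""
    stepped = [xa + alpha * (1.0 if g > 0 else -1.0 if g < 0 else 0.0)
               for xa, g in zip(x_adv, grad)]
    return [min(max(s, xc - eps), xc + eps)
            for s, xc in zip(stepped, x_clean)]

x = pgd_linf_step([0.10, 0.20], [0.10, 0.20], grad=[0.5, -2.0])
# moves the coordinates by +0.01 / -0.01 and stays inside the 0.05 ball
```

A C&W-styled attack would instead fold the perturbation size into the loss and leave the iterates unconstrained, which is why PGD's hard budget makes for a cleaner robustness benchmark.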
We also design adaptive attacks on our proposed method (§ 4.3). Besides naive C&W and PGD attacks, we leverage attacks specifically designed to break the robustness of point cloud recognition, such as kNN (Tsai et al., 2020) and AdvPC (Hamdi et al., 2020). We also apply the strong adaptive AutoAttack (Croce & Hein, 2020) (i.e., APGD) in our evaluation. Moreover, we use SPSA (Uesato et al., 2018) and Nattack (Li et al., 2019) as black-box adversaries, following the suggestion of Carlini et al. (2019). We also leverage EOT-AutoAttack. Point adding (PA) and dropping/detaching (PD) attacks are also evaluated in our study, following the setups in (Sun et al., 2021b). We set the attack steps to 200 to maximize the adversarial capability and follow the settings in (Sun et al., 2021b) for other attack parameters.

Table 3: Robust Accuracy (%) of Adversarial Attacks on PointDP on ModelNet40. Colored rows correspond to rows in Table 5 for clear comparisons with IF-Defense results.

| Norm | Attack | PointNet | PointNet++ | DGCNN | PCT | CurveNet | PointMLP |
|---|---|---|---|---|---|---|---|
| | None | 86.8 | 87.9 | 86.9 | 87.0 | 88.0 | 88.2 |
| $\ell_{\infty}$ ($\epsilon = 0.05$) | C&W | 77.9 | 78.6 | 78.9 | 76.8 | 73.1 | 76.2 |
| | PGD | 78.1 | 80.6 | 80.3 | 77.2 | 74.8 | 79.8 |
| | AdvPC | 69.7 | 76.6 | 79.1 | 79.4 | 72.6 | 75.2 |
| | PA | 82.1 | 85.1 | 84.8 | 85.5 | 86.3 | 85.8 |
| $\ell_{2}$ ($\epsilon = 1.25$) | C&W | 82.4 | 82.9 | 81.9 | 80.9 | 81.5 | 82.6 |
| | PGD | 80.1 | 75.0 | 74.6 | 72.0 | 71.7 | 76.4 |
| | AdvPC | 69.1 | 76.3 | 79.0 | 74.2 | 74.1 | 75.6 |
| | kNN | 83.5 | 82.9 | 83.3 | 82.3 | 81.5 | 83.1 |
| $\ell_{0}$ ($\epsilon = 200$) | PD | 68.9 | 74.1 | 77.3 | 76.3 | 76.8 | 77.4 |

Evaluation Metrics. We leverage two main metrics to evaluate the performance of our defense proposal: standard and robust accuracy. The standard accuracy measures the performance of the defense method on clean data, evaluated on the whole test set of ModelNet40. The robust accuracy measures the performance on adversarial examples generated by different attacks. Because of the high computational cost of applying adaptive and black-box attacks to our method, we evaluate robust accuracy for our defense on a fixed subset of 128 point clouds randomly sampled from the test set. Notably, the robust accuracies of most baselines change little between the sampled subset and the whole test set. We evaluate the robust accuracy on the whole test set for adversarial attacks with acceptable overhead (e.g., C&W and PGD attacks).

Baseline. Without any defense applied to the original recognition models, the robust accuracy is mostly $0.0\%$ for all models under $\ell_{2}$ - and $\ell_{\infty}$ -based attacks (see Appendix A). DGCNN exceptionally achieves $64\%$ on $\ell_{2}$ -based PGD and AutoAttack due to its dynamic clustering design, which adaptively discards outlier points. PA and PD are two weaker attacks, and Table 2 presents robust accuracy against them.

# 5.2. Experiment Results of PointDP

In this section, we first present the evaluation results of PointDP under attacks on the plain models. We train the diffusion and 3D point cloud recognition models in sequential order. Table 3 presents the detailed results of PointDP against attacks on six models. We find that PointDP overall achieves satisfactory results across all models and attacks. The average robust accuracy against adversarial attacks is above $75\%$ .
We observe a drop in the clean accuracy for the chosen models due to the imperfect reconstruction of diffusion models. As mentioned before, designing diffusion models for 3D point clouds is a more difficult task than 2D image diffusion, which may lead to partial semantic loss. The average drop in standard accuracy is $4.9\%$ . We find that DGCNN still achieves the best robustness combined with PointDP, with a robust accuracy of $79.9\%$ . We further compare the performance of PointDP with adversarial training, IF-Defense, and LPC in the next section.

Table 5: Robust Accuracy (%) of Adversarial Attacks on IF-Defense on ModelNet40. Colored rows correspond to rows in Table 3 for clear comparisons with PointDP results.

| | Norm | Attack | PointNet | PointNet++ | DGCNN | PCT | CurveNet | PointMLP |
|---|---|---|---|---|---|---|---|---|
| ONet | | None | 90.0 | 92.8 | 92.4 | 92.8 | 93.1 | 93.5 |
| | $\ell_{\infty}$ ($\epsilon = 0.05$) | PGD | 69.9 | 74.0 | 61.0 | 54.1 | 51.9 | 61.6 |
| | | AdvPC | 69.4 | 72.8 | 61.6 | 53.9 | 53.6 | 62.5 |
| | $\ell_2$ ($\epsilon = 1.25$) | PGD | 74.2 | 77.5 | 70.5 | 67.2 | 68.7 | 70.5 |
| | | AdvPC | 69.0 | 72.9 | 63.0 | 64.5 | 55.4 | 67.9 |
| ConvONet | | None | 90.1 | 92.8 | 92.5 | 92.8 | 93.2 | 93.5 |
| | $\ell_{\infty}$ ($\epsilon = 0.05$) | PGD | 66.4 | 73.2 | 52.9 | 46.8 | 45.3 | 55.7 |
| | | AdvPC | 63.7 | 71.2 | 55.5 | 47.2 | 46.7 | 55.0 |
| | $\ell_2$ ($\epsilon = 1.25$) | PGD | 72.2 | 76.7 | 69.8 | 65.6 | 62.7 | 71.4 |
| | | AdvPC | 63.4 | 74.3 | 56.6 | 59.8 | 47.2 | 71.0 |

We also ablate the efficiency of PointDP against other purification methods.

Table 4: Ablation Study on Overhead Introduced by Adversarial Purification Methods.
| | DUP-Net | IF-Defense | PointDP |
|---|---|---|---|
| Time (s) | 1.33 | 2.60 | 0.097 |

Figure 4 shows the averaged evaluation results of point shifting, adding, and dropping attacks with PGD-styled adversaries over the selected models. The point shifting attack is much stronger than the point adding and dropping attacks and is, thus, more sensitive to the diffusion steps in PointDP. We find that the robust accuracy converges once the number of diffusion steps $n \geq 30$ (or equivalently $t \geq 0.15$ ). Therefore, we choose $t^* = 0.15$ in the main evaluation of our study. Since adversarial purification inevitably introduces overhead during model inference, we benchmark the computation cost of PointDP and other baselines using an RTX 3080 GPU and a batch size of 32 over 100 runs. Table 4 presents the results: PointDP achieves the lowest cost among existing SOTA methods, a $27\times$ speed-up over IF-Defense.

![](images/e4cea1d3f03108f1d1d75bb3287e0adb6cac6916f154c0d36a395550aff76044.jpg)
Figure 3: Comparison among SOTA Adversarial Purification Strategies (i.e., IF-Defense (Wu et al., 2020), LPC (Li et al., 2022), and PointDP). The results of IF-Defense and PointDP are averaged over six models on ModelNet40.

![](images/0dba49956e9ca557e0bc87a0964da683264e1ed9dcf39aa27d807e52be1a8990.jpg)
Figure 4: Ablation on Discrete Diffusion Steps in PointDP on ModelNet40.

# 5.3. Comparison with State-of-the-Art Defenses

Existing purification-based defenses against 3D adversarial point clouds mainly leverage C&W-styled attacks in their evaluation. C&W attacks utilize the method of Lagrange multipliers to find tractable adversarial examples while minimizing the magnitude of the perturbation. From the perspective of an adversary, such attacks are desirable due to their stealthiness, but this does not hold from a defensive view: defense methods should be evaluated against strong adaptive attacks (Carlini et al., 2019). DUP-Net (Zhou et al., 2019) is a pioneering study that uses statistical outlier removal
and an upsampler network for purification, but it was adaptively attacked by Sun et al. (2020b); we thus present the evaluation results of DUP-Net in Appendix A. IF-Defense and LPC are the SOTA adversarial purification methods for 3D point cloud models. We leverage PGD and AdvPC attacks, which assign constant adversarial budgets in the adversarial optimization stage, and follow the original setups of IF-Defense and LPC in our study. Such an evaluation is stronger than C&W attacks, though we note that these are not strictly adaptive attacks since the adversarial target is still the classifier itself. Similar to PointDP, IF-Defense can be prepended to any point cloud classifier, whereas LPC uses a specific backbone. Table 5 presents the detailed evaluation results of IF-Defense under various settings and attacks. We find that PointDP achieves much better robustness than IF-Defense, a $12.6\%$ improvement on average. However, IF-Defense achieves slightly higher clean accuracy $(4.9\%)$ because it leverages SOR to smooth the point cloud (Zhou et al., 2019). Such an operation, however, has been demonstrated to be vulnerable (Sun et al., 2020b), so specifically adaptive attacks would cause an even larger drop in robust accuracy for IF-Defense.

In addition, we have performed supplementary experiments on ScanObjectNN. The results, outlined in Table 6, underscore the effectiveness of PointDP. IF-Defense requires a pristine dataset for estimating the 3D occupancy field, which is infeasible with ScanObjectNN due to its real-world origin and the partial visibility caused by occlusion; we use the pretrained ConvONet for these experiments. LPC, on the other hand, transposes the point cloud into 2D space, disrupting the native point cloud structure and rendering it inadequate for ScanObjectNN by default. PointDP achieves reasonable robust accuracies of $65.7\%$ and $66.7\%$ under $\ell_{\infty}$ and $\ell_{2}$ norm PGD attacks, respectively.
In contrast, other methods fail to maintain any level of robustness when applied to the real-world ScanObjectNN.

Figure 3 shows the comparison between PointDP and existing methods. PointDP overall achieves the best performance among prior art, demonstrating $12.6\%$ and $40.3\%$ improvements over IF-Defense and LPC, respectively. We find that

Table 6: Robust Accuracy (%) of PGD-styled Attacks on PointDP with Baselines on ScanObjectNN.
| Budget | Defense | PCT | CurveNet | PointMLP |
| :-- | :-- | :-- | :-- | :-- |
| $\ell_{\infty}$, $\epsilon=0.05$ | None | 0.0 | 0.0 | 0.0 |
| | DUP-Net | 0.0 | 0.0 | 0.0 |
| | IF-Defense | 3.5 | 4.1 | 3.1 |
| | LPC | - | - | - |
| | PointDP | 63.7 | 64.3 | 69.2 |
| $\ell_{2}$, $\epsilon=1.25$ | None | 0.0 | 0.0 | 0.0 |
| | DUP-Net | 8.2 | 7.9 | 10.1 |
| | IF-Defense | 5.1 | 4.5 | 4.9 |
| | LPC | - | - | - |
| | PointDP | 64.0 | 65.9 | 70.1 |
even without adaptive attacks, adversaries with constant budgets already reduce robust accuracy by a significant margin. This suggests that IF-Defense and LPC fail to deliver strong robustness for 3D point cloud recognition models. Notably, LPC appeared in the proceedings of CVPR 2022 yet achieves trivial robustness, underscoring that a rigorous evaluation protocol is urgently needed in this community. The evaluation results on ScanObjectNN further highlight the limitations of existing methods and substantiate the superior effectiveness of PointDP.

# 5.4. Defense against Adaptive Threats

We have so far illustrated that state-of-the-art defenses can be easily broken by (adaptive) adversarial attacks while PointDP consistently achieves the best robustness. In this section, we evaluate PointDP against even stronger adaptive attacks to demonstrate the actual robustness it delivers. As mentioned in § 5.1, we leverage two types of adaptive attacks in our study; Table 7 presents their results. We also leverage black-box SPSA and Nattack to validate our results. We find that BPDA-PGD is the strongest adaptive attack, which aligns well with the previous study on 2D diffusion-driven purification (Nie et al., 2022). Even under strong adaptive attacks, PointDP still achieves much stronger robustness. Moreover, black-box attacks are much less effective. Although we acknowledge that PointDP still relies on gradient obfuscation, the extremely high randomness will hinder the

Table 7: Robust Accuracy (%) of Strong Adaptive Attacks on Our Plain PointDP on ModelNet40.
| Budget | Attack | PointNet | PointNet++ | DGCNN | PCT | CurveNet | PointMLP |
| :-- | :-- | :-- | :-- | :-- | :-- | :-- | :-- |
| $\ell_{\infty}$, $\epsilon=0.05$ | BPDA-PGD | 77.1 | 78.6 | 79.2 | 76.1 | 73.9 | 77.7 |
| | EOT-AutoAttack | 78.0 | 79.9 | 79.1 | 76.5 | 75.9 | 78.9 |
| | PGD-Latent | 80.8 | 80.7 | 82.9 | 82.5 | 80.8 | 79.9 |
| | AdvPC-Latent | 69.9 | 76.8 | 79.4 | 79.8 | 72.9 | 75.4 |
| | SPSA | 76.6 | 78.9 | 74.9 | 78.5 | 76.4 | 80.9 |
| | Nattack | 75.2 | 77.9 | 74.4 | 78.0 | 76.1 | 78.9 |
| | PA-Latent | 81.7 | 84.7 | 84.1 | 84.5 | 84.8 | 85.2 |
| $\ell_{2}$, $\epsilon=1.25$ | BPDA-PGD | 78.9 | 73.3 | 73.3 | 71.2 | 70.7 | 75.1 |
| | EOT-AutoAttack | 79.6 | 74.4 | 74.2 | 71.3 | 71.3 | 75.9 |
| | PGD-Latent | 85.1 | 86.6 | 82.0 | 85.3 | 86.7 | 86.8 |
| | AdvPC-Latent | 69.1 | 76.9 | 79.2 | 74.5 | 74.3 | 76.1 |
| | SPSA | 76.1 | 77.0 | 74.4 | 74.5 | 77.0 | 78.9 |
| | Nattack | 74.9 | 76.5 | 73.9 | 74.0 | 76.3 | 77.2 |
| $\ell_{0}$, $\epsilon=200$ | PD-Latent | 61.3 | 72.1 | 73.5 | 75.9 | 74.1 | 74.4 |
black-box adversaries from finding correct gradients. We further ablate the usage of $\mathcal{L}_{\mathrm{SupCon}}$: Table 8 shows that it brings an additional $\sim 1.2\%$ robustness under attacks on the latent feature. We also ablate the effectiveness of PointDP with larger attack budgets in Appendix A, where PointDP consistently achieves the strongest robustness. In addition, we employ attacks with greater $\ell_{\infty}$ norm distance to dissect the extra robustness provided by $\mathcal{L}_{\mathrm{SupCon}}$. The evaluation results, presented in Table 9, indicate that $\mathcal{L}_{\mathrm{SupCon}}$ becomes even more helpful in enhancing robustness as the attack budget increases.

Table 8: Robust Accuracy (%) of Strong Adaptive Attacks on PointDP with $\mathcal{L}_{\mathrm{SupCon}}$ on ModelNet40.
| Budget | Attack | PCT | CurveNet | PointMLP |
| :-- | :-- | :-- | :-- | :-- |
| $\ell_{\infty}$, $\epsilon=0.05$ | PGD-Latent | 83.9 | 81.8 | 81.0 |
| | AdvPC-Latent | 80.5 | 74.1 | 76.7 |
| | PA-Latent | 84.9 | 85.0 | 85.7 |
| $\ell_{2}$, $\epsilon=1.25$ | PGD-Latent | 86.3 | 87.6 | 87.7 |
| | AdvPC-Latent | 76.3 | 76.0 | 77.7 |
| $\ell_{0}$, $\epsilon=200$ | PD-Latent | 76.8 | 75.5 | 76.4 |
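For reference, the $\mathcal{L}_{\mathrm{SupCon}}$ term ablated in Table 8 is the supervised contrastive loss of Khosla et al. (2020). A minimal numpy sketch over $\ell_2$-normalized features (the batch, feature size, and temperature here are illustrative, not our training configuration):

```python
import numpy as np

def supcon_loss(z, labels, tau=0.1):
    """Supervised contrastive loss (Khosla et al., 2020) over one batch.
    z: (N, d) l2-normalized features; labels: (N,) integer class labels."""
    sim = z @ z.T / tau                              # scaled cosine similarities
    np.fill_diagonal(sim, -np.inf)                   # exclude self-contrast
    log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    pos = labels[:, None] == labels[None, :]         # positives: same label...
    np.fill_diagonal(pos, False)                     # ...excluding the anchor
    n_pos = pos.sum(axis=1)
    per_anchor = -np.where(pos, log_prob, 0.0).sum(axis=1) / np.maximum(n_pos, 1)
    return per_anchor[n_pos > 0].mean()              # anchors with >= 1 positive

rng = np.random.default_rng(0)
z = rng.normal(size=(8, 16))
z /= np.linalg.norm(z, axis=1, keepdims=True)        # project onto unit sphere
labels = np.array([0, 0, 1, 1, 2, 2, 3, 3])
loss = supcon_loss(z, labels)
print(f"SupCon loss: {loss:.4f}")
```

Minimizing this term pulls same-class latent features together, which is why it helps most under attacks on the latent feature.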
Further experiments, including larger attack budgets and additional adversaries, are conducted and detailed in Appendix A. Across different setups, PointDP consistently achieves the highest robust accuracy.

# 6. A Rigorous Robustness Evaluation Protocol

Our evaluation unveils the concerning fact that existing defenses in the 3D domain can be easily broken by strong attacks. We therefore follow Carlini et al. (2019) to set up a rigorous evaluation protocol to guide future robustness assessment in the 3D point cloud community:

- A defense study should strictly follow formal distance metrics with quantified budgets. FGSM and C&W attacks are not designed for robustness evaluation; as mentioned in § 5.3, those attacks were proposed to minimize perturbations.

Table 9: Robust Accuracy (%) of PGD-styled Attacks on PointDP with $\mathcal{L}_{\mathrm{SupCon}}$ on ModelNet40.
| Budget | Method | PCT | CurveNet | PointMLP |
| :-- | :-- | :-- | :-- | :-- |
| $\ell_{\infty}$, $\epsilon = 0.075$ | PointDP | 65.1 | 65.2 | 67.8 |
| | + $\mathcal{L}_{\mathrm{SupCon}}$ | 67.8 | 67.2 | 70.9 |
| $\ell_{\infty}$, $\epsilon = 0.1$ | PointDP | 53.2 | 53.2 | 57.4 |
| | + $\mathcal{L}_{\mathrm{SupCon}}$ | 57.5 | 56.5 | 60.3 |
| $\ell_{\infty}$, $\epsilon = 0.125$ | PointDP | 40.5 | 40.0 | 43.7 |
| | + $\mathcal{L}_{\mathrm{SupCon}}$ | 45.0 | 44.4 | 48.3 |
Such attacks are not suitable for defense evaluation. We strongly suggest that future research use PGD-styled adversaries to test real robustness under the claimed budget.

- It is crucial to perform adaptive attacks on the proposed defense and verify that the adaptive attacks are effective. BPDA and EOT are good techniques for formulating adaptive attacks, which should usually reflect the lower-bound robustness of a defense.
- Evaluation against black-box attacks is also necessary. As shown in § 3, the results of black-box attacks are a good indicator of severe gradient obfuscation. If black-box attacks yield lower robust accuracy than white-box attacks, the defense is likely considerably weaker than claimed.
- It is also recommended to perform point-cloud-specific attacks such as point adding and dropping to demonstrate the generalization of the proposed defense. Other attacks include GeoA3 (Wen et al., 2020), AOF (Liu et al., 2022), and SS (Zhang et al., 2022a). We acknowledge that PointDP may be susceptible to transformation-based attacks like SS, since SS lies outside our assumed $\ell_p$ norm threat model, as discussed in Appendix B.

# 7. Conclusion

In this paper, we propose PointDP, an adversarial purification method against attacks on 3D point cloud recognition. Our study exposes the vulnerability of adversarial training and current purification techniques under strong attacks. We perform extensive, rigorous evaluations to validate that PointDP outperforms existing SOTA methods by a significant margin (12.6%-40.3%) in robust accuracy while achieving a $14\text{-}27\times$ speed-up in purification.

# Acknowledgment

We thank our area chairs and anonymous reviewers for their insightful comments and feedback.
This work was partially supported by NSF under CNS-1930041, the National AI Institute for Edge Computing Leveraging Next Generation Wireless Networks, Grant # 2112562, DHS No. 17STQAC00001-06-00, and a grant from Mcity. + +# References + +Athalye, A., Carlini, N., and Wagner, D. Obfuscated gradients give a false sense of security: Circumventing defenses to adversarial examples. In International Conference on Machine Learning, pp. 274-283. PMLR, 2018a. +Athalye, A., Engstrom, L., Ilyas, A., and Kwok, K. Synthesizing robust adversarial examples. In Dy, J. and Krause, A. (eds.), Proceedings of the 35th International Conference on Machine Learning, volume 80 of Proceedings of Machine Learning Research, pp. 284-293. PMLR, 10-15 Jul 2018b. URL https://proceedings.mlr.press/v80/athalye18b.html. +Bafna, M., Murtagh, J., and Vyas, N. Thwarting adversarial examples: An $\ell_0$-robust sparse Fourier transform. arXiv preprint arXiv:1812.05013, 2018. +Cao, Y., Xiao, C., Cyr, B., Zhou, Y., Park, W., Rampazzi, S., Chen, Q. A., Fu, K., and Mao, Z. M. Adversarial sensor attack on lidar-based perception in autonomous driving. In Proceedings of the 2019 ACM SIGSAC Conference on Computer and Communications Security, pp. 2267-2281, 2019. +Carlini, N. and Wagner, D. Towards evaluating the robustness of neural networks. In 2017 IEEE Symposium on Security and Privacy (SP), pp. 39-57. IEEE, 2017. +Carlini, N., Athalye, A., Papernot, N., Brendel, W., Rauber, J., Tsipras, D., Goodfellow, I., Madry, A., and Kurakin, A. On evaluating adversarial robustness. arXiv preprint arXiv:1902.06705, 2019. +Carlini, N., Tramer, F., Kolter, J. Z., et al. (Certified!!) adversarial robustness for free! arXiv preprint arXiv:2206.10550, 2022. +Choy, C., Gwak, J., and Savarese, S. 4d spatio-temporal convnets: Minkowski convolutional neural networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 3075-3084, 2019. +Croce, F. and Hein, M.
Reliable evaluation of adversarial robustness with an ensemble of diverse parameter-free attacks. In International Conference on Machine Learning, pp. 2206-2216. PMLR, 2020. +Dhariwal, P. and Nichol, A. Diffusion models beat gans on image synthesis. Advances in Neural Information Processing Systems, 34:8780-8794, 2021. +Dhillon, G. S., Azizzadenesheli, K., Lipton, Z. C., Bernstein, J., Kossaifi, J., Khanna, A., and Anandkumar, A. Stochastic activation pruning for robust adversarial defense. arXiv preprint arXiv:1803.01442, 2018. +Dong, X., Chen, D., Zhou, H., Hua, G., Zhang, W., and Yu, N. Self-robust 3d point recognition via gather-vector guidance. In 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 11513–11521. IEEE, 2020. +Dosovitskiy, A., Beyer, L., Kolesnikov, A., Weissenborn, D., Zhai, X., Unterthiner, T., Dehghani, M., Minderer, M., Heigold, G., Gelly, S., et al. An image is worth 16x16 words: Transformers for image recognition at scale. arXiv preprint arXiv:2010.11929, 2020. +Eykholt, K., Evtimov, I., Fernandes, E., Li, B., Rahmati, A., Xiao, C., Prakash, A., Kohno, T., and Song, D. Robust physical-world attacks on deep learning visual classification. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1625-1634, 2018. +Goldberger, J., Gordon, S., Greenspan, H., et al. An efficient image similarity measure based on approximations of KL-divergence between two Gaussian mixtures. In ICCV, volume 3, pp. 487-493, 2003. +Goodfellow, I. J., Shlens, J., and Szegedy, C. Explaining and harnessing adversarial examples. arXiv preprint arXiv:1412.6572, 2014. +Graham, B. and van der Maaten, L. Submanifold sparse convolutional networks. arXiv preprint arXiv:1706.01307, 2017. +Guo, M.-H., Cai, J.-X., Liu, Z.-N., Mu, T.-J., Martin, R. R., and Hu, S.-M. Pct: Point cloud transformer. arXiv preprint arXiv:2012.09688, 2020. +Hamdi, A., Rojas, S., Thabet, A., and Ghanem, B.
Advpc: Transferable adversarial perturbations on 3d point clouds. In European Conference on Computer Vision, pp. 241-257. Springer, 2020. +He, K., Zhang, X., Ren, S., and Sun, J. Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 770-778, 2016. +Hu, S., Chen, Q. A., Sun, J., Feng, Y., Mao, Z. M., and Liu, H. X. Automated Discovery of Denial-of-Service Vulnerabilities in Connected Vehicle Protocols. In Proceedings of the 29th USENIX Security Symposium (USENIX Security '21), 2021. +Huang, L., Gao, C., Zhou, Y., Xie, C., Yuille, A., Zou, C., and Liu, N. Universal physical camouflage attacks on object detectors, 2019. +Huang, L., Gao, C., Zhou, Y., Xie, C., Yuille, A. L., Zou, C., and Liu, N. Universal physical camouflage attacks on object detectors. In Proceedings of the IEEE/CVF + +Conference on Computer Vision and Pattern Recognition, pp. 720-729, 2020. +Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., and Krishnan, D. Supervised contrastive learning. Advances in neural information processing systems, 33:18661-18673, 2020. +Li, K., Zhang, Z., Zhong, C., and Wang, G. Robust structured declarative classifiers for 3d point clouds: Defending adversarial attacks with implicit gradients. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 15294-15304, 2022. +Li, Y., Bu, R., Sun, M., Wu, W., Di, X., and Chen, B. Pointcnn: Convolution on x-transformed points. Advances in neural information processing systems, 31:820-830, 2018. +Li, Y., Li, L., Wang, L., Zhang, T., and Gong, B. Nattack: Learning the distributions of adversarial examples for an improved black-box attack on deep neural networks. In International Conference on Machine Learning, pp. 3866-3876. PMLR, 2019. +Liu, B., Zhang, J., and Zhu, J. Boosting 3d adversarial attacks with attacking on frequency. IEEE Access, 10: 50974-50984, 2022. 
+Liu, D., Yu, R., and Su, H. Extending adversarial attacks and defenses to deep 3d point cloud classifiers. In 2019 IEEE International Conference on Image Processing (ICIP), pp. 2279-2283. IEEE, 2019a. +Liu, Y., Fan, B., Xiang, S., and Pan, C. Relation-shape convolutional neural network for point cloud analysis. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 8895-8904, 2019b. +Luo, S. and Hu, W. Diffusion probabilistic models for 3d point cloud generation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 2837-2845, 2021. +Ma, X., Qin, C., You, H., Ran, H., and Fu, Y. Rethinking network design and local geometry in point cloud: A simple residual mlp framework. arXiv preprint arXiv:2202.07123, 2022. +Madry, A., Makelov, A., Schmidt, L., Tsipras, D., and Vladu, A. Towards deep learning models resistant to adversarial attacks. arXiv preprint arXiv:1706.06083, 2017. +Maturana, D. and Scherer, S. Voxnet: A 3d convolutional neural network for real-time object recognition. In 2015 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 922-928. IEEE, 2015. + +Meng, D. and Chen, H. Magnet: a two-pronged defense against adversarial examples. In Proceedings of the 2017 ACM SIGSAC conference on computer and communications security, pp. 135-147, 2017. +Nie, W., Guo, B., Huang, Y., Xiao, C., Vahdat, A., and Anandkumar, A. Diffusion models for adversarial purification. arXiv preprint arXiv:2205.07460, 2022. +Papernot, N. and McDaniel, P. Extending defensive distillation. arXiv preprint arXiv:1705.05264, 2017. +Papernot, N., McDaniel, P., Wu, X., Jha, S., and Swami, A. Distillation as a defense to adversarial perturbations against deep neural networks. In 2016 IEEE symposium on security and privacy (SP), pp. 582-597. IEEE, 2016. +Qi, C. R., Su, H., Mo, K., and Guibas, L. J. Pointnet: Deep learning on point sets for 3d classification and segmentation. 
In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 652-660, 2017a. +Qi, C. R., Yi, L., Su, H., and Guibas, L. J. Pointnet++: Deep hierarchical feature learning on point sets in a metric space. arXiv preprint arXiv:1706.02413, 2017b. +Riegler, G., Ulusoy, A. O., and Geiger, A. Octnet: Learning deep 3d representations at high resolutions. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2017. +Shafahi, A., Najibi, M., Ghiasi, A., Xu, Z., Dickerson, J., Studer, C., Davis, L. S., Taylor, G., and Goldstein, T. Adversarial training for free! arXiv preprint arXiv:1904.12843, 2019. +Shi, S., Wang, X., and Li, H. Pointrcnn: 3d object proposal generation and detection from point cloud. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 770-779, 2019. +Shi, S., Guo, C., Jiang, L., Wang, Z., Shi, J., Wang, X., and Li, H. Pv-rcnn: Point-voxel feature set abstraction for 3d object detection. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 10529-10538, 2020. +Song, S. and Xiao, J. Deep Sliding Shapes for amodal 3D object detection in RGB-D images. In CVPR, 2016. +Song, Y., Sohl-Dickstein, J., Kingma, D. P., Kumar, A., Ermon, S., and Poole, B. Score-based generative modeling through stochastic differential equations. In International Conference on Learning Representations, 2021. +Sun, J., Cao, Y., Chen, Q. A., and Mao, Z. M. Towards robust lidar-based perception in autonomous driving: General black-box adversarial sensor attack and countermeasures. In 29th USENIX Security Symposium (USENIX Security 20), pp. 877-894. USENIX Association, August 2020a. ISBN 978-1-939133-17-5. URL https://www.usenix.org/conference/usenixsecurity20/presentation/sun. +Sun, J., Koenig, K., Cao, Y., Chen, Q. A., and Mao, Z. M. On the adversarial robustness of 3d point cloud classification, 2020b.
+Sun, J., Cao, Y., Choy, C., Yu, Z., Xiao, C., Anandkumar, A., and Mao, Z. M. Improving adversarial robustness in 3d point cloud classification via self-supervisions. In International Conference on Machine Learning Workshop (ICMLW), volume 1, 2021a. +Sun, J., Cao, Y., Choy, C. B., Yu, Z., Anandkumar, A., Mao, Z. M., and Xiao, C. Adversarially robust 3d point cloud recognition using self-supervisions. Advances in Neural Information Processing Systems, 34:15498-15512, 2021b. +Sun, J., Mehra, A., Kailkhura, B., Chen, P.-Y., Hendrycks, D., Hamm, J., and Mao, Z. M. Certified adversarial defenses meet out-of-distribution corruptions: Benchmarking robustness and simple baselines. arXiv preprint arXiv:2112.00659, 2021c. +Sun, J., Zhang, Q., Kailkhura, B., Yu, Z., Xiao, C., and Mao, Z. M. Benchmarking robustness of 3d point cloud recognition against common corruptions. arXiv preprint arXiv:2201.12296, 2022. +Tchapmi, L. P., Choy, C. B., Armeni, I., Gwak, J., and Savarese, S. Segcloud: Semantic segmentation of 3d point clouds. In International Conference on 3D Vision (3DV), 2017. +Thomas, H., Qi, C. R., Deschaud, J.-E., Marcotegui, B., Goulette, F., and Guibas, L. J. Kpconv: Flexible and deformable convolution for point clouds. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pp. 6411-6420, 2019. +Tramer, F., Carlini, N., Brendel, W., and Madry, A. On adaptive attacks to adversarial example defenses. arXiv preprint arXiv:2002.08347, 2020. +Tsai, T., Yang, K., Ho, T.-Y., and Jin, Y. Robust adversarial objects against deep learning models. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 34, pp. 954-962, 2020. +Uesato, J., O'Donoghue, B., Kohli, P., and Oord, A. Adversarial risk and the dangers of evaluating against weak attacks. In International Conference on Machine Learning, pp. 5025-5034. PMLR, 2018. + +Uy, M. A., Pham, Q.-H., Hua, B.-S., Nguyen, T., and Yeung, S.-K.
Revisiting point cloud classification: A new benchmark dataset and classification model on real-world data. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pp. 1588-1597, 2019. +Wang, D. Z. and Posner, I. Voting for voting in online point cloud object detection. In Robotics: Science and Systems, volume 1, pp. 10-15607. Rome, Italy, 2015. +Wang, P.-S., Liu, Y., Guo, Y.-X., Sun, C.-Y., and Tong, X. O-cnn: Octree-based convolutional neural networks for 3d shape analysis. ACM Transactions on Graphics (TOG), 36(4):1-11, 2017. +Wang, Y., Sun, Y., Liu, Z., Sarma, S. E., Bronstein, M. M., and Solomon, J. M. Dynamic graph cnn for learning on point clouds. Acm Transactions On Graphics (tog), 38 (5):1-12, 2019. +Wen, Y., Lin, J., Chen, K., and Jia, K. Geometry-aware generation of adversarial and cooperative point clouds. 2019. +Wen, Y., Lin, J., Chen, K., Chen, C. P., and Jia, K. Geometry-aware generation of adversarial point clouds. IEEE Transactions on Pattern Analysis and Machine Intelligence, 44 (6):2984-2999, 2020. +Wicker, M. and Kwiatkowska, M. Robustness of 3d deep learning in an adversarial setting. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 11767-11775, 2019. +Wong, E., Rice, L., and Kolter, J. Z. Fast is better than free: Revisiting adversarial training. arXiv preprint arXiv:2001.03994, 2020. +Wu, W., Qi, Z., and Fuxin, L. Pointconv: Deep convolutional networks on 3d point clouds. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 9621-9630, 2019. +Wu, Z., Song, S., Khosla, A., Yu, F., Zhang, L., Tang, X., and Xiao, J. 3d shapenets: A deep representation for volumetric shapes. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 1912-1920, 2015. +Wu, Z., Duan, Y., Wang, H., Fan, Q., and Guibas, L. J. If-defense: 3d adversarial point cloud defense via implicit function based restoration. 
arXiv preprint arXiv:2010.05272, 2020. +Xiang, C., Qi, C. R., and Li, B. Generating 3d adversarial point clouds. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 9136-9144, 2019. + +Xiang, T., Zhang, C., Song, Y., Yu, J., and Cai, W. Walk in the cloud: Learning curves for point clouds shape analysis. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pp. 915-924, 2021. +Xiao, C., Deng, R., Li, B., Yu, F., Liu, M., and Song, D. Characterizing adversarial examples based on spatial consistency information for semantic segmentation. In Proceedings of the European Conference on Computer Vision (ECCV), pp. 217-234, 2018a. +Xiao, C., Li, B., Zhu, J.-Y., He, W., Liu, M., and Song, D. Generating adversarial examples with adversarial networks. arXiv preprint arXiv:1801.02610, 2018b. +Xiao, C., Deng, R., Li, B., Lee, T., Edwards, B., Yi, J., Song, D., Liu, M., and Molloy, I. Advit: Adversarial frames identifier based on temporal consistency in videos. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pp. 3968-3977, 2019. +Xie, C. and Yuille, A. Intriguing properties of adversarial training at scale. In International Conference on Learning Representations, 2020. URL https://openreview.net/forum?id=HyxJhCEFDS. +Xie, C., Wang, J., Zhang, Z., Zhou, Y., Xie, L., and Yuille, A. Adversarial examples for semantic segmentation and object detection. In International Conference on Computer Vision. IEEE, 2017. +Xie, C., Tan, M., Gong, B., Yuille, A., and Le, Q. V. Smooth adversarial training. arXiv preprint arXiv:2006.14536, 2020a. +Xie, Y., Dai, H., Chen, M., Dai, B., Zhao, T., Zha, H., Wei, W., and Pfister, T. Differentiable top-k with optimal transport. Advances in Neural Information Processing Systems, 33:20520-20531, 2020b. +Xu, W., Evans, D., and Qi, Y. Feature squeezing: Detecting adversarial examples in deep neural networks. arXiv preprint arXiv:1704.01155, 2017. 
+Yang, C., Kortylewski, A., Xie, C., Cao, Y., and Yuille, A. Patchattack: A black-box texture-based attack with reinforcement learning. In European Conference on Computer Vision, pp. 681-698. Springer, 2020. +Yang, Y., Zhang, G., Katabi, D., and Xu, Z. Me-net: Towards effective adversarial robustness with matrix estimation. arXiv preprint arXiv:1905.11971, 2019. +Yin, T., Zhou, X., and Krahenbuhl, P. Center-based 3d object detection and tracking. CVPR, 2021. +Zhang, H., Chen, H., Xiao, C., Gowal, S., Stanforth, R., Li, B., Boning, D., and Hsieh, C.-J. Towards stable and + +efficient training of verifiably robust neural networks. arXiv preprint arXiv:1906.06316, 2019a. +Zhang, H., Yu, Y., Jiao, J., Xing, E., El Ghaoui, L., and Jordan, M. Theoretically principled trade-off between robustness and accuracy. In International Conference on Machine Learning, pp. 7472-7482. PMLR, 2019b. +Zhang, H., Chen, H., Xiao, C., Li, B., Boning, D. S., and Hsieh, C.-J. Robust deep reinforcement learning against adversarial perturbations on observations. 2020. +Zhang, J., Dong, Y., Zhu, J., Zhu, J., Kuang, M., and Yuan, X. Improving transferability of 3d adversarial attacks with scale and shear transformations. arXiv preprint arXiv:2211.01093, 2022a. +Zhang, K., Zhou, H., Zhang, J., Huang, Q., Zhang, W., and Yu, N. Ada3diff: Defending against 3d adversarial point clouds via adaptive diffusion. arXiv preprint arXiv:2211.16247, 2022b. +Zhang, Q., Hu, S., Sun, J., Chen, Q. A., and Mao, Z. M. On adversarial robustness of trajectory prediction for autonomous vehicles. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 15159-15168, June 2022c. +Zhang, X., Zhang, A., Sun, J., Zhu, X., Guo, Y. E., Qian, F., and Mao, Z. M. Emp: Edge-assisted multi-vehicle perception. In Proceedings of the 27th Annual International Conference on Mobile Computing and Networking, pp. 545-558, 2021. +Zhao, H., Jiang, L., Jia, J., Torr, P. H., and Koltun, V. 
Point transformer. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pp. 16259-16268, 2021. +Zheng, T., Chen, C., Yuan, J., Li, B., and Ren, K. Pointcloud saliency maps. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pp. 1598-1606, 2019. +Zhou, H., Chen, K., Zhang, W., Fang, H., Zhou, W., and Yu, N. Dup-net: Denoiser and upsampler network for 3d adversarial point clouds defense. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pp. 1961-1970, 2019. +Zhou, H., Chen, D., Liao, J., Chen, K., Dong, X., Liu, K., Zhang, W., Hua, G., and Yu, N. Lg-gan: Label guided adversarial network for flexible targeted attack of point cloud based deep networks. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 10356-10365, 2020. + +# A. Evaluation Details + +As mentioned in § 5.1, the robust accuracies of the unprotected base models are mostly $0\%$ . Table 10 presents the detailed results. + +Table 10: Robust Accuracy of Adversarial Attacks on Base Models(%) + +
| Budget | Attack | PointNet | PointNet++ | DGCNN | PCT | CurveNet | PointMLP |
| :-- | :-- | :-- | :-- | :-- | :-- | :-- | :-- |
| None | | 90.1 | 92.8 | 92.5 | 92.8 | 93.2 | 93.5 |
| $\ell_{\infty}$, $\epsilon = 0.05$ | C&W | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| | PGD | 0.4 | 0.5 | 0.2 | 0.4 | 0.8 | 0.3 |
| | AdvPC | 0.4 | 0.3 | 0.0 | 0.2 | 0.6 | 0.3 |
| | PA | 44.1 | 19.9 | 35.1 | 20.8 | 48.9 | 7.2 |
| $\ell_{2}$, $\epsilon = 1.25$ | C&W | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| | PGD | 0.1 | 0.3 | 64.5 | 0.5 | 0.5 | 0.5 |
| | AdvPC | 0.0 | 0.5 | 62.7 | 0.4 | 0.3 | 0.5 |
| $\ell_{0}$, $\epsilon = 200$ | PD | 33.3 | 69.8 | 64.5 | 53.0 | 72.6 | 71.1 |
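The PGD rows in Table 10 follow standard projected gradient ascent under an $\ell_\infty$ budget. A minimal numpy sketch on a toy analytic gradient (the gradient oracle, step size, and iteration count here are illustrative stand-ins, not our exact attack configuration):

```python
import numpy as np

def pgd_linf(x, grad_fn, eps=0.05, alpha=0.01, steps=10, seed=0):
    """l_inf PGD: take sign-gradient ascent steps on the loss and project the
    perturbation back into the eps-ball around the clean point cloud x."""
    rng = np.random.default_rng(seed)
    x_adv = x + rng.uniform(-eps, eps, size=x.shape)   # random start in the ball
    for _ in range(steps):
        x_adv = x_adv + alpha * np.sign(grad_fn(x_adv))
        x_adv = x + np.clip(x_adv - x, -eps, eps)      # l_inf projection
    return x_adv

# toy stand-in for d(loss)/d(points): a constant gradient field
grad_fn = lambda pts: np.ones_like(pts)
x = np.zeros((1024, 3))                                # one 1024-point cloud
x_adv = pgd_linf(x, grad_fn, eps=0.05)
print(float(np.abs(x_adv - x).max()))                  # stays within the eps budget
```

The projection step is what makes the claimed budget a hard constraint, unlike C&W-styled attacks that only penalize perturbation size.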
We also include Wicker & Kwiatkowska (2019) in our evaluation. They proposed the ISO attack, which iteratively drops the most salient points. This setting is very similar to our point-dropping (PD) adversary evaluated in § 5.2; the difference is that ISO leverages a heuristic to determine critical points, whereas PD selects them using the gradient back-propagated to each point. ISO only works for PointNet because i) both Wicker & Kwiatkowska (2019) and PointNet are among the earliest explorations in the area of 3D point cloud recognition, and ii) PointNet utilizes global max pooling, so only the critical points affect the prediction results. We evaluate ISO on PointNet with an attack budget of 200 points; the results are shown in Table 11.

We observe that ISO is a less potent attack than PD, as it inherently restricts its attack capability. While this may be advantageous for an attack paper, it fails to showcase the worst-case robustness of a defense proposal.

We also evaluate DUP-Net together with IF-Defense and PointDP under $\ell_{\infty}$ norm PGD attacks with different attack budgets. As Table 12 shows, DUP-Net is vulnerable to such attacks due to the sensitivity of its upsampler network to $\ell_{\infty}$ norm noise (Sun et al., 2020b). The robust accuracy of LPC is $27.8\%$ and $19.1\%$ for $\epsilon = 0.075$ and $\epsilon = 0.1$, respectively. Even under these extremely large distortions, PointDP achieves the strongest robustness, outperforming existing SOTA by an extremely large margin. Comparable improvements are observed under PGD attacks with larger $\ell_{2}$ norms. We limit our selection to three models due to time constraints.

# B. Discussion

Adversarial robustness has been well established in 2D vision tasks, where Carlini et al.
(2019) and many other researchers have devoted significant effort to setting up a rigorous evaluation protocol. In this study, we emphasize that this evaluation protocol should be strictly followed in 3D point cloud robustness studies as well. Counter-intuitively, we have demonstrated that standard adversarial training (AT) is not a good candidate for delivering robustness against strong black-box adversaries, because gradient obfuscation in 3D point cloud architectures hinders the inner maximization stage from making real progress in AT. We propose PointDP as an adversarial purification strategy to mitigate the robustness loss in the 3D space. We want to clarify that almost all purification methods (including PointDP) still depend on gradient obfuscation to mislead adaptive attackers. However, we argue that proper usage of gradient obfuscation can still serve as a good defense, as long as the obfuscation is sophisticated enough. The multi-step purification in diffusion models adds an extremely high level of randomness that EOT (Athalye et al., 2018b) and BPDA (Athalye et al., 2018a) attacks can hardly model. Therefore, we believe our extensive evaluation reveals the actual robustness of PointDP. Our evaluation also unveils the concerning fact that existing defenses in the 3D domain can be easily broken by strong attacks. We therefore hope our evaluation protocol sets a standard for robustness assessment in this community, i.e., a defense study should strictly follow a formal distance metric and leverage strong attacks, including PGD, black-box, and adaptive attacks, to evaluate its actual

Table 11: Robust Accuracy $(\%)$ of Different Purification Methods under the ISO Attack.
| Attack | IF-Defense | PointDP |
| :-- | :-- | :-- |
| ISO | 67.3 | 70.1 |
| PD | 66.1 | 68.9 |
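The PD adversary compared with ISO in Table 11 selects points by back-propagated gradient saliency. A toy numpy sketch of the selection step (the gradients below are random stand-ins for a real model's per-point gradients; the budget of 200 matches our setting):

```python
import numpy as np

def drop_critical_points(points, point_grads, budget=200):
    """Drop the `budget` points with the largest back-propagated gradient
    magnitude (per-point l2 norm), keeping the least-salient points."""
    saliency = np.linalg.norm(point_grads, axis=1)   # (N,) per-point saliency
    keep = np.sort(np.argsort(saliency)[:-budget])   # indices of retained points
    return points[keep]

rng = np.random.default_rng(0)
pts = rng.normal(size=(1024, 3))
grads = rng.normal(size=(1024, 3))    # stand-in for d(loss)/d(point)
pruned = drop_critical_points(pts, grads, budget=200)
print(pruned.shape)                   # → (824, 3)
```

ISO differs only in how the saliency is computed (heuristically rather than from gradients), which is what limits its attack capability.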
Table 12: Robust Accuracy (%) of Adversarial Attacks on Different Purification Methods.
| Budget | Defense | PointNet | PointNet++ | DGCNN | PCT | CurveNet | PointMLP |
| :-- | :-- | :-- | :-- | :-- | :-- | :-- | :-- |
| $\ell_{\infty}$, $\epsilon=0.05$ | DUP-Net | 0.0 | 1.3 | 0.9 | 0.9 | 0.6 | 1.0 |
| | IF-Defense | 66.4 | 73.2 | 52.9 | 46.8 | 45.3 | 55.7 |
| | PointDP | 80.8 | 80.7 | 82.9 | 82.5 | 80.8 | 79.9 |
| $\ell_{\infty}$, $\epsilon=0.075$ | DUP-Net | 0.5 | 0.3 | 0.0 | 0.2 | 0.2 | 0.6 |
| | IF-Defense | 60.7 | 67.3 | 47.2 | 40.9 | 39.8 | 50.9 |
| | PointDP | 73.9 | 73.6 | 74.2 | 70.2 | 67.9 | 72.5 |
| $\ell_{\infty}$, $\epsilon=0.1$ | DUP-Net | 0.0 | 0.0 | 0.0 | 0.2 | 0.1 | 0.3 |
| | IF-Defense | 53.9 | 57.1 | 42.0 | 35.1 | 33.3 | 44.7 |
| | PointDP | 67.3 | 62.4 | 64.2 | 59.2 | 58.3 | 63.1 |
| $\ell_{2}$, $\epsilon=2.0$ | DUP-Net | - | - | - | 40.1 | 39.8 | 44.7 |
| | IF-Defense | - | - | - | 50.9 | 51.4 | 56.3 |
| | PointDP | - | - | - | 61.5 | 61.1 | 65.2 |
| $\ell_{2}$, $\epsilon=2.5$ | DUP-Net | - | - | - | 24.6 | 24.3 | 29.5 |
| | IF-Defense | - | - | - | 39.2 | 38.9 | 47.0 |
| | PointDP | - | - | - | 46.9 | 44.8 | 53.1 |
+ +robustness. We notice a concurrent work (Zhang et al., 2022b) that primarily focuses on defending against 3D adversarial attacks using diffusion models under Chamfer distance. In contrast, our study proposes to address the formal $\ell_p$ norm-based adversarial robustness. We believe these two studies are complementary to each other. + +Limitation. Mitigation solutions to adversarial attacks are critical and essential for modern machine learning systems. Given that the 3D point cloud is heavily adopted in safety-critical applications, we believe our study is valuable in demonstrating the vulnerabilities of existing SOTA defenses. On the other hand, diffusion models need multiple steps in the reverse process to recover the point cloud and hinder adaptive attacks, which will incur additional computational overhead, although PointDP has demonstrated to achieve the lowest cost. PointDP also limits itself to empirical robustness without theoretical guarantees. As previously stated, PointDP is currently designed to offer robustness against adversaries based on $\ell_p$ norms. Developing a more general defense mechanism is left for challenging future research. 
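As an aside, the sample-complexity intuition behind the EOT discussion above can be sketched with a toy stochastic defense. This is our own construction: the quadratic loss and Gaussian noise model are illustrative assumptions, not PointDP's diffusion purifier; the point is only that estimating the expected gradient of a highly randomized defense requires averaging many draws.

```python
import numpy as np

# Toy model: a randomized defense makes the attacker's per-query gradient a
# noisy observation of the true expected gradient. EOT (Athalye et al., 2018b)
# averages gradients over random draws; the more stochastic the defense (e.g.,
# multi-step diffusion purification), the more draws the attacker needs.
rng = np.random.default_rng(0)

def randomized_defense_grad(x, noise_scale, rng):
    """Stand-in for grad(loss(f(x, r))) at one random draw r (our toy model)."""
    true_grad = 2.0 * x  # gradient of the toy quadratic loss ||x||^2
    return true_grad + rng.normal(scale=noise_scale, size=x.shape)

def eot_gradient(x, noise_scale, n_samples, rng):
    """EOT estimate: average per-draw gradients to approximate E_r[grad]."""
    grads = [randomized_defense_grad(x, noise_scale, rng) for _ in range(n_samples)]
    return np.mean(grads, axis=0)

x = np.ones(3)
err_few = np.linalg.norm(eot_gradient(x, 5.0, 4, rng) - 2.0 * x)
err_many = np.linalg.norm(eot_gradient(x, 5.0, 4096, rng) - 2.0 * x)
print(f"EOT gradient error: {err_few:.2f} (4 samples) vs {err_many:.2f} (4096 samples)")
```

With high per-draw noise, a small EOT sample budget leaves a large gradient-estimation error, which is one way to read the claim that sufficiently sophisticated randomness is hard for EOT/BPDA attackers to model.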
# A Critical View of Vision-Based Long-Term Dynamics Prediction Under Environment Misalignment

Hanchen Xie $^{12}$ Jiageng Zhu $^{13}$ Mahyar Khayatkhoei $^{1}$ Jiazhi Li $^{13}$ Mohamed E.
Hussein $^{14}$ Wael AbdAlmageed $^{123}$

# Abstract

Dynamics prediction, which is the problem of predicting future states of scene objects based on current and prior states, is drawing increasing attention as an instance of learning physics. To solve this problem, the Region Proposal Convolutional Interaction Network (RPCIN), a vision-based model, was proposed and achieved state-of-the-art performance in long-term prediction. RPCIN only takes raw images and simple object descriptions, such as the bounding box and segmentation mask of each object, as input. However, despite its success, the model's capability can be compromised under conditions of environment misalignment. In this paper, we investigate two challenging conditions for environment misalignment, Cross-Domain and Cross-Context, by proposing four datasets designed for these challenges: SimB-Border, SimB-Split, BlenB-Border, and BlenB-Split. The datasets cover two domains and two contexts. Using RPCIN as a probe, experiments conducted on combinations of the proposed datasets reveal potential weaknesses of vision-based long-term dynamics prediction models. Furthermore, we propose a promising direction to mitigate the Cross-Domain challenge and provide concrete evidence supporting such a direction, which dramatically alleviates the challenge on the proposed datasets.

$^{1}$ USC Information Sciences Institute, Marina del Rey, USA $^{2}$ USC Thomas Lord Department of Computer Science, Los Angeles, USA $^{3}$ USC Ming Hsieh Department of Electrical and Computer Engineering, Los Angeles, USA $^{4}$ Alexandria University, Alexandria, Egypt. Correspondence to: Hanchen Xie .

Proceedings of the $40^{th}$ International Conference on Machine Learning, Honolulu, Hawaii, USA. PMLR 202, 2023. Copyright 2023 by the author(s).

![](images/8bc566f154c1df5ccb36f002cafd12bc95dedec6054ac8a599670ffaf8e8d751.jpg)
Figure 1.
Illustration of the two environment misalignment challenges, Cross-Domain and Cross-Context, and samples of the four proposed datasets: SimB-Border, SimB-Split, BlenB-Border, and BlenB-Split. We adjusted the image resolutions and display only the ground-truth paths of the ball, with displacement over time, for better visualization.

# 1. Introduction

In addition to identifying the visual patterns observed in a scene, such as in object detection (Ren et al., 2015; Redmon et al., 2016) and segmentation (Long et al., 2015; He et al., 2017), increasing attention has recently been drawn to learning the underlying mechanisms of scene generation, such as learning the laws of physics (Battaglia et al., 2016; Chang et al., 2017; Yi* et al., 2020; Ding et al., 2021). An intuitive instance of learning physics is dynamics prediction (Ye et al., 2019; Qi et al., 2021), in which the future states of discrete objects are predicted from observations of reference states. A number of approaches have been proposed to solve the dynamics prediction problem. Object-centric models (Chang et al., 2017; Janner et al., 2019) along with interaction networks (Battaglia et al., 2016; Watters et al., 2017) focus on extracting representations and modeling the dynamics of each object.

One stream of approaches models the dynamics by incorporating human-defined physics models (Todorov et al.,
In an alternative stream of approaches, several works (Wu et al., 2015; Fragkiadaki et al., 2016; Battaglia et al., 2016; Chang et al., 2017; Wu et al., 2017; Ye et al., 2019) rely on Deep Neural Network (DNN) for learning physics, where the underline mechanism can be approximated in a data-driven way. Instead of only focusing on the abstract object state, Qi et al. (2021) propose Region Proposal Convolutional Interaction Network (RPCIN), a vision-based dynamics prediction model that takes raw image and descriptions of objects, such as bounding box and segmentation of objects which can be directly extracted from image, as input, so that it is more applicable for real-world scenario without instrumented setting (Qi et al., 2021). By adopting Region of Interest (RoI) Pooling (Girshick, 2015) and convolutional neural network (CNN), RPCIN incorporates visual information of both objects and environment, where the latter one tends to be ignored in previous works, and achieves state-of-the-art (SOTA) results, especially for long-term prediction. However, in spite of its generality to input requirements, as typical with DNNs, vision-based dynamics prediction models, like RPCIN, can be vulnerable to environment misalignment between the training and testing. In this paper, by using RPCIN as probe, we explore two types of environment misalignment challenges: Cross-Context and Cross-Domain, which can significantly compromise the model capability. + +Cross-Domain environment misalignment, where the visual domain is different between training and testing, challenges the model performance, where possessing such transferability can be critical. In the real-world, although it might be easy to gather static visual information of common objects, it can be difficult to collect dynamic visual information that capture the physics properties. For example, car images are easily acquirable whereas videos of car collisions are rare. 
While it is possible to synthesize such scenes, transferring a model trained on synthetic data to the real world leads to performance degradation. Chang et al. (2017) postulated that the visual appearance and the dynamic properties of an object are disentangled and should be modeled separately. This postulate, therefore, narrows the discussion to the state space of the objects, where the inputs are semantic properties of objects rather than images, while the visual characteristics of the environment are ignored (Qi et al., 2021). Thus, the actual real-world challenge of shifting the visual domain of the entire environment still remains under-explored.

Cross-Context focuses on another aspect of the environment misalignment challenge, where, even if the visual domain stays the same, the environment context is altered. For instance, a self-driving vehicle trained under normal traffic may suffer from irregular road conditions, such as a road closure with a hazard sign. Most discussions of model generality in previous works either ignore changes to the environment context (Fragkiadaki et al., 2016; Ye et al., 2019; Qi et al., 2021) or represent the environment by a composition of designed objects (Battaglia et al., 2016; Chang et al., 2017; Bakhtin et al., 2019), where the relationships between objects, along with the different physics properties of an object, such as position, mass, and velocity, may appear during training. Furthermore, efficiently and accurately decomposing an arbitrary environment using pre-defined objects is also challenging. Thus, we seek to directly introduce alterations to the environment context, distinct from varying the object composition, between the training and testing stages, so that the model has to correctly derive the characteristics of the environment context by leveraging only the raw visual information.

To the best of our knowledge, no dataset benchmarks exist that can be used to characterize the performance of vision-based long-term dynamics prediction models in the presence of environment misalignment challenges. To investigate the Cross-Domain and Cross-Context challenges, a set of matching datasets is necessary, in which, while the visual domain or environment context differs, the underlying dynamics stay the same. Thus, we propose four datasets: SimB-Border, SimB-Split, BlenB-Border, and BlenB-Split, which cover two visual domains and two environment contexts.

We show that the performance of the SOTA dynamics prediction method, i.e., RPCIN, significantly degrades under either the Cross-Domain or the Cross-Context challenge. Further, to address the Cross-Domain challenge, following Chang et al. (2017), we extend their postulate and argue that we should seek a common intermediate representation space for both the objects and the environment. Inspired by Bakhtin et al. (2019), where the data is simply provided as semantic masks, we argue that the semantic segmentation space can serve as this intermediate space: a visual observation model (Long et al., 2015; He et al., 2017) first maps raw images to semantic segmentation masks, and then the masks, along with the static information of the objects, are used for predicting the dynamics. Our experiments show that, on the proposed datasets with the Cross-Domain setup, the performance of RPCIN with ground-truth masks as input can significantly exceed its performance with raw images as input. Even when the ground-truth masks are absent, sub-optimal masks, which can be obtained via self-supervised learning, can also dramatically mitigate the Cross-Domain challenge.

![](images/6dd1fbc261db56f7ea51923c12e56dad37bec29377d369e7f809f2a26871af59.jpg)
(a) Cross-Domain on Short-Term Prediction

![](images/971aa1a8f9a22a748c94d1ec7be1548236886e726bbbb47e222ab70231804bbe.jpg)
(b) Cross-Domain on Long-Term Prediction
Figure 2. Performance of RPCIN with different normalization layers on the Cross-Domain challenge. Numerical results are in Tables 2 and 3.

Our contributions can be summarized as follows:

- We identify two environment misalignment challenges, Cross-Domain and Cross-Context, for vision-based long-term dynamics prediction by using RPCIN as a probe.
- We propose four datasets, SimB-Border, SimB-Split, BlenB-Border, and BlenB-Split, that cover two visual domains and two environment contexts for assessing model performance on these challenges.
- We discuss a promising direction for mitigating the Cross-Domain challenge and provide an intuitive instance as a concretization.

# 2. Related Works

Dynamics Prediction is proposed as an instance of learning physics in which a model is trained to derive the future states of objects or the scene given reference states as input (Battaglia et al., 2016; Chang et al., 2017; Ye et al., 2019; Janner et al., 2019; Sanchez-Gonzalez et al., 2020; Qi et al., 2021). The learned dynamics model can be further applied to various tasks, such as dynamics planning (Bakhtin et al., 2019; Qi et al., 2021; Li et al., 2022) and dynamics visual reasoning (Yi* et al., 2020; Ding et al., 2021; Chen et al., 2021). Compared to considering the entire input scene as a whole, object-centric approaches (Fragkiadaki et al., 2016; Chang et al., 2017; Ye et al., 2019) focus on each object of interest individually, where the relations between objects can be derived using interaction networks (Battaglia et al., 2016; Qi et al., 2021). Instead of relying on manually defined physics models or requiring sophisticated object physics properties as input to infer dynamics (Todorov et al., 2012; Ullman et al., 2014; Chang et al., 2017; Wu et al., 2017; de Avila Belbute-Peres et al., 2018; Ding et al., 2021), Qi et al. (2021) propose RPCIN for long-term dynamics prediction, which consumes only raw images and simple object descriptions, such as bounding boxes and segmentation masks, as input, which is more feasible for real-world scenarios, and achieves SOTA performance.

Domain Adaptation has been studied for various vision tasks (Chen et al., 2018; Peng et al., 2019; Vu et al., 2019) to adapt a trained model from a source visual domain to a target visual domain. A body of literature addresses learning to adapt with multiple supervision signal strengths (Ganin & Lempitsky, 2015; Saito et al., 2019; Shu et al., 2019), between visual domains that share various degrees of category overlap (Zhang et al., 2018; You et al., 2019), and in the source-free setting, where source domain data is unavailable during adaptation (Liang et al., 2020; Liu et al., 2021). Similar to the source-free setting, to better match present-day real-world scenarios where the source data can be inaccessible (Liang et al., 2020), we also preclude the trained model from seeing the source data during adaptation. In the conventional domain adaptation setting, although the vision tasks and data availability vary, the visual data itself in the target domain is sufficient for task-specific learning. Contrarily, in our proposed Cross-Domain challenge, the model cannot access the information in which the physics mechanism is inherent, such as the temporal sequence. Thus, directly applying conventional domain adaptation methods to the identified challenge may be impractical.

![](images/3312fc632f786adc97bb490c26cd6581095a1cb1f0a05408daa836570aab6c9a.jpg)
(a) Cross-Context on Short-Term Prediction

![](images/18ca572102fe69e73a02d3273cf4e972de6485150c4afdf450e2ce1db6ff1e53.jpg)

![](images/bcc6617d6d444a9c57a118d00a6994a3589f65995ae7c349438a4fa025f4f0fc.jpg)

![](images/4c3adfc42e600e70fc8340677ef507bd3c586444cc8d3c14393552a1454d7efc.jpg)

![](images/60acae803b1737b76c6665762bbfe164f0f5a49b9edaa44a3951580c39d69d37.jpg)
(b) Cross-Context on Long-Term Prediction

![](images/5c060ee9b2d8bf47cdf0178e4b5e9a99738c9d758d4a2e9ca6b1fc8566da7cc6.jpg)

![](images/bda5b208104df6ab0e5021471c0c61a3b076c5e03f40a98a83ad51e471a2bdf5.jpg)

![](images/cd5af0282d5f4874c1d38183c117a8b4713e30424e0b459e65de1cf15d633482.jpg)

Figure 3. Performance of RPCIN with different normalization layers on the Cross-Context challenge. Numerical results are in the Appendix.

# 3. Method

In this section, we first introduce the problem setup of dynamics prediction and background on RPCIN, which achieves SOTA performance. Then, we propose four datasets that cover two visual domains and two environment contexts, followed by an investigation of the Cross-Domain and Cross-Context challenges individually, using RPCIN as a case study. Finally, we discuss an encouraging direction for mitigating the Cross-Domain challenge by mapping raw images to a common intermediate space prior to learning dynamics prediction, and we discuss an intuitive instance of this scheme.

# 3.1. Task Formulation and Preliminary

In this paper, we focus on dynamics prediction in a billiard game scenario, where evaluations are conducted on both short-term and long-term prediction (Qi et al., 2021).
During training, the model takes $T_{ref}$ consecutive image frames $\{X_{1-T_{ref}}\ldots X_0\}$ and reference states $\{S_{1-T_{ref}}\ldots S_0\}$, which describe the ball states in each frame, as input, and predicts the ball states of the next $T_{pred}$ frames $\{S_1^{\prime}\ldots S_{T_{pred}}^{\prime}\}$, where the ground-truth states $\{S_1\ldots S_{T_{pred}}\}$ are provided as the supervision signal. During inference, the model also consumes $\{X_{1-T_{ref}}\ldots X_0\}$ and $\{S_{1-T_{ref}}\ldots S_0\}$ to predict $\{S_1^{\prime}\ldots S_{T_{pred}}^{\prime}\}$ and $\{S_{T_{pred}+1}^{\prime}\ldots S_{2T_{pred}}^{\prime}\}$, which are evaluated as the short-term and long-term predictions, respectively.

RPCIN (Qi et al., 2021) proposes to solve this problem end-to-end by extending the interaction network (Battaglia et al., 2016), requiring only the bounding box of each ball to represent the frame state $S$. By utilizing RoI Pooling (Girshick, 2015), for each reference frame, each ball state feature $b_{i}$ can be directly extracted from the visual feature, which is encoded from the raw image through a DNN. For completeness, we briefly summarize RPCIN here and refer the readers to Qi et al. (2021) for details. For inferring the dynamics of each ball, borrowing the notation from Qi et al. (2021), RPCIN defines five CNNs. $f_{O}$ takes the state feature $b_{i}^{t}$ of the $i$-th ball at the $t$-th time step as input to infer its self-dynamics features. $f_{R}$ pair-wisely takes $b_{i}^{t}$ and each of the state features $b_{j}^{t}$ of the other balls at the $t$-th time step as input to infer the pair-wise relative dynamics, which are summed to obtain the total relative dynamics. $f_{A}$ takes the sum of the self- and relative-dynamics features to systematically infer the overall-dynamics features, and the result, along with the state feature $b_{i}^{t}$, is consumed by $f_{Z}$ to produce a static-dynamics mixture feature.
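The per-time-step wiring just described can be sketched at the shape level as follows. The toy linear maps with random weights are our own stand-ins for the actual CNNs $f_O$, $f_R$, $f_A$, and $f_Z$; only the dataflow of Equation (1) is illustrated, not the learned model.

```python
import numpy as np

# Shape-level sketch of one interaction step: b_i^t -> e_i^t -> z_i^t.
# Real f_O, f_R, f_A, f_Z are CNNs in RPCIN; here they are toy linear maps.
rng = np.random.default_rng(0)
N, D = 3, 8                                   # number of balls, feature dim

W_O, W_R, W_A = (rng.normal(size=(D, D)) for _ in range(3))
W_Z = rng.normal(size=(2 * D, D))             # f_Z consumes (b, e) concatenated

def interaction_step(b):
    """b: (N, D) state features b_i^t -> (N, D) mixture features z_i^t."""
    e = np.empty_like(b)
    for i in range(N):
        self_dyn = b[i] @ W_O                                        # f_O(b_i)
        # toy pair-wise f_R: combine b_i with each other ball b_j, then sum
        rel_dyn = sum((b[i] + b[j]) @ W_R for j in range(N) if j != i)
        e[i] = (self_dyn + rel_dyn) @ W_A                            # f_A(...)
    return np.concatenate([b, e], axis=-1) @ W_Z                     # f_Z(b, e)

z = interaction_step(rng.normal(size=(N, D)))
print(z.shape)  # one mixture feature per ball: (3, 8)
```

The mixture features from $T_{ref}$ such steps are what the fifth network consumes, as described next.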
Finally, $f_{P}$ takes the mixture features of the previous $T_{ref}$ time steps to predict the state feature of the $i$-th ball at the next time step. The entire process is shown in Equation (1) (Qi et al., 2021):

$$
e_{i}^{t} = f_{A}\Big(f_{O}(b_{i}^{t}) + \sum_{j \neq i} f_{R}(b_{i}^{t}, b_{j}^{t})\Big),
$$

$$
z_{i}^{t} = f_{Z}(b_{i}^{t}, e_{i}^{t}), \tag{1}
$$

$$
b_{i}^{t+1} = f_{P}(z_{i}^{t}, z_{i}^{t-1}, \dots, z_{i}^{t-T_{ref}+1})
$$

# 3.2. Datasets for Environment Misalignments

The SimB dataset was used in RPCIN; it simulates the movements and collisions of three balls without any information about the environment context besides the image boundaries (Qi et al., 2021). Thus, we find this dataset unsuitable for our purposes due to its over-simplicity. To this end, we propose SimB-Border as an extension of SimB, created by increasing the image size from $64 \times 64$ to $192 \times 96$, which leaves more space for adding various environment contexts, and by introducing borders at the image boundaries. The border widths for the four boundaries are individually and randomly selected as integers in the range [0, 15], where zero stands for no border, and are kept consistent across all frames of a single video. Introducing borders is important since it challenges the model to correctly extract the border properties from the raw image in order to precisely predict the locations where a ball bounces back. Additionally, to further increase the prediction difficulty and penalize the model for failing to understand the visual context, we extend SimB-Border to SimB-Split by adding a five-pixel-wide vertical bar to the scene. For each video, the horizontal center of the bar is randomly selected as an integer in the range [64, 128] and kept consistent over the video frames.

![](images/dee94db46802b2d2319270b6667d30a75dfa3eb40f4ab3d94c46da23aa78b12e.jpg)
(a) Cross-Context Challenge Visualization

![](images/7174570dfd84a48e796e7bffa56165e93917c35deed09b5fa88725c46132fb0f.jpg)
(b) Environment Context Mask Visualization
Figure 4. Visualizations of (a) the Cross-Context challenge, where the trained model fails on the target context, and (b) various environment context masks. We adjusted the image resolutions for better visualization.

Table 1. Details of the four proposed datasets and the original SimB dataset. Sim stands for the visual domain of simple simulation and Blen stands for the visual domain rendered by Blender.

| Dataset | Resolution | Domain | Context | Border Width | Split Center |
| --- | --- | --- | --- | --- | --- |
| SimB (Qi et al., 2021) | 64 × 64 | Sim | N/A | N/A | N/A |
| SimB-Border | 192 × 96 | Sim | Border | [0, 15] | N/A |
| SimB-Split | 192 × 96 | Sim | Split | [0, 15] | [64, 128] |
| BlenB-Border | 192 × 96 | Blen | Border | [0, 15] | N/A |
| BlenB-Split | 192 × 96 | Blen | Split | [0, 15] | [64, 128] |

Aside from the Sim domain, for investigating the performance of the model under dissimilar visual domains, it is essential to create matching datasets in a different domain with closely comparable underlying dynamics. Empowered by access to the full ground-truth environment contexts and object information of the datasets in the Sim domain, we
More details about creating dataset in Blen domain are included in the Appendix. Visualizations of the four proposed datasets are shown in Figure 1 and details are include in Table 1. + +# 3.3. Cross-Domain Misalignment + +For the Cross-Domain challenge, we extend the problem setup described in Section 3.1 with two visual domains: Source Domain and Target Domain. During training, data on the source domain contains all information including image frames, static information of balls, and temporal sequence that connect individual frames for learning dynamics. On the target domain, the model still has access to individual frames and the static balls information, but the temporal sequence, where the physical dynamics are evident, is absent so that learning correct dynamics prediction is impractical. Furthermore, to increase the model's generality in the real-world, where the source domain data may be unavailable after domain shifts (Liang et al., 2020), we also limit the trained model from accessing the source domain data. The setup of testing data on both domains is the same as described in Section 3.1. Since the only substantial difference + +Table 2. Performance of Sim domain trained RPCIN model on Cross-Domain challenge with various types of input. BN is used for all segmentation mask input training. In addition to the BN baseline, which takes RGB image as input, we also list other normalization method results as baselines for a comprehensive comparison and further demonstrating the advantage of unifying the visual domains with segmentation masks. Aligned and Cross results are the same for GT-Mask trained model because they share exactly the same data. Bold highlights the best results and underline highlights the second best results. + +
Source DatasetSimB-BorderSimB-Split
Target DatasetSimB-Border (Aligned)BlenB-Border (Cross)SimB-Split (Aligned)BlenB-Split (Cross)
Eval PeriodP1P2P1P2P1P2P1P2
Raw RGB Image Input
RGB-BN1.131 ± 0.0119.568 ± 0.1216.185 ± 2.20623.564 ± 4.3070.913 ± 0.0197.732 ± 0.2088.622 ± 2.31327.511 ± 4.322
RGB-IN1.102 ± 0.0459.426 ± 0.4462.754 ± 0.18815.863 ± 1.0730.945 ± 0.0827.641 ± 0.5492.127 ± 0.38111.281 ± 1.358
RGB-GN1.117 ± 0.0589.323 ± 0.3461.637 ± 0.10611.507 ± 0.4250.899 ± 0.0427.632 ± 0.3862.315 ± 0.44712.335 ± 0.792
RGB-LN1.085 ± 0.0339.165 ± 0.1453.114 ± 1.08616.074 ± 3.7160.922 ± 0.0427.433 ± 0.2373.648 ± 1.19915.662 ± 3.483
Segmentation Mask Input
GT-Mask1.091 ± 0.0449.358 ± 0.4651.091 ± 0.0449.358 ± 0.4650.916 ± 0.0057.431 ± 0.5110.916 ± 0.0057.431 ± 0.511
Sup-Mask1.093 ± 0.0219.396 ± 0.2851.093 ± 0.0219.397 ± 0.2860.971 ± 0.0117.372 ± 0.0890.981 ± 0.0127.422 ± 0.095
Self-Mask1.119 ± 0.0379.604 ± 0.3001.132 ± 0.0359.614 ± 0.2910.911 ± 0.0257.837 ± 1.3340.959 ± 0.0208.017 ± 1.309
+ +between datasets in Sim and Blen domains is the visual appearance, given an optimal appearance-agnostic dynamics prediction model, the performance should be comparable across visual domains. Due to the lack of sufficient information for fine tuning RPCIN model on the target domain, we directly evaluate the source domain trained model on the target domain. Furthermore, considering prior works in the domain adaptation field (Li et al., 2017; Chang et al., 2019) showing that various normalization methods in DNN may significantly affect the performance of a model when visual domain shifts occur, we conduct experiments with several normalization methods to comprehensively validate the performance of RPCIN under environment misalignment challenges. In detail, in addition to the widely used Batch Normalization (BN) (Ioffe & Szegedy, 2015), we also explore Instance Normalization (IN) (Ulyanov et al., 2016), Layer Normalization (LN) (Ba et al., 2016), and Group Normalization (GN) (Wu & He, 2018). + +As shown in Figure 2, the model performance significantly decreases when the domain changes, where the impacts are more severe for applying the models trained on Blen domain to Sim domain. This might be due to the fact that, compared to extracting information from over simplified visual appearance of Sim domain, models are more likely to overfit to Blen domain so that the generality is further compromised. Furthermore, it is also noticeable that the model with BN, which are arguably the most commonly used architecture, is most vulnerable. This is due to the heavy dependence of its normalization statistics on the domain specific knowledge (Li et al., 2017). 
Unlike the conventional domain adaptation problem, where the visual data that contains sufficient task-specific information is available, such information specific for dynamics prediction is unavailable for the target domain in the proposed Cross-Domain + +challenge because the correct temporal sequence, where the underlying mechanism is inherent in, is missing. For instance, correctly inferring the dynamics of a ball is infeasible with merely one image of the ball that only contains the static properties of the ball rather than a sequence of images that can describe the movement of the ball. + +# 3.4. Cross-Context Misalignment + +Similar to the setup of Cross-Domain challenge, as described in Section 3.3, we also consider two environment contexts for Cross-Context challenge: Source Context and Target Context, where, during training, in source context, the model has access to all information, as described in Section 3.1. After shifting to the target context, correct temporal sequence and source context data become unavailable. The setup of testing data on both context is the same as described in Section 3.1. Unlike the model generality discussions in previous works, our proposed Cross-Context challenge focuses on altering the environment context, which is not represented by any object whose static information is explicitly provided for dynamics prediction. As shown in Figure 3, even when the visual domains are exactly the same, the performance of the model dramatically decreases by simply placing or removing a split bar in the middle of the environment context. As visualizations shown in Figure 4(a), for the models trained in the Split context, they tend to hallucinate a split bar in the middle of the image, whereas for the model trained in the Border context, they tend to ignore the bar. 
We hypothesize that such behavior arises because, in addition to predicting the state feature that only encodes static object information in the future, the model also tends to fuse auxiliary knowledge about the particular environment context as a shortcut to minimize the empirical loss. For instance, in the Split context, besides the visual

Table 3. Performance of the Blen-domain-trained RPCIN model on the Cross-Domain challenge with various types of input. BN is used for all segmentation mask input training. In addition to the BN baseline, which takes the RGB image as input, we also list results for the other normalization methods as baselines for a comprehensive comparison and to demonstrate the advantage of unifying visual domains with segmentation masks. Aligned and Cross results are the same as in Table 2 for the GT-Mask-trained model because they share exactly the same data. Bold highlights the best results and underline highlights the second-best results.
| Input | BlenB-Border (Aligned) P1 | BlenB-Border (Aligned) P2 | SimB-Border (Cross) P1 | SimB-Border (Cross) P2 | BlenB-Split (Aligned) P1 | BlenB-Split (Aligned) P2 | SimB-Split (Cross) P1 | SimB-Split (Cross) P2 |
|---|---|---|---|---|---|---|---|---|
| **Raw RGB Image Input** | | | | | | | | |
| RGB-BN | 1.084 ± 0.023 | 9.713 ± 0.554 | 261.768 ± 75.655 | 233.977 ± 34.460 | 0.918 ± 0.037 | 7.478 ± 0.079 | 171.750 ± 84.274 | 187.635 ± 53.940 |
| RGB-IN | 1.103 ± 0.041 | 9.471 ± 0.443 | 25.600 ± 15.781 | 58.599 ± 21.827 | 0.906 ± 0.062 | 7.368 ± 0.168 | 24.460 ± 17.239 | 42.613 ± 18.159 |
| RGB-GN | 1.075 ± 0.045 | 9.560 ± 0.145 | 3.899 ± 0.526 | 20.425 ± 1.913 | 0.931 ± 0.039 | 7.641 ± 0.165 | 5.033 ± 0.575 | 18.970 ± 1.323 |
| RGB-LN | 1.064 ± 0.020 | 9.345 ± 0.350 | 12.969 ± 1.508 | 46.113 ± 1.157 | 0.892 ± 0.027 | 7.687 ± 0.321 | 9.518 ± 2.024 | 31.364 ± 3.740 |
| **Segmentation Mask Input** | | | | | | | | |
| GT-Mask | 1.091 ± 0.044 | 9.358 ± 0.465 | 1.091 ± 0.044 | 9.358 ± 0.465 | 0.916 ± 0.005 | 7.431 ± 0.511 | 0.916 ± 0.005 | 7.431 ± 0.511 |
| Sup-Mask | 1.122 ± 0.036 | 9.353 ± 0.268 | 1.121 ± 0.036 | 9.353 ± 0.268 | 0.891 ± 0.027 | 7.317 ± 0.273 | 0.889 ± 0.027 | 7.313 ± 0.272 |
| Self-Mask | 1.136 ± 0.024 | 9.945 ± 0.563 | 1.136 ± 0.024 | 9.943 ± 0.560 | 0.914 ± 0.022 | 7.539 ± 0.227 | 0.944 ± 0.023 | 7.650 ± 0.224 |
information of the split bar, the model may also count the prediction steps to determine where the ball bounces back. Thus, in the Border context, such overfitting can mislead the model into wrong predictions.

# 3.5. Unifying Visual Domains with Segmentation Masks

One of the core obstacles that limits model adaptation in the Cross-Domain challenge for vision-based dynamics prediction is the absence of object dynamics information in the target domain, such as the correct temporal sequence described in Section 3.3. However, the data for extracting static information is available in both the source and target domains. Therefore, correct dynamics prediction on the target domain can be expected by feeding the aligned static information into the pretrained dynamics model, along with the proper reference temporal sequence once it becomes available at test time. However, since a vision-based dynamics prediction model may entangle visual and dynamics information, such static information alignment is difficult to achieve without concurrently accessing data from both domains, which is prohibited in our setup. Thus, we extend the belief held by Chang et al. (2017): the entire visual scene, including both objects and environment context, should be mapped to a common intermediate abstraction space before being fed to the dynamics prediction model. In the scope of the billiard datasets discussed in this paper, since all balls are assumed to share the same physical properties, such as mass, radius, and shape, the bounding box of each ball is a sufficient object abstraction for dynamics prediction.

To seek an instance of environment context abstraction, inspired by PHYRE (Bakhtin et al., 2019), where the data is directly provided as semantic masks, we argue that the semantic segmentation map can serve as an adequate abstraction.
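Under the stated assumption that all balls share the same mass, radius, and shape, the object abstraction reduces each ball to its bounding box; a minimal sketch (hypothetical helper, not from the RPCIN code base):

```python
import numpy as np

def mask_to_bbox(ball_mask):
    """Reduce a per-ball binary mask of shape (H, W) to a bounding box
    (x_min, y_min, x_max, y_max). Since all balls are assumed to share
    the same physical properties, this box is taken as a sufficient
    object abstraction for dynamics prediction."""
    ys, xs = np.nonzero(ball_mask)
    return float(xs.min()), float(ys.min()), float(xs.max()), float(ys.max())
```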
Given the success of segmentation methods on visual observation tasks, acquiring segmentation maps, even in the conventional domain adaptation (Vu et al., 2019) or unsupervised setting (Ji et al., 2019), is feasible. To investigate the feasibility of resolving the Cross-Domain challenge by replacing the RGB image with a semantic segmentation mask as input to the dynamics prediction model, we consider masks obtained from three sources: the ground-truth mask (GT-Mask), a supervised-trained mask (Sup-Mask), and a self-supervised-trained mask (Self-Mask). For the environment context in the proposed datasets, there are two semantic classes: the table, where the balls traverse, and the border, including the split bar, which bounds the movement of the balls. Thus, the GT-Mask is a two-channel mask, and it is used as the supervision signal to train the semantic segmentation model from which the Sup-Mask is extracted. To relax the requirement of having access to ground-truth masks, which might be hard to gather in the real world, we also train the segmentation model with self-supervised learning. Inspired by the method proposed by Caron et al. (2018), we use k-means to extract masks for the training images, which serve as the supervision signal for training the segmentation model. Visualizations of the various masks are shown in Figure 4(b) and the training details are included in the Appendix. Experiments show that, on the proposed datasets, compared with the baseline results of the various normalization methods, using the GT-Mask as a replacement can principally resolve the Cross-Domain challenge, and even using the Self-Mask achieves comparable results. Detailed discussions are included in Section 4.2.

# 4. Experiments

In this section, we first briefly introduce the experimental details, and then analyze the results of the Cross-Domain and Cross-Context challenges. Finally, we discuss the limitations and open issues.
![](images/cc93feb783c444c2f7eedce9607fd6fecb8fc0eb9ee36ff7650405931c9a621b.jpg)
(a) Varying Alignment Loss Weight

![](images/b87f3b863b5488b309b96cae8383b3cb208ec1555a1796412cbfcb8703a2cbd4.jpg)
(b) Varying RoI Pooling Output Size

Figure 5. Ablation studies with GT-Masks as input on (a) varying alignment loss weight and (b) varying RoI pooling output size.

# 4.1. Training and Evaluation Details

We conduct experiments on the proposed SimB-Border, SimB-Split, BlenB-Border, and BlenB-Split datasets, using RPCIN (Qi et al., 2021) as the baseline model and its public implementation as our code base. Following RPCIN, we use Hourglass (Newell et al., 2016) as the visual backbone, randomly flip the image horizontally and vertically as augmentation, and use the discounted loss (Watters et al., 2017) to stabilize training. To comprehensively evaluate model performance, as described in Section 3.3, we replace the BN layers with GN, IN, or LN layers, where the group number in GN is set to 32. Since BN is arguably the most commonly used normalization layer, we use BN for all training with masks as input. The RoI pooling output size is set to three. More details can be found in the Appendix.

For the experimental setup, we set the length of reference frames $T_{ref}$ to four and the training prediction period $T_{pred}$ to 20. Evaluations are conducted separately on short-term predictions $\{1,\ldots,T_{pred}\}$ (P1) and long-term predictions $\{T_{pred}+1,\ldots,2\times T_{pred}\}$ (P2), where the squared $l_{2}$ distance between the ground-truth and predicted object bounding-box centers is used as the evaluation metric. The distance is further scaled by 1000 for presentation. All these settings are aligned with RPCIN (Qi et al., 2021).

# 4.2. Discussions on Cross-Domain

Segmentation mask as a common intermediate abstract space: In Section 3.5, we argued for mitigating the Cross-Domain challenge by mapping the raw image to a common intermediate abstract space prior to dynamics prediction. We also argued that the segmentation mask can serve as a promising abstract space. We conduct experiments by replacing the raw images with masks; the results are shown in Tables 2 and 3. The masks used for training and testing on the source and target domains are of the same kind. It is noticeable

Table 4. Study of generality between various kinds of mask. Masks for training and testing on the source and target domains are obtained via different supervision strengths. For the same environment context, GT-Masks are the same across visual domains.
| Masks (train → test) | P1 (Border) | P2 (Border) | P1 (Split) | P2 (Split) |
|---|---|---|---|---|
| **Sim Domain → Blen Domain** (SimB-Border → BlenB-Border / SimB-Split → BlenB-Split) | | | | |
| GT-Mask → GT-Mask | 1.091 ± 0.044 | 9.358 ± 0.465 | 0.916 ± 0.005 | 7.431 ± 0.511 |
| GT-Mask → Sup-Mask | 1.091 ± 0.044 | 9.360 ± 0.463 | 0.926 ± 0.008 | 7.479 ± 0.494 |
| GT-Mask → Self-Mask | 1.280 ± 0.065 | 10.105 ± 0.381 | 1.236 ± 0.034 | 8.576 ± 0.399 |
| Sup-Mask → Self-Mask | 1.220 ± 0.035 | 9.776 ± 0.323 | 1.277 ± 0.070 | 8.446 ± 0.299 |
| **Blen Domain → Sim Domain** (BlenB-Border → SimB-Border / BlenB-Split → SimB-Split) | | | | |
| GT-Mask → Sup-Mask | 1.091 ± 0.044 | 9.358 ± 0.465 | 0.916 ± 0.005 | 7.433 ± 0.511 |
| GT-Mask → Self-Mask | 1.123 ± 0.048 | 9.506 ± 0.447 | 0.962 ± 0.017 | 7.594 ± 0.442 |
| Sup-Mask → Self-Mask | 1.144 ± 0.038 | 9.468 ± 0.286 | 0.906 ± 0.026 | 7.365 ± 0.285 |
that all three kinds of masks, including the Self-Mask, dramatically mitigate the Cross-Domain challenge and achieve performance competitive with the best domain-aligned results using raw images as input. Such performance empirically demonstrates the feasibility and strength of using the segmentation mask as the common intermediate abstract space.

Generality between various kinds of mask: As shown in Figure 4(b), the Sup-Mask and Self-Mask of both the Sim and Blen domains are comparable with the GT-Mask. Thus we further study a generality problem in which the masks used for training and testing on the source and target domains are obtained via different supervision strengths, which further mimics the real-world scenario where the source domain usually contains richer information than the target domain. As shown in Table 4, even though a mask obtained with weaker supervision may be sub-optimal, the dynamics prediction model trained on the superior mask still preserves its robustness to the Cross-Domain challenge.

# 4.3. Discussion on Cross-Context

Alignment loss for the predicted state feature: As described in Section 3.1, RPCIN extracts the reference ball state features $b_{i}^{t}$ from the reference images and recursively uses them to predict the future ball state features $b_{i}^{t+1}$. The latter are further decoded into semantic outputs, such as bounding boxes. By replacing the raw image input with masks, future ball state features can also be extracted by utilizing the future ground-truth bounding boxes and the reference mask; we denote these features as $\hat{b}_{i}^{t+1}$. An intuitive way to constrain the dynamics model from encoding static-irrelevant information into $b_{i}^{t+1}$ is to introduce an alignment loss that reduces the difference between $b_{i}^{t+1}$ and $\hat{b}_{i}^{t+1}$. We use the mean-squared loss as regularization with different loss weights and conduct experiments on the GT-Mask.
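The alignment regularizer amounts to a mean-squared penalty added to the task loss; a minimal sketch (feature shapes hypothetical):

```python
import numpy as np

def alignment_loss(b_pred, b_ref, weight=1.0):
    """Mean-squared alignment regularizer between the recursively
    predicted state features b_i^{t+1} (b_pred) and the features
    hat{b}_i^{t+1} extracted from the ground-truth boxes and the
    reference mask (b_ref). It would be added to the task loss as
    total = task_loss + weight * alignment_loss."""
    return weight * float(np.mean((b_pred - b_ref) ** 2))
```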
As the results in Figure 5(a) show, although increasing the alignment loss weight may improve Cross-Context endurance from SimB-Split to SimB-Border, it does not fundamentally address the challenge, and the performance gaps remain large.

Varying RoI Pooling Output Size: The RoI pooling operation in RPCIN enables the model to consider the environment for dynamics prediction (Qi et al., 2021). By increasing the RoI pooling output size, richer information about the environment may be encoded into the ball state features, which may help endure the Cross-Context challenge. However, as our results on the GT-Mask in Figure 5(b) show, varying the RoI pooling output size does not solve the Cross-Context challenge.

# 4.4. Limitations and Open Issues

Despite the success of utilizing segmentation masks to mitigate the Cross-Domain challenge, the Cross-Context challenge remains outstanding. Even though we specifically disentangle the visual information from the dynamics information, the learned dynamics may still be entangled with other confounders, for example by encoding context-specific knowledge as a shortcut to minimize the empirical loss. We hypothesize that further disentanglement between the underlying dynamics and latent confounders is needed to address the Cross-Context challenge. Further, our experiments assume all objects share the same physical properties and focus on environment differences. However, combining alterations to both environment and object properties could further push the boundary of model generality. Finally, our experiments are conducted on synthetic data in order to obtain consistent underlying mechanisms, which already challenges the vision-based dynamics prediction model. Creating a dataset of matching synthetic and real domains is still an open issue. Although we are aware of the mechanism differences, we include experimental results with RealB (Qi et al., 2021) in the Appendix for completeness.

# 5. Conclusion

In this paper, using RPCIN as a probe, we investigated two environment misalignment challenges: Cross-Domain and Cross-Context. We proposed four datasets, SimB-Border, SimB-Split, BlenB-Border, and BlenB-Split, covering two domains and two contexts. Experimental results on combinations of the proposed datasets reveal potential weaknesses of vision-based long-term dynamics prediction models. Furthermore, to mitigate the Cross-Domain challenge, we studied a promising direction and provided an intuitive instantiation, whose effectiveness is demonstrated by empirical results. Lastly, we discussed the limitations and open issues.

# 6. Acknowledgements

This material is based on research sponsored by the Air Force Research Laboratory (AFRL) under agreement number FA8750-19-1-1000. The U.S. Government is authorized to reproduce and distribute reprints for Government purposes notwithstanding any copyright notation therein. The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies or endorsements, either expressed or implied, of the Air Force Research Laboratory, DARPA, or the U.S. Government.

# References

Ba, J. L., Kiros, J. R., and Hinton, G. E. Layer normalization, 2016.
Bakhtin, A., van der Maaten, L., Johnson, J., Gustafson, L., and Girshick, R. Phyre: A new benchmark for physical reasoning. In Wallach, H., Larochelle, H., Beygelzimer, A., d'Alché-Buc, F., Fox, E., and Garnett, R. (eds.), Advances in Neural Information Processing Systems, volume 32. Curran Associates, Inc., 2019.
Battaglia, P., Pascanu, R., Lai, M., Jimenez Rezende, D., and kavukcuoglu, k. Interaction networks for learning about objects, relations and physics. In Lee, D., Sugiyama, M., Luxburg, U., Guyon, I., and Garnett, R. (eds.), Advances in Neural Information Processing Systems, volume 29. Curran Associates, Inc., 2016.
Bradski, G. The OpenCV Library. Dr.
Dobb's Journal of Software Tools, 2000. +Caron, M., Bojanowski, P., Joulin, A., and Douze, M. Deep clustering for unsupervised learning of visual features. In European Conference on Computer Vision, 2018. +Chang, M., Ullman, T. D., Torralba, A., and Tenenbaum, J. B. A compositional object-based approach to learning + +physical dynamics. In ICLR (Poster). OpenReview.net, 2017. +Chang, W.-G., You, T., Seo, S., Kwak, S., and Han, B. Domain-specific batch normalization for unsupervised domain adaptation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), June 2019. +Chen, Y., Li, W., Sakaridis, C., Dai, D., and Van Gool, L. Domain adaptive faster r-cnn for object detection in the wild. In Computer Vision and Pattern Recognition (CVPR), 2018. +Chen, Z., Mao, J., Wu, J., Wong, K.-Y. K., Tenenbaum, J. B., and Gan, C. Grounding physical concepts of objects and events through dynamic visual reasoning. In International Conference on Learning Representations, 2021. +Community, B. O. Blender - a 3D modelling and rendering package. Blender Foundation, Stichting Blender Foundation, Amsterdam, 2018. +de Avila Belbute-Peres, F., Smith, K., Allen, K., Tenenbaum, J., and Kolter, J. Z. End-to-end differentiable physics for learning and control. In Bengio, S., Wallach, H., Larochelle, H., Grauman, K., Cesa-Bianchi, N., and Garnett, R. (eds.), Advances in Neural Information Processing Systems, volume 31. Curran Associates, Inc., 2018. +Ding, M., Chen, Z., Du, T., Luo, P., Tenenbaum, J., and Gan, C. Dynamic visual reasoning by learning differentiable physics models from video and language. In Ranzato, M., Beygelzimer, A., Dauphin, Y., Liang, P., and Vaughan, J. W. (eds.), Advances in Neural Information Processing Systems, volume 34, pp. 887-899. Curran Associates, Inc., 2021. +Fragkiadaki, K., Agrawal, P., Levine, S., and Malik, J. Learning visual predictive models of physics for playing billiards. In ICLR (Poster), 2016. +Ganin, Y. 
and Lempitsky, V. S. Unsupervised domain adaptation by backpropagation. In ICML, 2015. +Girshick, R. Fast r-cnn. In 2015 IEEE International Conference on Computer Vision (ICCV), pp. 1440-1448, 2015. doi: 10.1109/ICCV.2015.169. +He, K., Gkioxari, G., Dollar, P., and Girshick, R. Mask r-cnn. In Proceedings of the IEEE International Conference on Computer Vision (ICCV), Oct 2017. +Ioffe, S. and Szegedy, C. Batch normalization: Accelerating deep network training by reducing internal covariate shift. In Proceedings of the 32nd International Conference on International Conference on Machine Learning - Volume 37, ICML'15, pp. 448-456. JMLR.org, 2015. + +Janner, M., Levine, S., Freeman, W. T., Tenenbaum, J. B., Finn, C., and Wu, J. Reasoning about physical interactions with object-centric models. In International Conference on Learning Representations, 2019. +Ji, X., Henriques, J. F., and Vedaldi, A. Invariant information clustering for unsupervised image classification and segmentation. In Proceedings of the IEEE International Conference on Computer Vision, pp. 9865-9874, 2019. +Kingma, D. P. and Ba, J. Adam: A method for stochastic optimization. In Bengio, Y. and LeCun, Y. (eds.), 3rd International Conference on Learning Representations, ICLR 2015, San Diego, CA, USA, May 7-9, 2015, Conference Track Proceedings, 2015. +Li, S., Wu, K., Zhang, C., and Zhu, Y. On the learning mechanisms in physical reasoning. In NeurIPS, 2022. +Li, Y., Wang, N., Shi, J., Liu, J., and Hou, X. Revisiting batch normalization for practical domain adaptation. In ICLR, 2017. +Liang, J., Hu, D., and Feng, J. Do we really need to access the source data? source hypothesis transfer for unsupervised domain adaptation. In International Conference on Machine Learning (ICML), pp. 6028-6039, 2020. +Liu, Y., Zhang, W., and Wang, J. Source-free domain adaptation for semantic segmentation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 1215-1224, June 2021. 
+Long, J., Shelhamer, E., and Darrell, T. Fully convolutional networks for semantic segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 2015. +Loshchilov, I. and Hutter, F. SGDR: Stochastic gradient descent with warm restarts. In International Conference on Learning Representations, 2017. +Newell, A., Yang, K., and Deng, J. Stacked hourglass networks for human pose estimation. In Leibe, B., Matas, J., Sebe, N., and Welling, M. (eds.), Computer Vision – ECCV 2016, pp. 483–499, Cham, 2016. Springer International Publishing. +Peng, X., Bai, Q., Xia, X., Huang, Z., Saenko, K., and Wang, B. Moment matching for multi-source domain adaptation. In Proceedings of the IEEE International Conference on Computer Vision, pp. 1406-1415, 2019. +Qi, H., Wang, X., Pathak, D., Ma, Y., and Malik, J. Learning long-term visual dynamics with region proposal interaction networks. In ICLR, 2021. + +Redmon, J., Divvala, S., Girshick, R., and Farhadi, A. You only look once: Unified, real-time object detection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 2016. +Ren, S., He, K., Girshick, R., and Sun, J. Faster r-cnn: Towards real-time object detection with region proposal networks. In Cortes, C., Lawrence, N., Lee, D., Sugiyama, M., and Garnett, R. (eds.), Advances in Neural Information Processing Systems, volume 28. Curran Associates, Inc., 2015. +Saito, K., Kim, D., Sclaroff, S., Darrell, T., and Saenko, K. Semi-supervised domain adaptation via minimax entropy. ICCV, 2019. +Sanchez-Gonzalez, A., Godwin, J., Pfaff, T., Ying, R., Leskovec, J., and Battaglia, P. Learning to simulate complex physics with graph networks. In III, H. D. and Singh, A. (eds.), Proceedings of the 37th International Conference on Machine Learning, volume 119 of Proceedings of Machine Learning Research, pp. 8459-8468. PMLR, 13-18 Jul 2020. +Shu, Y., Cao, Z., Long, M., and Wang, J. 
Transferable curriculum for weakly-supervised domain adaptation. Proceedings of the AAAI Conference on Artificial Intelligence, 33(01):4951-4958, Jul. 2019. doi: 10.1609/aaai.v33i01.33014951. +Todorov, E., Erez, T., and Tassa, Y. Mujoco: A physics engine for model-based control. In 2012 IEEE/RSJ International Conference on Intelligent Robots and Systems, pp. 5026-5033, 2012. doi: 10.1109/IROS.2012.6386109. +Ullman, T., Stuhlmüller, A., Goodman, N., and Tenenbaum, J. B. Learning physics from dynamical scenes. In Proceedings of the 36th Annual Conference of the Cognitive Science society, pp. 1640-1645, 2014. +Ulyanov, D., Vedaldi, A., and Lempitsky, V. Instance normalization: The missing ingredient for fast stylization, 2016. +Vu, T.-H., Jain, H., Bucher, M., Cord, M., and Pérez, P. Advent: Adversarial entropy minimization for domain adaptation in semantic segmentation. In CVPR, 2019. +Watters, N., Zoran, D., Weber, T., Battaglia, P., Pascanu, R., and Tacchetti, A. Visual interaction networks: Learning a physics simulator from video. In Guyon, I., Luxburg, U. V., Bengio, S., Wallach, H., Fergus, R., Vishwanathan, S., and Garnett, R. (eds.), Advances in Neural Information Processing Systems, volume 30. Curran Associates, Inc., 2017. + +Wu, J., Yildirim, I., Lim, J. J., Freeman, B., and Tenenbaum, J. Galileo: Perceiving physical object properties by integrating a physics engine with deep learning. In Cortes, C., Lawrence, N., Lee, D., Sugiyama, M., and Garnett, R. (eds.), Advances in Neural Information Processing Systems, volume 28. Curran Associates, Inc., 2015. +Wu, J., Lu, E., Kohli, P., Freeman, B., and Tenenbaum, J. Learning to see physics via visual de-animation. In Guyon, I., Luxburg, U. V., Bengio, S., Wallach, H., Fergus, R., Vishwanathan, S., and Garnett, R. (eds.), Advances in Neural Information Processing Systems, volume 30. Curran Associates, Inc., 2017. +Wu, Y. and He, K. Group normalization. 
In Proceedings of the European Conference on Computer Vision (ECCV), September 2018.
Ye, Y., Singh, M., Gupta, A., and Tulsiani, S. Compositional video prediction. In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), October 2019.
Yi*, K., Gan*, C., Li, Y., Kohli, P., Wu, J., Torralba, A., and Tenenbaum, J. B. Clevrer: Collision events for video representation and reasoning. In International Conference on Learning Representations, 2020.
You, K., Long, M., Cao, Z., Wang, J., and Jordan, M. I. Universal domain adaptation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), June 2019.
Zhang, J., Ding, Z., Li, W., and Ogunbona, P. Importance weighted adversarial nets for partial domain adaptation. In 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 8156-8164, 2018. doi: 10.1109/CVPR.2018.00851.

# A. Appendix

# A.1. Dataset Details

In the paper, we propose four datasets, SimB-Border, SimB-Split, BlenB-Border, and BlenB-Split, which cover two domains and two contexts. For the datasets in the Sim domain, we modified the generation code used in Qi et al. (2021) to increase the image size from $64 \times 64$ to $192 \times 96$ and to introduce the borders and split bar described in Section 3.2. 1000 videos of 100 frames each are generated for training and testing individually. As in Qi et al. (2021), in each video only one ball has an initial velocity, whose magnitude and direction are randomly selected from $\{2,3,4,5,6\}$ and $\{\frac{i}{6}\pi \mid i \in \{0,1,2,\dots,11\}\}$, respectively. To generate the datasets in the Blen domain, using the ground-truth information of the Sim-domain datasets as guidance, we represent the borders, balls, and split bar as objects in Blender (Community, 2018) and adjust object properties, such as location, width, and length, and scene properties, such as camera height, to seek a match between the Sim- and Blen-domain datasets.
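The initial-velocity sampling described above can be sketched as follows (hypothetical helper, not the actual generation code):

```python
import math
import random

def sample_initial_velocity(rng=random):
    """Sample the single moving ball's initial velocity as described
    above: magnitude from {2, 3, 4, 5, 6} and direction from
    {i/6 * pi | i = 0, ..., 11}. Returns (vx, vy)."""
    speed = rng.choice([2, 3, 4, 5, 6])
    angle = rng.choice(range(12)) * math.pi / 6.0
    return speed * math.cos(angle), speed * math.sin(angle)
```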
We used Eevee as the rendering engine in Blender.

To find a close match between the Sim and Blen domains, we first create all objects and fix the camera position and direction, the rendering engine, and the output image resolution. Then, we iteratively adjust the scale and position of the objects in Blender and check the output image until we find the unit conversion ratio between Blender (in metric units) and the output image (in pixels). Finally, we apply the ratio to the ground truth of each Sim-domain sample to create the corresponding sample in the Blen domain. However, similar to real-world scenarios where data is synthesized to match real data, since the fundamental mechanisms of image generation differ between the Sim and Blen domains, it is arguably infeasible to find an identical match. Furthermore, the slight difference between the two domains can also be considered one aspect of the domain-specific disturbance inherent in the Cross-Domain challenge. By mapping various visual appearances to a common intermediate space, such disturbance can be filtered out, as shown in Tables 2 and 3.

# A.2. Training Details

For the visual backbone, we followed the hourglass (Newell et al., 2016) architecture used in RPCIN (Qi et al., 2021): a visual feature extractor containing a CNN layer and three residual blocks that down-sample the input image by a factor of four with 256 output channels, followed by an hourglass module. Details of the model architecture can be found in RPCIN (Qi et al., 2021). For the normalization layer, we used BN, IN, GN, and LN, where the group number of GN is set to 32 and LN is implemented as GN with the group number set to one (Wu & He, 2018). By default, we set the output size of RoI pooling to three; ablation results on varying the output size are shown in Figure 5(b). Furthermore, it appears that RPCIN implemented RoI pooling using the RoIAlign function of PyTorch, which follows He et al. (2017).
Our experiments are conducted on two NVIDIA 1080 Ti GPUs. We set the total batch size to 40 and train models for 50K iterations. We use the Adam optimizer (Kingma & Ba, 2015) with a learning rate of $2 \times 10^{-4}$ and cosine learning rate decay (Loshchilov & Hutter, 2017). Weight decay is set to $1 \times 10^{-6}$. For input image preprocessing, we only unify the input range to [0, 1] without using dataset-specific statistics, further reducing the degree of misalignment between visual domains. Images are randomly flipped horizontally and vertically as augmentation. As in RPCIN (Qi et al., 2021), each training sample contains four image frames and 24 frames of bounding boxes, where the first four frames of bounding boxes are used for reference and the rest are used as prediction ground truth. Similarly, each testing sample contains four image frames and 44 frames of bounding boxes for evaluating both short-term and long-term prediction. To make the best use of each dataset, $100 - 24 + 1 = 77$ and $100 - 44 + 1 = 57$ samples can be drawn from a single 100-frame video. Thus, for each dataset, there are 77K samples for training and 57K samples for testing.

# A.3. Extracting Segmentation Masks

To extract segmentation masks with either ground-truth labels or k-means pseudo labels, we adopt a simple encoder-decoder model. As the encoder, we use the visual feature extractor (prior to the hourglass module) that contains a CNN layer and three residual blocks, as previously described, with BN layers; it down-samples the image by a factor of four with 256 output channels. The decoder is a simple stack of five CNN layers with LeakyReLU activations that upsamples the visual features to the original size with two output channels representing the semantic class at each location. We use the cross-entropy loss to measure the difference between the prediction and the ground truth.
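The per-pixel cross-entropy objective described above can be sketched as follows (shapes hypothetical; two channels for the table and border classes):

```python
import numpy as np

def pixel_cross_entropy(logits, target):
    """Per-pixel cross-entropy between two-channel logits of shape
    (2, H, W) and integer class targets of shape (H, W), averaged
    over all pixels. A numpy sketch of the training objective."""
    z = logits - logits.max(axis=0, keepdims=True)            # numerical stability
    log_p = z - np.log(np.exp(z).sum(axis=0, keepdims=True))  # log-softmax over channels
    h, w = target.shape
    rows = np.arange(h)[:, None]
    cols = np.arange(w)[None, :]
    return float(-log_p[target, rows, cols].mean())
```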
For self-supervised learning, we use the k-means function of OpenCV (Bradski, 2000) to generate pseudo labels that provide the supervision signal. Prior to k-means segmentation, to remove the balls from the image, which might otherwise be misclassified as border, we first blur the input image and then replace the pixels in the regions containing balls with the corresponding pixels from the blurred image. The processed images are used to generate the k-means pseudo labels. To match the k-means pseudo labels with the correct semantic meaning, we assume that the number of pixels representing the border is less than the number of pixels representing the table.

# A.4. Additional Results

We provide numerical results for the model under the Cross-Context challenge in Tables 5 and 6, and under both the Cross-Context and Cross-Domain challenges in Tables 7 and 8. For completeness, we also conduct experiments on the RealB dataset (Qi et al., 2021); results are shown in Tables 9 and 10. However, since the underlying physics mechanism differs between RealB and the four proposed datasets, we find it hard to draw any structural conclusion from these experiments, and the results are provided only for reference.

Table 5. Performance of the Border-context-trained RPCIN model on the Cross-Context challenge with various types of input. Aligned and Cross results on SimB and BlenB are the same for the GT-Mask-trained model because they share exactly the same data.
| Input | SimB-Border (Aligned) P1 | SimB-Border (Aligned) P2 | SimB-Split (Cross) P1 | SimB-Split (Cross) P2 | BlenB-Border (Aligned) P1 | BlenB-Border (Aligned) P2 | BlenB-Split (Cross) P1 | BlenB-Split (Cross) P2 |
|---|---|---|---|---|---|---|---|---|
| **Raw RGB Image Input** | | | | | | | | |
| RGB-BN | 1.131 ± 0.011 | 9.568 ± 0.121 | 6.804 ± 0.159 | 39.773 ± 0.726 | 1.084 ± 0.023 | 9.713 ± 0.554 | 6.587 ± 0.085 | 40.294 ± 0.518 |
| RGB-IN | 1.102 ± 0.045 | 9.426 ± 0.446 | 7.039 ± 0.164 | 40.364 ± 0.483 | 1.103 ± 0.041 | 9.471 ± 0.443 | 6.807 ± 0.028 | 40.054 ± 0.600 |
| RGB-GN | 1.117 ± 0.058 | 9.323 ± 0.346 | 6.721 ± 0.289 | 39.356 ± 0.553 | 1.075 ± 0.045 | 9.560 ± 0.145 | 6.759 ± 0.087 | 40.223 ± 0.644 |
| RGB-LN | 1.085 ± 0.033 | 9.165 ± 0.145 | 6.662 ± 0.127 | 39.051 ± 0.330 | 1.064 ± 0.020 | 9.345 ± 0.350 | 6.762 ± 0.150 | 39.816 ± 0.597 |
| **Segmentation Mask Input** | | | | | | | | |
| GT-Mask | 1.091 ± 0.044 | 9.358 ± 0.465 | 8.345 ± 1.225 | 39.793 ± 0.917 | 1.091 ± 0.044 | 9.358 ± 0.465 | 8.345 ± 1.225 | 39.793 ± 0.917 |
| Sup-Mask | 1.093 ± 0.021 | 9.396 ± 0.285 | 8.225 ± 0.395 | 40.338 ± 0.445 | 1.122 ± 0.036 | 9.353 ± 0.268 | 9.271 ± 1.369 | 40.398 ± 0.805 |
| Self-Mask | 1.119 ± 0.037 | 9.604 ± 0.300 | 7.507 ± 0.237 | 40.639 ± 0.908 | 1.136 ± 0.024 | 9.945 ± 0.563 | 8.727 ± 1.476 | 41.788 ± 1.302 |
Table 6. Performance of the Split-context-trained RPCIN model on the Cross-Context challenge with various types of input. Aligned and Cross results on SimB and BlenB are the same for the GT-Mask-trained model because they share exactly the same data.
| Input | SimB-Split (Aligned) P1 | SimB-Split (Aligned) P2 | SimB-Border (Cross) P1 | SimB-Border (Cross) P2 | BlenB-Split (Aligned) P1 | BlenB-Split (Aligned) P2 | BlenB-Border (Cross) P1 | BlenB-Border (Cross) P2 |
|---|---|---|---|---|---|---|---|---|
| **Raw RGB Image Input** | | | | | | | | |
| RGB-BN | 0.913 ± 0.019 | 7.732 ± 0.208 | 1.589 ± 0.113 | 19.238 ± 0.550 | 0.918 ± 0.037 | 7.478 ± 0.079 | 1.760 ± 0.265 | 20.492 ± 0.660 |
| RGB-IN | 0.945 ± 0.082 | 7.641 ± 0.549 | 3.087 ± 0.181 | 28.518 ± 2.517 | 0.906 ± 0.062 | 7.368 ± 0.168 | 4.083 ± 0.602 | 30.913 ± 2.351 |
| RGB-GN | 0.889 ± 0.042 | 7.632 ± 0.386 | 2.250 ± 0.067 | 28.597 ± 1.720 | 0.931 ± 0.039 | 7.641 ± 0.165 | 2.585 ± 0.346 | 27.511 ± 1.150 |
| RGB-LN | 0.922 ± 0.042 | 7.433 ± 0.237 | 1.683 ± 0.115 | 21.756 ± 1.232 | 0.892 ± 0.027 | 7.687 ± 0.321 | 1.730 ± 0.156 | 22.934 ± 1.711 |
| **Segmentation Mask Input** | | | | | | | | |
| GT-Mask | 0.916 ± 0.005 | 7.431 ± 0.511 | 2.085 ± 0.245 | 21.713 ± 2.458 | 0.916 ± 0.005 | 7.431 ± 0.511 | 2.085 ± 0.245 | 21.713 ± 2.458 |
| Sup-Mask | 0.971 ± 0.011 | 7.372 ± 0.089 | 2.006 ± 0.241 | 20.220 ± 1.725 | 0.891 ± 0.027 | 7.317 ± 0.273 | 2.151 ± 0.401 | 21.798 ± 2.426 |
| Self-Mask | 0.911 ± 0.025 | 7.837 ± 1.334 | 2.031 ± 0.292 | 21.521 ± 1.375 | 0.914 ± 0.022 | 7.539 ± 0.227 | 2.433 ± 0.273 | 21.020 ± 0.127 |
Table 7. Performance of Border context trained RPCIN model on Cross-Context and Cross-Domain challenge with various types of input.
Source dataset: SimB-Border (left four result columns) and BlenB-Border (right four result columns).

| Input | SimB-Border (Aligned) P1 | P2 | BlenB-Split (Cross) P1 | P2 | BlenB-Border (Aligned) P1 | P2 | SimB-Split (Cross) P1 | P2 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| **Raw RGB Image Input** |  |  |  |  |  |  |  |  |
| RGB-BN | 1.131 ± 0.011 | 9.568 ± 0.121 | 12.024 ± 3.017 | 48.670 ± 2.299 | 1.084 ± 0.023 | 9.713 ± 0.554 | 262.567 ± 75.581 | 228.936 ± 41.361 |
| RGB-IN | 1.102 ± 0.045 | 9.426 ± 0.446 | 10.359 ± 0.664 | 43.412 ± 0.736 | 1.103 ± 0.041 | 9.471 ± 0.443 | 28.746 ± 11.316 | 67.175 ± 13.291 |
| RGB-GN | 1.117 ± 0.058 | 9.323 ± 0.346 | 7.072 ± 0.267 | 40.172 ± 0.731 | 1.075 ± 0.045 | 9.560 ± 0.145 | 8.217 ± 0.242 | 42.131 ± 1.377 |
| RGB-LN | 1.085 ± 0.033 | 9.165 ± 0.145 | 7.729 ± 0.671 | 41.274 ± 1.092 | 1.064 ± 0.020 | 9.345 ± 0.350 | 14.404 ± 1.762 | 53.815 ± 2.412 |
| **Segmentation Mask Input** |  |  |  |  |  |  |  |  |
| GT-Mask | 1.091 ± 0.044 | 9.358 ± 0.465 | 8.345 ± 1.225 | 39.793 ± 0.917 | 1.091 ± 0.044 | 9.358 ± 0.465 | 8.345 ± 1.225 | 39.793 ± 0.917 |
| Sup-Mask | 1.093 ± 0.021 | 9.396 ± 0.285 | 8.259 ± 0.394 | 40.414 ± 0.420 | 1.122 ± 0.036 | 9.353 ± 0.268 | 9.232 ± 1.356 | 40.303 ± 0.748 |
| Self-Mask | 1.119 ± 0.037 | 9.604 ± 0.300 | 7.712 ± 0.282 | 40.893 ± 0.951 | 1.136 ± 0.024 | 9.945 ± 0.563 | 8.252 ± 1.111 | 41.206 ± 1.113 |
Table 8. Performance of Split context trained RPCIN model on Cross-Context and Cross-Domain challenge with various types of input.
Source dataset: SimB-Split (left four result columns) and BlenB-Split (right four result columns).

| Input | SimB-Split (Aligned) P1 | P2 | BlenB-Border (Cross) P1 | P2 | BlenB-Split (Cross) P1 | P2 | SimB-Border (Cross) P1 | P2 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| **Raw RGB Image Input** |  |  |  |  |  |  |  |  |
| RGB-BN | 0.913 ± 0.019 | 7.732 ± 0.208 | 9.471 ± 3.328 | 43.641 ± 8.771 | 0.918 ± 0.037 | 7.478 ± 0.079 | 171.341 ± 79.239 | 193.574 ± 45.798 |
| RGB-IN | 0.945 ± 0.082 | 7.641 ± 0.549 | 12.034 ± 2.199 | 52.677 ± 5.350 | 0.906 ± 0.062 | 7.368 ± 0.168 | 29.570 ± 14.694 | 70.004 ± 15.201 |
| RGB-GN | 0.889 ± 0.042 | 7.632 ± 0.386 | 8.250 ± 2.052 | 47.316 ± 3.474 | 0.931 ± 0.039 | 7.641 ± 0.165 | 8.721 ± 1.168 | 44.849 ± 3.762 |
| RGB-LN | 0.922 ± 0.042 | 7.433 ± 0.237 | 8.251 ± 3.202 | 46.966 ± 10.581 | 0.892 ± 0.027 | 7.687 ± 0.321 | 12.106 ± 1.250 | 57.674 ± 2.721 |
| **Segmentation Mask Input** |  |  |  |  |  |  |  |  |
| GT-Mask | 0.916 ± 0.005 | 7.431 ± 0.511 | 2.085 ± 0.245 | 21.713 ± 2.458 | 0.916 ± 0.005 | 7.431 ± 0.511 | 2.085 ± 0.245 | 21.713 ± 2.458 |
| Sup-Mask | 0.971 ± 0.011 | 7.372 ± 0.089 | 2.007 ± 0.240 | 20.224 ± 1.723 | 0.891 ± 0.027 | 7.317 ± 0.273 | 2.151 ± 0.401 | 21.799 ± 2.426 |
| Self-Mask | 0.911 ± 0.025 | 7.837 ± 1.334 | 2.053 ± 0.291 | 21.519 ± 1.390 | 0.914 ± 0.022 | 7.539 ± 0.227 | 2.428 ± 0.274 | 21.021 ± 0.113 |
Table 9. Performance of RealB trained RPCIN model on Cross-Context and Cross-Domain challenge with various types of input.
Source dataset: RealB for all columns.

| Input | RealB (Aligned) P1 | P2 | SimB-Border (Cross) P1 | P2 | SimB-Split (Cross) P1 | P2 | BlenB-Border (Cross) P1 | P2 | BlenB-Split (Cross) P1 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| **Raw RGB Image Input** |  |  |  |  |  |  |  |  |  |
| RGB-BN | 0.421 ± 0.009 | 3.004 ± 0.091 | 28.604 ± 3.428 | 67.030 ± 4.847 | 37.838 ± 10.681 | 81.673 ± 9.417 | 49.760 ± 19.008 | 84.241 ± 19.333 | 52.235 ± 17.935 |
| RGB-IN | 0.549 ± 0.020 | 3.598 ± 0.173 | 12.492 ± 1.948 | 48.657 ± 6.188 | 15.412 ± 1.563 | 56.800 ± 4.458 | 9.121 ± 0.370 | 41.226 ± 0.710 | 15.324 ± 0.466 |
| RGB-GN | 0.423 ± 0.017 | 3.050 ± 0.304 | 9.055 ± 0.993 | 74.546 ± 58.718 | 12.413 ± 0.970 | 54.200 ± 3.539 | 8.091 ± 0.376 | 39.766 ± 1.036 | 11.413 ± 0.604 |
| RGB-LN | 0.416 ± 0.038 | 3.163 ± 0.268 | 8.669 ± 2.168 | 40.725 ± 3.583 | 12.341 ± 1.688 | 54.137 ± 1.208 | 8.366 ± 1.412 | 41.000 ± 2.947 | 12.527 ± 1.596 |
Table 10. Performance of RPCIN models trained on various datasets and evaluated on RealB.
Target dataset: RealB (Cross) for all models; column groups give the source (training) dataset.

| Input | SimB-Border P1 | P2 | SimB-Split P1 | P2 | BlenB-Border P1 | P2 | BlenB-Split P1 | P2 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| **Raw RGB Image Input** |  |  |  |  |  |  |  |  |
| RGB-BN | 23.697 ± 1.548 | 39.668 ± 6.773 | 14.651 ± 3.376 | 32.408 ± 4.780 | 32.291 ± 2.968 | 65.397 ± 16.150 | 28.887 ± 4.330 | 47.890 ± 9.987 |
| RGB-IN | 7.746 ± 0.981 | 29.736 ± 4.457 | 12.424 ± 3.594 | 42.159 ± 10.807 | 3.181 ± 0.885 | 16.514 ± 2.280 | 8.243 ± 4.815 | 32.189 ± 9.774 |
| RGB-GN | 2.510 ± 0.232 | 11.921 ± 1.321 | 4.791 ± 0.879 | 21.269 ± 2.668 | 2.166 ± 0.415 | 12.578 ± 0.332 | 4.683 ± 0.761 | 21.258 ± 2.482 |
| RGB-LN | 3.208 ± 0.947 | 14.067 ± 3.712 | 4.447 ± 0.567 | 23.891 ± 5.079 | 4.661 ± 1.928 | 19.314 ± 7.201 | 2.447 ± 0.478 | 14.576 ± 2.699 |
# A Deep Conjugate Direction Method for Iteratively Solving Linear Systems

Ayano Kaneda\* 1 Osman Akar\* 2 Jingyu Chen 2 Victoria Alicia Trevino Kala 2 David Hyde 3 Joseph Teran 4

# Abstract

We present a novel deep learning approach to approximate the solution of large, sparse, symmetric, positive-definite linear systems of equations.
Motivated by the conjugate gradients algorithm that iteratively selects search directions for minimizing the matrix norm of the approximation error, we design an approach that utilizes a deep neural network to accelerate convergence via data-driven improvement of the search direction at each iteration. Our method leverages a carefully chosen convolutional network to approximate the action of the inverse of the linear operator up to an arbitrary constant. We demonstrate the efficacy of our approach on spatially discretized Poisson equations, which arise in computational fluid dynamics applications, with millions of degrees of freedom. Unlike state-of-the-art learning approaches, our algorithm is capable of reducing the linear system residual to a given tolerance in a small number of iterations, independent of the problem size. Moreover, our method generalizes effectively to various systems beyond those encountered during training.

# 1. Introduction

The solution of large, sparse systems of linear equations is ubiquitous when partial differential equations (PDEs) are discretized to computationally simulate complex natural phenomena such as fluid flow (Losasso et al., 2006), thermodynamics (Chen et al., 2021), or mechanical fracture (Paluszny & Zimmerman, 2011).

*Equal contribution $^{1}$ Department of Applied Physics, Waseda University, Tokyo, Japan $^{2}$ Department of Mathematics, University of California, Los Angeles, USA $^{3}$ Department of Computer Science, Vanderbilt University, Nashville, USA $^{4}$ Department of Mathematics, University of California, Davis, USA. Correspondence to: Ayano Kaneda .

Proceedings of the $40^{th}$ International Conference on Machine Learning, Honolulu, Hawaii, USA. PMLR 202, 2023. Copyright 2023 by the author(s).

For linear systems arising
from these diverse applications, we use the notation

$$
\boldsymbol{A} \boldsymbol{x} = \boldsymbol{b}, \tag{1}
$$

where the dimension $n$ of the matrix $\mathbf{A} \in \mathbb{R}^{n \times n}$ and the vector $\mathbf{b} \in \mathbb{R}^n$ correlates with the spatial fidelity of the computational domain. Quality and realism of a simulation are proportional to this spatial fidelity; typical modern applications of numerical PDEs require solving linear systems with millions of unknowns. In such applications, numerical approximation of the solution of these linear systems is typically the bottleneck in overall performance; accordingly, practitioners have spent decades devising specialized algorithms for their efficient solution (Golub & Loan, 2012; Saad, 2003).

The appropriate numerical linear algebra technique depends on the nature of the problem. Direct solvers that utilize matrix factorizations (QR, Cholesky, etc. (Trefethen & Bau, 1997)) have optimal approximation error, but their computational cost is $O(n^{3})$, and they typically require dense storage, even for sparse $A$. Although Fast Fourier Transforms (Nussbaumer, 1981) can be used in limited instances (periodic boundary conditions, etc.), iterative techniques are most commonly adopted for sparse systems, which are typical for discretized PDEs. Many applications with strict performance constraints (e.g., real-time fluid simulation) utilize basic iterations (Jacobi, Gauss-Seidel, successive over relaxation (SOR), etc.) given a limited computational budget (Saad, 2003). However, large approximation errors must be tolerated since iteration counts are limited by the performance constraints. This is particularly problematic since the wide elliptic spectrum of these matrices (a condition that worsens with increased spatial fidelity/matrix dimension) leads to poor conditioning and high iteration counts.
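The growth of basic-iteration counts with resolution is easy to observe numerically. The following small NumPy sketch (our own illustration, not from the paper; `poisson1d` and `jacobi_iters` are hypothetical helper names) counts Jacobi sweeps on a 1D discrete Poisson system:

```python
import numpy as np

def poisson1d(n):
    """Tridiagonal 1D Poisson matrix: the simplest discrete elliptic operator."""
    return 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)

def jacobi_iters(n, tol=1e-3, max_iter=200_000):
    """Number of Jacobi sweeps needed to shrink the residual norm by 1/tol."""
    A = poisson1d(n)
    rng = np.random.default_rng(0)
    b = rng.standard_normal(n)
    x = np.zeros(n)
    D = np.diag(A)                      # Jacobi splitting: x <- x + D^{-1} r
    r0 = np.linalg.norm(b)
    for k in range(1, max_iter + 1):
        r = b - A @ x
        if np.linalg.norm(r) < tol * r0:
            return k
        x = x + r / D
    return max_iter

i32, i64 = jacobi_iters(32), jacobi_iters(64)
```

Doubling the resolution roughly quadruples the sweep count, mirroring the conditioning growth described above.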
Iterative techniques can achieve sub-quadratic convergence if their iteration count does not grow excessively with problem size $n$ since each iteration generally requires $O(n)$ floating point operations for sparse matrices. Discrete elliptic operators are typically symmetric positive (semi) definite, which means that the preconditioned conjugate gradients method (PCG) can be used to minimize iteration counts (Saad, 2003; Hestenes & Stiefel, 1952; Stiefel, 1952).

In the present work, we consider sparse linear systems that arise from discrete Poisson equations in incompressible flow applications (Chorin, 1967; Fedkiw et al., 2001; Bridson, 2008). These equations yield discrete elliptic operators, so PCG is the algorithm of choice for the associated linear systems; yet there is a subsequent question of which preconditioner to use. Preconditioners $P$ for PCG must simultaneously: be symmetric positive definite (SPD) (and therefore admit factorization $P = F^2$), improve the condition number of the preconditioned system $F A F y = F b$, and be computationally cheap to construct and apply; accordingly, designing specialized preconditioners for particular classes of problems is somewhat of an art. Incomplete Cholesky preconditioners (ICPCG) (Kershaw, 1978) use a sparse approximation to the Cholesky factorization and significantly reduce iteration counts; however, their inherent data dependency prevents efficient parallel implementation. Nonetheless, these are very commonly adopted for Poisson equations arising in incompressible flow (Fedkiw et al., 2001; Bridson, 2008). Multigrid (Brandt, 1977) and domain decomposition (Saad, 2003) preconditioners greatly reduce iteration counts, but they must be updated (with non-trivial cost) each time the problem changes (e.g., in computational domains with time-varying boundaries) and/or for different hardware platforms.
In general, the choice of an optimal preconditioner for discrete elliptic operators is an open area of research.

Recently, data-driven approaches that leverage deep learning techniques have shown promise for solving linear systems. Various researchers have investigated machine learning estimation of multigrid parameters (Greenfeld et al., 2019; Grebhahn et al., 2016; Luz et al., 2020). Others have developed machine learning methods to estimate preconditioners (Götz & Anzt, 2018; Stanaityte, 2020; Ichimura et al., 2020) and initial guesses for iterative methods (Luna et al., 2021; Um et al., 2020; Ackmann et al., 2020). Tompson et al. (2017) and Yang et al. (2016) develop non-iterative machine learning approximations of the inverse of discrete Poisson equations from incompressible flow.
Our approach allows for efficient training and generalization to unseen problems (new matrices $A$ and new right-hand sides $b$). We benchmark our algorithm using the ubiquitous pressure Poisson equation (discretized on regular voxelized domains) and compare against FluidNet (Tompson et al., 2017), which is the state-of-the-art learning-based method for these types of problems.

DCDM can be viewed as an improved version of Tompson et al. (2017), because unlike the non-iterative approaches of Tompson et al. (2017) and Yang et al. (2016), our method can reduce the linear system residuals arbitrarily. We showcase our approach with examples that have over 16 million degrees of freedom.

# 2. Related Work

Several papers have focused on enhancing the solution of linear systems (arising from discretized PDEs) using learning. For instance, Götz & Anzt (2018) generate sparsity patterns for block-Jacobi preconditioners using convolutional neural networks, and Stanaityte (2020) use a CNN to predict non-zero patterns for ILU-type preconditioners for the Navier-Stokes equations (though neither work designs fundamentally new preconditioners). Ichimura et al. (2020) develop a neural-network based preconditioner where the network is used to predict approximate Green's functions (which arise in the analytical solution of certain PDEs) that in turn yield an approximate inverse of the linear system. Hsieh et al. (2019) learn an iterator that solves linear systems, performing competitively with classical solvers like multigrid-preconditioned MINRES (Paige & Saunders, 1975). Luz et al. (2020) and Greenfeld et al. (2019) use machine learning to estimate algebraic multigrid (AMG) parameters. They note that AMG approaches rely most fundamentally on effectively chosen (problem-dependent) prolongation sparse matrices and that numerous methods have attempted to automatically create them from the matrix $\mathbf{A}$.
They train a graph neural network to learn (in an unsupervised fashion) a mapping from matrices $\mathbf{A}$ to prolongation operators. Grebhahn et al. (2016) note that geometric multigrid solver parameters can be difficult to choose to guarantee parallel performance on different hardware platforms. They use machine learning to create a code generator to help achieve this.

Several works consider accelerating the solution of linear systems by learning an initial guess that is close to the true solution or otherwise helpful to descent algorithms for finding the true solution. In order to solve the discretized Poisson equation, Luna et al. (2021) accelerate the convergence of GMRES (Saad & Schultz, 1986) with an initial guess that is learned in real-time (i.e., as a simulation code runs) with no prior data. Um et al. (2020) train a network (incorporating differentiable physics, based on the underlying PDEs) in order to produce high-quality initial guesses for a CG solver. In a somewhat similar vein, Ackmann et al. (2020) use a simple feedforward neural network to predict pointwise solution components, which accelerates the conjugate residual method used to solve a relatively simple shallow-water model (a more sophisticated network and loss function are needed to handle more general PDEs and larger-scale problems).

At least two papers (Ruelmann et al., 2018; Sappl et al., 2019) have sought to learn a mapping between a matrix and an associated sparse approximate inverse. In their investigation, Ruelmann et al. (2018) propose training a neural network using matrix-inverse pairs as training data. Although straightforward to implement, the cost of generating training data, let alone training the network, is prohibitive for large-scale 3D problems. Sappl et al. (2019) seek to learn a mapping between linear system matrices and sparse (banded) approximate inverses.
Their loss function is the condition number of the product of the system matrix and the approximate inverse; the minimum value of the condition number is one. Although this framework is quite simple, evaluating the condition number of a matrix is asymptotically costly $(O(n^{3}))$, and in general, the inverse of a sparse matrix can be quite dense. Accordingly, the method is not efficient or accurate enough for the large-scale 3D problems that arise in real-world engineering problems.

DCDM can also be viewed as a novel learning to optimize (L2O) method. L2O methods use learning to devise continuous optimization algorithms; for example, Andrychowicz et al. (2016) learn a gradient descent algorithm, Li & Malik (2016) provide a general reinforcement learning framework for learning optimization algorithms, Shen et al. (2019) apply L2O to minimax problems, and Liao et al. (2022) perform online meta-learning of quasi-Newton optimization methods. We refer the reader to Chen et al. (2022) for a recent review of L2O techniques.

Most relevant to the present work is FluidNet (Tompson et al., 2017). FluidNet uses a highly-tailored CNN architecture to predict the solution of a linear projection operation (specifically, for the discrete Poisson equation) given a matrix and right-hand side. The authors demonstrate fluid simulations where the linear solve is replaced by evaluating their network. Because their network is relatively lightweight and is only evaluated once per time step, their simulations run efficiently. However, their design allows the network only one opportunity to reduce the residual for the linear solve; in practice, we observe that FluidNet is able to reduce the residual by no more than about one order of magnitude. However, in computer graphics applications, at least four

![](images/b76307a75d775f1f3fe8502011aa50acdbd57d40d7b486bec3612394527d7923.jpg)
Figure 1.
(a) We illustrate a sample flow domain $\Omega \subset (0,1)^2$ (in 2D for ease of illustration) with internal boundaries (blue lines). (b) We voxelize the domain with a regular grid: white cells represent interior/fluid, and blue cells represent boundary conditions. (c) We train using the matrix $A^{\mathrm{train}}$ from a discretized domain with no interior boundary conditions. This creates a linear system with $n = (n_c + 1)^d$ unknowns, where $d$ is the spatial dimension and $n_c$ is the number of grid cells in each direction. (d) We illustrate the non-zero entries in an example matrix $A^{\Omega}$ from the voxelized and labeled (white vs. blue) grid for three example interior cells (green, magenta, and brown). Each case illustrates the non-zero entries in the row associated with the example cell. All entries of $A^{\Omega}$ in rows corresponding to boundary/blue cells are zero.

![](images/96388fd211f0932ac6a22d732101f757f8ab402876c90b9abcd188fdbadf3ee1.jpg)

![](images/14757ec19903ff1a3dc5cd68d132a28d306c76c993fb0dd32c26cfffef02dc5e.jpg)

![](images/3c0360877f3e4d2536ca725c533634e3887e890d42adc0389178a2e81aac9769.jpg)

orders of magnitude in residual reduction are usually required for visual fidelity, while in scientific and engineering applications, practitioners prefer solutions that reduce the residual by eight or more orders of magnitude (i.e., to within machine precision). Accordingly, FluidNet's lack of convergence stands in stark contrast to classical, convergent methods like CG. Our method resolves this gap.

# 3. Motivation: Incompressible Flow

We demonstrate the efficacy of our approach with the linear systems that arise in incompressible flow applications. Specifically, we use our algorithm to solve the Poisson equation discretized on a regular grid, following the pressure projection equations that arise in Chorin's splitting technique (Chorin, 1967) for the inviscid, incompressible Euler equations.
These equations are

$$
\rho \left(\frac{\partial \boldsymbol{u}}{\partial t} + \frac{\partial \boldsymbol{u}}{\partial \boldsymbol{x}} \boldsymbol{u}\right) + \nabla p = \boldsymbol{f}^{ext}, \quad \nabla \cdot \boldsymbol{u} = 0 \tag{2}
$$

where $\pmb{u}$ is fluid velocity, $p$ is pressure, $\rho$ is density, and $\pmb{f}^{ext}$ accounts for external forces like gravity. The equations are assumed at all positions $\pmb{x}$ in the spatial fluid flow domain $\Omega$ and for time $t > 0$. The first equation in Equation 2 enforces conservation of momentum in the absence of viscosity, and the second enforces incompressibility and conservation of mass. These equations are subject to initial conditions $\rho (\pmb {x},0) = \rho^0$ and $\pmb {u}(\pmb {x},0) = \pmb{u}^{0}(\pmb {x})$, as well as boundary conditions $\pmb {u}(\pmb {x},t)\cdot \pmb {n}(\pmb {x}) = u^{\partial \Omega}(\pmb {x},t)$ on the boundary of the domain $\pmb {x}\in \partial \Omega$ (where $\pmb{n}$ is the unit outward pointing normal at position $\pmb{x}$ on the boundary).

Equation 2 is discretized in both time and space. Temporally, we split the advection $\frac{\partial\boldsymbol{u}}{\partial t} +\frac{\partial\boldsymbol{u}}{\partial\boldsymbol{x}}\boldsymbol {u} = 0$ and body forces
Using finite differences in time, we can summarize this as + +$$ +\begin{array}{l} \rho^ {0} \left(\frac {\boldsymbol {u} ^ {*} - \boldsymbol {u} ^ {n}}{\Delta t} + \frac {\partial \boldsymbol {u} ^ {n}}{\partial \boldsymbol {x}} \boldsymbol {u} ^ {n}\right) = \boldsymbol {f} ^ {e x t} (3) \\ - \nabla \cdot \frac {1}{\rho^ {0}} \nabla p ^ {n + 1} = - \nabla \cdot \boldsymbol {u} ^ {*} (4) \\ - \frac {1}{\rho^ {0}} \nabla p ^ {n + 1} \cdot \boldsymbol {n} = \frac {1}{\Delta t} \left(u ^ {\partial \Omega} - \boldsymbol {u} ^ {*} \cdot \boldsymbol {n}\right). (5) \\ \end{array} +$$ + +For the spatial discretization, we use a regular marker-and-cell (MAC) grid (Harlow & Welch, 1965) with cubic voxels whereby velocity components are stored on the face of voxel cells, and scalar quantities (e.g., pressure $p$ or density $\rho$ ) are stored at voxel centers. We use backward semi-Lagrangian advection (Fedkiw et al., 2001; Gagniere et al., 2020) for Equation 3. All spatial partial derivatives are approximated using finite differences. Equations 4 and 5 describe the pressure Poisson equation with Neumann conditions on the boundary of the flow domain. We discretize the left-hand side of Equation 4 using a standard 7-point finite difference stencil. The right-hand side is discretized using the MAC grid discrete divergence finite difference stencils as well as contributions from the boundary condition terms in Equation 5. We refer the reader to Bridson (2008) for more in-depth implementation details. Equation 5 is discretized by modifying the Poisson stencil to enforce Neumann boundary conditions. We do this using a simple labeling of the voxels in the domain. For simplicity, we assume $\Omega \subset (0,1)^3$ is a subset of the unit cube, potentially with internal boundaries. We label cells in the domain as either liquid or boundary. 
This simple classification is enough to define the discrete Poisson operators (with appropriate Neumann boundary conditions at domain boundaries) that we focus on in the present work; we illustrate the details in Figure 1.

We use the following notation to denote the discrete Poisson equations associated with Equations 4-5:

$$
\boldsymbol{A}^{\Omega} \boldsymbol{x} = \boldsymbol{b}^{\nabla \cdot \boldsymbol{u}^{*}} + \boldsymbol{b}^{u^{\partial \Omega}}, \tag{6}
$$

where $A^{\Omega}$ is the discrete Poisson matrix associated with the voxelized domain, $x$ is the vector of unknown pressures, and $b^{\nabla \cdot u^{*}}$ and $b^{u^{\partial \Omega}}$ are the right-hand side terms from Equations 4 and 5, respectively. The matrix $A^{\Omega}$ in Equation 6 is large, sparse, and SPD. The computational complexity of solving Equation 6 strongly depends on the data (e.g., internal boundary conditions in the flow domain; see Figure 1).

We define a special case of the matrix involved in this discretization to be the Poisson matrix $\mathbf{A}^{\mathrm{train}}$ associated with
Although there is a clear limitation that we only train our network to solve Poisson problems, this is a major advantage over state-of-the-art methods like FluidNet (Tompson et al., 2017), which require highly diverse training data (matrices from many fluid simulations, all with different types of obstacles and boundary conditions) in order to train a network with sufficient generalization; we only ever leverage a single training matrix (i.e., a single set of boundary conditions) $A^{\mathrm{train}}$ . + +# 4. Deep Conjugate Direction Method + +We present our method for the deep learning acceleration of iterative approximations to the solution of linear systems of the form seen in Equation 6. We first briefly discuss relevant details of search direction methods, particularly the choice of line search directions1. We then present a deep learning technique for improving the quality of these search directions that ultimately reduces iteration counts required to achieve satisfactory residual reduction. Lastly, we outline the training procedures for our deep CNN. + +Our approach iteratively improves approximations to the solution $\pmb{x}$ of Equation 6. We build on the method of CG, which requires the matrix $A^{\Omega}$ in Equation 6 to be SPD. SPD matrices $A^{\Omega}$ give rise to the matrix norm $\| y\|_{A^{\Omega}} = \sqrt{y^{T}A^{\Omega}y}$ . CG can be derived in terms of iterative line search improvement based on optimality in this norm. 
That is, an iterate $x_{k - 1}\approx x$ is updated along search direction $d_{k}$ by a step size $\alpha_{k}$ that is chosen to minimize the matrix norm of the error between the updated iterate and $\pmb{x}$:

$$
\alpha_{k} = \arg \min_{\alpha} \frac{1}{2} \| \boldsymbol{x} - (\boldsymbol{x}_{k-1} + \alpha \boldsymbol{d}_{k}) \|_{\boldsymbol{A}^{\Omega}}^{2} = \frac{\boldsymbol{r}_{k-1}^{T} \boldsymbol{d}_{k}}{\boldsymbol{d}_{k}^{T} \boldsymbol{A}^{\Omega} \boldsymbol{d}_{k}}, \tag{7}
$$

where $\pmb{r}_{k - 1} = \pmb {b} - \pmb{A}^{\Omega}\pmb{x}_{k - 1}$ is the $(k - 1)^{\mathrm{th}}$ residual (see Appendix A.2 for details). Different search directions $\pmb{d}_k$ result in different algorithms. A natural choice is the negative gradient of the matrix norm of the error (evaluated at the current iterate), $\pmb{d}_k = -\frac{1}{2}\nabla_{\pmb{x}_{k-1}} \| \pmb{x} - \pmb{x}_{k - 1}\|_{\pmb{A}^\Omega}^2 = \pmb{r}_{k - 1}$, since this will point in the direction of steepest decrease. This is the gradient descent method (GD). Unfortunately, this approach requires many iterations in practice. CG modifies GD into a
+ +While the residual is a natural choice for generating $\mathbf{A}$ -orthogonal search directions (since it points in the direction of the steepest local decrease), it is not the optimal search direction. Optimality is achieved when $d_k$ is parallel to $(\mathbf{A}^\Omega)^{-1}\mathbf{r}_{k-1}$ , whereby $\mathbf{x}_k$ will be equal to $\mathbf{x}$ since $\alpha_k$ (computed from Equation 7) will step directly to the solution. We can see this by considering the residual and its relation to the search direction: + +$$ +\begin{array}{l} \boldsymbol {r} _ {k} = \boldsymbol {b} - \boldsymbol {A} ^ {\Omega} \boldsymbol {x} _ {k} = \boldsymbol {b} - \boldsymbol {A} ^ {\Omega} \boldsymbol {x} _ {k - 1} - \alpha_ {k} \boldsymbol {A} ^ {\Omega} \boldsymbol {d} _ {k} \\ = \boldsymbol {r} _ {k - 1} - \alpha_ {k} \boldsymbol {A} ^ {\Omega} \boldsymbol {d} _ {k}. \\ \end{array} +$$ + +In light of this, we use deep learning to create an approximation $f(c, r)$ to $(A^{\Omega})^{-1}r$ , where $c$ denotes the network weights and biases. This is analogous to using a preconditioner in PCG; however, our network is not SPD (nor even a linear function). We simply use this data-driven approach as our means of generating better search directions $d_k$ . Furthermore, we only need to approximate a vector parallel to $(A^{\Omega})^{-1}r$ since the step size $\alpha_k$ will account for any scaling in practice. In other words, $f(c, r) \approx s_r(A^{\Omega})^{-1}r$ , where the scalar $s_r$ is not defined globally; it only depends on $r$ and the model does not learn it. 
Lastly, as with CG, we enforce $\pmb{A}$-orthogonality, yielding search directions

$$
\pmb{d}_{k} = \pmb{f}(\pmb{c}, \pmb{r}_{k-1}) - \sum_{i=1}^{k-1} h_{ik} \pmb{d}_{i}, \qquad h_{ik} = \frac{\pmb{f}(\pmb{c}, \pmb{r}_{k-1})^{T} \pmb{A}^{\Omega} \pmb{d}_{i}}{\pmb{d}_{i}^{T} \pmb{A}^{\Omega} \pmb{d}_{i}}.
$$

We summarize our approach in Algorithm 1. Note that we introduce the variable $i_{\mathrm{start}}$. To guarantee $\pmb{A}$-orthogonality between all search directions, we must have $i_{\mathrm{start}} = 1$. However, this requires storing all prior search directions, which can be costly. We found that using $i_{\mathrm{start}} = k - 2$ worked nearly as well as $i_{\mathrm{start}} = 1$ in practice (in terms of our ability to iteratively reduce the residual of the system). We demonstrate this in Figure 4c.

# Algorithm 1 DCDM

1: $\pmb{r}_0 = \pmb{b} - \pmb{A}^\Omega \pmb{x}_0$
2: $k = 1$
3: while $\| \pmb{r}_{k - 1}\| \geq \epsilon$ do
4: $\pmb{d}_k = \pmb{f}(\pmb{c}, \frac{\pmb{r}_{k-1}}{\|\pmb{r}_{k-1}\|})$
5: for $i_{\mathrm{start}} \leq i < k$ do
6: $h_{ik} = \frac{\pmb{d}_k^T\pmb{A}^\Omega\pmb{d}_i}{\pmb{d}_i^T\pmb{A}^\Omega\pmb{d}_i}$
7: $\pmb{d}_{k} = \pmb{d}_{k} - h_{ik}\pmb{d}_{i}$
8: end for
9: $\alpha_{k} = \frac{\pmb{r}_{k - 1}^{T}\pmb{d}_{k}}{\pmb{d}_{k}^{T}\pmb{A}^{\Omega}\pmb{d}_{k}}$
10: $\pmb{x}_k = \pmb{x}_{k-1} + \alpha_k \pmb{d}_k$
11: $\pmb{r}_k = \pmb{b} - \pmb{A}^\Omega \pmb{x}_k$
12: $k = k + 1$
13: end while

# 5. Model Architecture, Datasets, and Training

Efficient performance of our method requires effective training of our deep convolutional network for weights and biases $\pmb{c}$ such that $\pmb{f}(\pmb{c}, \pmb{r}) \approx s_r (\pmb{A}^\Omega)^{-1} \pmb{r}$ (for an arbitrary scalar $s_r$).
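The iteration in Algorithm 1 can be sketched in NumPy as follows; the function `f` stands in for the trained network $\pmb{f}(\pmb{c},\cdot)$, and with `f` as the identity the iteration reduces to a (truncated) conjugate-direction method. The names and the test matrix are illustrative.

```python
import numpy as np

def dcdm(A, b, f, x0, eps=1e-4, history=2, max_iter=100):
    """Sketch of Algorithm 1; `f(r)` stands in for the trained network.

    history=2 reproduces the paper's i_start = k - 2 choice (partial
    A-orthogonalization against the last two directions); a larger value
    orthogonalizes against more of the stored history.
    """
    x = x0.astype(float).copy()
    r = b - A @ x
    dirs = []                                  # previous search directions
    while np.linalg.norm(r) >= eps and len(dirs) < max_iter:
        d = f(r / np.linalg.norm(r))           # normalized residual as input
        for di in dirs[-history:]:             # lines 5-8: A-orthogonalize
            d = d - (d @ (A @ di)) / (di @ (A @ di)) * di
        alpha = (r @ d) / (d @ (A @ d))        # line 9: optimal step size
        x += alpha * d
        r = b - A @ x
        dirs.append(d)
    return x

# With f = identity, the method behaves like CG on this SPD test system.
n = 16
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
b = np.ones(n)
x = dcdm(A, b, lambda r: r, np.zeros(n), eps=1e-6, max_iter=500)
assert np.linalg.norm(b - A @ x) < 1e-6
```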
We design a model architecture, loss function, and self-supervised training approach to achieve this. Our approach has modest training requirements and allows for effective residual reduction while generalizing well to problems not seen in the training data.

# 5.1. Loss Function and Self-supervised Learning

Although we generalize to arbitrary matrices $\pmb{A}^{\Omega}$ from Equation 6 that correspond to domains $\Omega \subset (0,1)^3$ with internal boundaries (see Figure 1), we train using just the matrix $\pmb{A}^{\mathrm{train}}$ from the full cube domain $(0,1)^3$, i.e., the unit cube discretized on regular intervals (see, e.g., Figure 1(c)).

In contrast, other similar approaches (Tompson et al., 2017; Yang et al., 2016) train using matrices $\pmb{A}^{\Omega}$ and right-hand sides $\pmb{b}^{\nabla \cdot \pmb{u}^{*}} + \pmb{b}^{u^{\partial \Omega}}$ that arise from flow in many domains with internal boundaries. We train our network by minimizing the $L^2$ difference $\| \pmb{r} - \alpha \pmb{A}^{\mathrm{train}} \pmb{f}(\pmb{c}, \pmb{r}) \|_2$, where $\alpha = \frac{\pmb{r}^T \pmb{f}(\pmb{c}, \pmb{r})}{\pmb{f}(\pmb{c}, \pmb{r})^T \pmb{A}^{\mathrm{train}} \pmb{f}(\pmb{c}, \pmb{r})}$ from Equation 7. This choice of $\alpha$ accounts for the unknown scaling in the approximation of $\pmb{f}(\pmb{c}, \pmb{r})$ to $(\pmb{A}^{\mathrm{train}})^{-1} \pmb{r}$.
We use a self-supervised approach and train the model by minimizing

$$
\operatorname{Loss}(\pmb{f}, \pmb{c}, \mathcal{D}) = \frac{1}{|\mathcal{D}|} \sum_{\pmb{r} \in \mathcal{D}} \left\| \pmb{r} - \frac{\pmb{r}^{T} \pmb{f}(\pmb{c}, \pmb{r})}{\pmb{f}(\pmb{c}, \pmb{r})^{T} \pmb{A}^{\mathrm{train}} \pmb{f}(\pmb{c}, \pmb{r})} \pmb{A}^{\mathrm{train}} \pmb{f}(\pmb{c}, \pmb{r}) \right\|_{2}
$$

for a given dataset $\mathcal{D}$ consisting of training vectors $\pmb{b}^i$. In Algorithm 1, the normalized residuals $\frac{\pmb{r}_k}{\|\pmb{r}_k\|}$ are passed as inputs to the model. Unlike in, e.g., FluidNet (Tompson et al., 2017), only the first residual $\frac{\pmb{r}_0}{\|\pmb{r}_0\|}$ is directly related to the problem-dependent original right-hand side $\pmb{b}$. Hence we consider a broader range of training vectors than those expected in a given problem of interest, e.g., incompressible flows. We observe that the residuals $\pmb{r}_k$ in Algorithm 1 are generally skewed toward the lower end of the spectrum of the matrix $\pmb{A}^\Omega$. Since $\pmb{A}^\Omega$ is a discretized elliptic operator, modes at the lower end of the spectrum correspond to lower-frequency spatial oscillations. We create our training vectors $\pmb{b}^i \in \mathcal{D}$ using $m \ll n$ approximate eigenvectors of the training matrix $\pmb{A}^{\mathrm{train}}$. We use the Rayleigh-Ritz method to create approximate eigenvectors $\pmb{q}_i$, $0 \leq i < m$. This approach allows us to effectively approximate the full spectrum of $\pmb{A}^{\mathrm{train}}$ without computing the full eigendecomposition, which can be expensive ($O(n^{3})$) at high resolution. Note that generating the dataset has $O(m^{2}N)$ complexity, $N$ being the resolution (e.g., $64^{3}$ or $128^{3}$), due to reorthogonalization of Lanczos vectors (see Appendix A.4).
Hence we tried the values $m = 1\mathrm{K}$, 5K, 10K, and 20K, and chose the smallest value ($m = 10{,}000$) that gave a viable model after training.

The Rayleigh-Ritz vectors are orthonormal and satisfy $\pmb{Q}_{m}^{T}\pmb{A}^{\mathrm{train}}\pmb{Q}_{m} = \Lambda_{m}$, where $\Lambda_{m}$ is a diagonal matrix with nondecreasing diagonal entries $\lambda_{i}$ referred to as Ritz values (approximate eigenvalues) and $\pmb{Q}_{m} = [\pmb{q}_{0}, \pmb{q}_{1}, \ldots, \pmb{q}_{m-1}] \in \mathbb{R}^{n \times m}$. We pick $\pmb{b}^{i} = \frac{\sum_{j=0}^{m-1} c_{j}^{i} \pmb{q}_{j}}{\left\| \sum_{j=0}^{m-1} c_{j}^{i} \pmb{q}_{j} \right\|}$, where the coefficients $c_{j}^{i}$ are drawn from a (scaled) standard normal distribution:

$$
c_{j}^{i} = \left\{ \begin{array}{ll} 9 \cdot \mathcal{N}(0, 1) & \text{if } \tilde{j} \leq j \leq \frac{m}{2} + \theta \\ \mathcal{N}(0, 1) & \text{otherwise,} \end{array} \right.
$$

where $\theta$ is a small number (we used $\theta = 500$), and $\tilde{j}$ is the first index such that $\lambda_{\tilde{j}} > 0$. This choice creates $90\%$ of $\pmb{b}^{i}$ from the lower end of the spectrum, with the remaining $10\%$ from the higher end. The Riemann-Lebesgue Lemma states that the Fourier spectrum of a continuous function decays at infinity, so this specific choice of $\pmb{b}^{i}$'s is reasonable for the training set. In practice, we also observed that the right-hand sides of the pressure system that arose in flow problems (in the empty domain) tended to lie at the lower end of the spectrum. Notably, even though this dataset only uses Rayleigh-Ritz vectors from the training matrix $\pmb{A}^{\mathrm{train}}$, our network generalizes effectively to flows in irregular domains, e.g., smoke flowing past a rotating box and flow past a bunny (see Figure 3).
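The sampling scheme can be sketched as follows; the helper name `make_training_vector`, the tiny $\theta$, and the use of exact eigenvectors in the demo are illustrative simplifications of the paper's setup (which uses $m = 10{,}000$ Rayleigh-Ritz vectors and $\theta = 500$).

```python
import numpy as np

def make_training_vector(Q, lam, theta=2, rng=None):
    """Sample one normalized b^i from (approximate) eigenvectors.

    Q: n x m matrix of Ritz vectors; lam: nondecreasing Ritz values.
    Coefficients on the lower ~half of the spectrum are scaled by 9, so
    roughly 90% of b^i's magnitude comes from low-frequency modes.
    """
    if rng is None:
        rng = np.random.default_rng()
    m = Q.shape[1]
    j_tilde = int(np.argmax(lam > 0))        # first index with lambda_j > 0
    c = rng.standard_normal(m)
    c[j_tilde : m // 2 + theta + 1] *= 9.0   # boost lower-end coefficients
    b = Q @ c
    return b / np.linalg.norm(b)

# Tiny demo with exact eigenvectors of a 1D Poisson matrix.
n = 12
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
lam, Q = np.linalg.eigh(A)
b = make_training_vector(Q, lam, theta=2, rng=np.random.default_rng(2))
assert abs(np.linalg.norm(b) - 1.0) < 1e-12
```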
We generate the Rayleigh-Ritz vectors by first tridiagonalizing the training matrix $\pmb{A}^{\mathrm{train}}$ with $m$ Lanczos iterations (Lanczos, 1950) to form $T^{m} = \pmb{Q}_{m}^{L^{T}} \pmb{A}^{\mathrm{train}} \pmb{Q}_{m}^{L} \in \mathbb{R}^{m \times m}$. We then diagonalize $T^{m} = \hat{\pmb{Q}}^{T} \Lambda_{m} \hat{\pmb{Q}}$. While asymptotically costly, we note that this diagonalization is performed on the comparably small $m \times m$ matrix $T^{m}$ (rather than on $\pmb{A}^{\mathrm{train}} \in \mathbb{R}^{n \times n}$). This yields the Rayleigh-Ritz vectors as the columns of $\pmb{Q}_{m} = \pmb{Q}_{m}^{L} \hat{\pmb{Q}}$. The Lanczos vectors are the columns of the matrix $\pmb{Q}_{m}^{L}$ and satisfy a three-term recurrence whereby the next Lanczos vector can be iteratively computed from the previous two as

$$
\beta_{j} \pmb{q}_{j+1}^{L} = \pmb{A}^{\mathrm{train}} \pmb{q}_{j}^{L} - \beta_{j-1} \pmb{q}_{j-1}^{L} - \alpha_{j} \pmb{q}_{j}^{L},
$$

where $\alpha_{j}$ and $\beta_{j}$ are the diagonal and subdiagonal entries of $T^{m}$. $\beta_{j}$ is computed so that $\pmb{q}_{j+1}^{L}$ is a unit vector, and $\alpha_{j+1} = \pmb{q}_{j+1}^{L^{T}}\pmb{A}^{\mathrm{train}}\pmb{q}_{j+1}^{L}$. We initialize the iteration with a random $\pmb{q}_0^L\in \operatorname{span}(\pmb{A}^{\mathrm{train}})$. The Lanczos algorithm can be viewed as a modified Gram-Schmidt technique for creating an orthonormal basis of the Krylov space associated with $\pmb{q}_0^L$ and $\pmb{A}^{\mathrm{train}}$, and it therefore suffers from rounding error sensitivities that manifest as loss of orthonormality with vectors that do not appear in the recurrence. We found the simple strategy described in Paige (1971), orthogonalizing each iterate with respect to all previous Lanczos vectors, to be sufficient for our training purposes.

![](images/36b177e466d1a88213257519633a6f35138135e74a150a6103eb14a18430c6d4.jpg)
Figure 2. Architecture for training with $A^{\mathrm{train}}$ on a $128^{3}$ grid.
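The Lanczos-plus-diagonalization pipeline can be sketched in NumPy as below; full reorthogonalization against all previous Lanczos vectors plays the role of the Paige (1971) strategy described above. The function name is illustrative, and dense matrices are used only for the demo.

```python
import numpy as np

def lanczos_ritz(A, m, rng=None):
    """Rayleigh-Ritz vectors via m Lanczos steps with full reorthogonalization.

    Returns (Q_m, ritz_values) with Q_m^T A Q_m approximately diagonal.
    """
    n = A.shape[0]
    if rng is None:
        rng = np.random.default_rng()
    q = rng.standard_normal(n)
    QL = [q / np.linalg.norm(q)]             # Lanczos vectors q_j^L
    alphas, betas = [], []
    for j in range(m - 1):
        w = A @ QL[j]
        alphas.append(QL[j] @ w)             # diagonal entry alpha_j
        w = w - alphas[-1] * QL[j]
        if j > 0:
            w = w - betas[-1] * QL[j - 1]
        for qi in QL:                        # full reorthogonalization
            w = w - (qi @ w) * qi
        betas.append(np.linalg.norm(w))      # subdiagonal entry beta_j
        QL.append(w / betas[-1])
    alphas.append(QL[-1] @ (A @ QL[-1]))
    T = np.diag(alphas) + np.diag(betas, 1) + np.diag(betas, -1)
    ritz_vals, Qhat = np.linalg.eigh(T)      # diagonalize the small T^m
    Qm = np.column_stack(QL) @ Qhat          # Rayleigh-Ritz vectors
    return Qm, ritz_vals

n = 30
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
Qm, vals = lanczos_ritz(A, 10, rng=np.random.default_rng(3))
# Columns are orthonormal and Q_m^T A Q_m is (approximately) diag(Ritz values)
assert np.allclose(Qm.T @ Qm, np.eye(10), atol=1e-8)
assert np.allclose(Qm.T @ A @ Qm, np.diag(vals), atol=1e-8)
```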
Dataset creation takes 5-7 hours for a $64^{3}$ computational grid, and 2-2.5 days for a $128^{3}$ grid (see Appendix A.4 for more detail).

We reiterate that since DCDM generalizes to various Poisson systems (see Sections 5.2 and 6) despite only using data corresponding to an empty fluid domain, practitioners do not need to generate new data in order to apply our method. Moreover, we show in the examples that it is possible to use trained model weights from a lower-resolution grid for higher-resolution problems, so practitioners may not need to generate new data even when running problems at resolutions other than those we consider.

# 5.2. Model Architecture

The internal structure of our CNN architecture for a $128^{3}$ grid is shown in Figure 2. It consists of a series of convolutional layers with residual connections. The upper left of Figure 2 ($K$ Residual Blocks) shows our use of multiple blocks of residually connected layers. Notably, within each block, the first layer directly affects the last layer through an addition operator. All non-input or output convolutions use a $3 \times 3 \times 3$ filter, and all layers consist of 16 feature maps. In the middle of the first level, a layer is downsampled (via the average pooling operator with $2 \times 2 \times 2$ pool size) and another set of convolutional layers is applied with residual connection blocks. The last layer in the second level is upscaled and added to the layer that was downsampled. The last layer in the network is dense with an identity activation function. The activation functions in all convolutional layers are ReLU, except for the first convolution, which uses a linear activation function.

Initially we tried a simple deep feedforward convolutional network with residual connections (motivated by He et al. (2016)). Although such a simple model works well for DCDM, it requires a large number of layers, which results in higher training and inference times.
We found that creating parallel layers of CNNs with downsampling reduced the number of layers required. In summary, our goal was to first identify the simplest network architecture that provided adequate accuracy for our target problems; subsequently, we sought architectural changes that minimize training and inference time. We are interested in a more thorough investigation of potential network architectures, filter sizes, etc., to better characterize the tradeoff curves between accuracy and efficiency; as a first step in this direction, we include a brief ablation study in Appendix A.4.

Differing resolutions use differing numbers of convolutions, but the fundamental structure remains the same. More precisely, the number of residual connections is changed for different resolutions. For example, a $64^{3}$ grid uses one residual block on the left, two on the right on the upper level, and three on the lower level. Furthermore, the weights trained on a lower-resolution grid can be used effectively at higher resolutions. Figure 4d shows convergence results for a $256^{3}$ grid, using a model trained for a $64^{3}$ grid and a $128^{3}$ grid. The model that we use for $256^{3}$ grids in our final examples was trained on a $128^{3}$ grid; however, as shown in the figure, even training with a $64^{3}$ grid allows for efficient residual reduction. Table 1 shows results for three different resolutions, where DCDM uses $64^{3}$- and $128^{3}$-trained models. Since the same weights trained over a $64^{d}$ and/or $128^{d}$ domain can be reused, the number of parameters does not depend on the spatial fidelity; it depends on the spatial dimension $d$ only through the kernel size.

# 5.3. Training

Using the procedure explained in Section 5.1, we create the training dataset $\mathcal{D} \subset \operatorname{span}(\pmb{A}^{\mathrm{train}}) \cap S^{n-1}$ of size 20,000 generated from 10,000 Rayleigh-Ritz vectors.
$S^{n-1}$ is the unit sphere, i.e., all training vectors are scaled to have unit length. We train our model with TensorFlow (Abadi et al., 2015) on a single NVIDIA RTX A6000 GPU with 48GB memory. Training is done with standard deep learning techniques, more precisely back-propagation and the ADAM optimizer (Kingma & Ba, 2015) (with starting learning rate 0.0001). Training takes approximately 10 minutes and 1 hour per epoch for grid resolutions $64^{3}$ and $128^{3}$, respectively. We trained our model for 50 epochs; however, the model from the thirty-first epoch was optimal for $64^{3}$, and the model from the third epoch was optimal for $128^{3}$.

![](images/d7c7587f0d2997d3fc23263b4ac70db8e6daa8d81e134aada280a2e72719e3b3.jpg)
Figure 3. DCDM for simulating a variety of incompressible flow examples. Left: smoke plume at $t = 6.67, 13.33, 20$ seconds. Middle: smoke passing a bunny at $t = 5, 10, 15$ seconds. Right: smoke passing a spinning box (time-dependent Neumann boundary conditions) at $t = 2.67, 6, 9.33$ seconds.

![](images/6faffd5ec05a2c0c7fbb627444f6335205066c9da16bd792dbd1dfb8be08f813.jpg)

![](images/0eec32b731496d10b3252be481faea09f7145523b4578375b48055a32a6f481b.jpg)

# 6. Results and Analysis

We demonstrate DCDM on three increasingly difficult examples and provide numerical evidence for the efficient convergence of our method. All examples were run on a workstation with dual stock AMD EPYC 75F3 processors and an NVIDIA RTX A6000 GPU with 48GB memory. The grid resolutions we evaluate are the same as those used in, e.g., Tompson et al. (2017) and are common for graphics papers.

Figure 3 showcases DCDM for incompressible smoke simulations. In each simulation, inlet boundary conditions are set in a circular portion of the bottom of the cubic domain, whereby smoke flows around potential obstacles and fills the domain.
We show a smoke plume (no obstacles), flow past a complex static geometry (the Stanford bunny), and flow past a dynamic geometry (a rotating cube). Visually plausible and highly detailed results are achieved for each simulation (see the supplementary material for larger videos). The plume example uses a computational grid with resolution $128^{3}$, while the other two use grids with resolution $256^{3}$ (representing over 16 million unknowns). For each linear solve, DCDM was run until the residual was reduced by four orders of magnitude. In our experience, production-grade solvers (e.g., 3D smoke simulators for movie visual effects) use resolutions of $128^{3}$ or more, and as computing resources improve we are seeing more problems solved at huge scales like $512^{3}$ and above, where a learning-enhanced method like DCDM will have a more dramatic impact.

![](images/771b1343d770ad4f553f1d99e4e507b523157375b9ca4a8dbfb9fc903ecc2804.jpg)
Figure 4. Convergence data for the bunny example (see also Table 1). (a) Mean and std. dev. (over all 400 frames in the simulation) of residual reduction during linear solves (with $128^{3}$ and $256^{3}$ grids) using FluidNet (FN) and DCDM. (b) Residual plots with ICPCG, CG, FN, DCDM, and Deflated CG at frame 150. Dashed and solid lines represent results for $128^{3}$ and $256^{3}$, respectively. (c) Decrease in residuals with varying degrees of $\mathbf{A}$-orthogonalization ($i_{s} = i_{\mathrm{start}}$) in the $128^{3}$ case. (d) Reduction in residuals when the network is trained with a $64^{3}$ or $128^{3}$ grid for the $256^{3}$ grid simulation shown in Figure 3 Middle.

For the bunny example, Figures 4a-b demonstrate how residuals decrease over the course of a linear solve, comparing DCDM with other methods. Figure 4a shows the mean results (with standard deviations) over the course of 400 simulation frames, while in Figure 4b, we illustrate behavior on a particular frame (frame 150).
For FluidNet, we use the optimized implementation provided by fluidnetsc22 (2022). This implementation includes pre-trained models that we use without modification. In both subfigures, it is evident that the FluidNet residual never changes, since the method is not iterative; FluidNet reduces the initial residual by no more than one order of magnitude. On the other hand, with DCDM, we can continually reduce the residual (e.g., by four orders of magnitude) as we apply more iterations of our method, just as with classical CG. In Figure 4b, we also visualize the convergence of three other classical methods: CG, Deflated CG (Saad et al., 2000), and incomplete Cholesky preconditioned CG (ICPCG); clearly, DCDM reduces the residual in the fewest iterations (approximately one order of magnitude fewer than ICPCG). Since FluidNet is not iterative and lacks a notion of residual reduction, we treat $r_0$ for FluidNet as though an initial guess of zero is used (as is done in our solver).

To clarify these results, Table 1 reports convergence statistics for DCDM compared to standard iterative techniques, namely CG, Deflated CG, and ICPCG. For $64^{3}$, $128^{3}$, and $256^{3}$ grids with the bunny example, we measure the time $t_{r}$ and the number of iterations $n_{r}$ required to reduce the initial residual on a particular time step of the simulation by four orders of magnitude. DCDM achieves the desired results in by far the fewest iterations at all resolutions. At $256^{3}$, DCDM performs approximately 6 times faster than CG, suggesting a potentially even wider performance advantage at higher resolutions. Inference is the dominant cost in an iteration of DCDM; the other linear algebra computations in an iteration of DCDM are comparable to those in CG.
Despite the increased time per iteration, DCDM reduces the number of required iterations so drastically that it materially outperforms classical methods like CG. Although ICPCG successfully reduces the number of iterations (Figure 4b), we found its runtime to scale prohibitively with grid resolution. We used SciPy's (Virtanen et al., 2020) sparse.linalg.spsolve_triangular function for forward and back substitution in our ICPCG implementation, and we also used a precomputed $L$ that is not accounted for in the table results (though computing it took no more than 4 seconds at the highest resolution); Appendix A.3 includes further details on ICPCG.

Notably, even though Deflated CG and DCDM are both based on approximate Ritz vectors, DCDM performs far better, indicating the value of using a neural network.

We performed three additional sets of tests. First, we tried low resolutions, $16^{3}$ and $32^{3}$, which are such small problems that we would expect CG to win due to the relatively high overhead of evaluating a neural network: indeed, DCDM and CG take 0.377 s/15 iter and 0.008 s/48 iter at $16^{3}$, respectively, and 0.717 s/16 iter and 0.063 s/53 iter at $32^{3}$. Note that we used the model (and parameters) tailored for $64^{3}$ resolution to obtain these results; a lighter model, trained specifically for $16^{3}$ and $32^{3}$ resolutions, would give better timings, though likely still behind CG. Second, we
| Method | $64^3$ Grid: $t_r$ / $n_r$ / $tp_r$ | $128^3$ Grid: $t_r$ / $n_r$ / $tp_r$ | $256^3$ Grid: $t_r$ / $n_r$ / $tp_r$ |
| --- | --- | --- | --- |
| DCDM-64 | 2.71 s / 16 / 0.169 s | 22 s / 27 / 0.814 s | 261 s / 58 / 4.50 s |
| DCDM-128 | 5.37 s / 19 / 0.283 s | 26 s / 25 / 1.083 s | 267 s / 44 / 6.07 s |
| CG | 1.77 s / 168 / 0.0105 s | 26 s / 465 / 0.0559 s | 1548 s / 1046 / 1.479 s |
| Deflated CG | 771.6 s / 117 / 6.594 s | 3700 s / 277 / 13.357 s | 21030 s / 489 / 43.00 s |
| ICPCG | 164 s / 43 / 3.813 s | 2877 s / 94 / 30.60 s | 54714 s / 2182 / 50.98 s |
Table 1. Timing and iteration comparison for different methods on the bunny example. $t_r$, $n_r$, and $tp_r$ represent solve time, iteration count, and time per iteration, respectively. DCDM-\{64,128\} calls a model whose parameters are trained over a \{$64^3$, $128^3$\} grid. All computations are done using only CPUs; model inference does not use GPUs. All implementations are in Python. See Appendix A.3 for convergence plots.

tested cases where $d = 2$, at resolutions $256^2$ and $512^2$. For this setup, running the smoke plume test (the 2D analogue of Figure 3 Left) at $256^2$, DCDM and CG take 2.18 s/64 iter and 0.59 s/536 iter, respectively. Again, since the system at this resolution is much smaller than those reported in Table 1, we expect CG to be more efficient. However, at $512^2$, the system is big enough that we outperform CG in time as well: 3.87 s/126 iter for DCDM vs. 5.60 s/1146 iter for CG. Third, we performed comparisons between DCDM and a more recent work, Sappl et al. (2019). Since Sappl et al. (2019) requires many asymptotically expensive computations (see Section 2), we expected a significant performance advantage for DCDM. For the $256^2$ smoke plume example, using matrices from frame 10 of the simulation, Sappl et al. (2019) requires 1024 iterations for convergence (15.41 s), vs. only 50 for DCDM (1.50 s).

# 7. Conclusions

We presented DCDM, which incorporates CNNs into a CG-style algorithm that yields efficient, convergent behavior for solving linear systems. Our method effectively acts as a preconditioner, albeit a nonlinear one. Our method is evaluated on linear systems with over 16 million degrees of freedom and converges to a desired tolerance in merely tens of iterations. Furthermore, despite training the underlying network on a single domain (per resolution) without obstacles, our network is able to successfully predict search directions that enable efficient linear solves on domains with complex and dynamic geometries.
Moreover, the training data for our network does not require running fluid simulations or solving linear systems ahead of time; our Rayleigh-Ritz vector approach enables us to quickly generate very large training datasets, unlike other works. We release our code, data, and pre-trained models so users can immediately apply DCDM to Poisson systems without further dataset generation or training, especially given the feasibility of using pretrained weights for inference at different grid resolutions: https://github.com/ayano721/2023_DCDM.

Our network was designed for and trained exclusively using data related to the discrete Poisson matrix, which likely limits the generalizability of our present model. However, we believe our method is readily applicable to other classes of PDEs (or general problems with graph structure) that give rise to large, sparse, symmetric linear systems. To that end, we briefly applied DCDM to matrices arising from discretized heat equations (a similar class of large, sparse matrices, hence expected to work well with DCDM). We found that we can achieve convergence (reducing the initial residual by four orders of magnitude) using DCDM trained only on Poisson matrices, even though our test heat equation used Dirichlet boundary conditions, unlike the Neumann boundary conditions used with the Poisson equation systems we solved before. For a heat equation matrix at $N = 64$, DCDM converges in only 14 iterations. Future work will extend this analysis. We note that our method is unlikely to work well for matrices for which matrix-vector products $A\pmb{x}$ are expensive to evaluate (e.g., dense matrices), since training relies on efficient $A\pmb{x}$ evaluations. An interesting question is how well our method and current models would apply to discrete Poisson matrices arising from non-uniform grids, e.g., quadtrees or octrees (Losasso et al., 2004).

# Acknowledgements

This work is supported by DARPA through contract number FA8750-20-C-0537.
Any opinions, findings, conclusions, or recommendations expressed are those of the authors and do not necessarily reflect the views of the sponsor. This research was supported by the Exascale Computing Project (17-SC-20-SC), a collaborative effort of the U.S. Department of Energy Office of Science and the National Nuclear Security Administration. This research was also supported under DoE ORNL contract 4000171342. We would like to thank Professor Shigeo Morishima for his advice. + +# References + +Abadi, M., Agarwal, A., Barham, P., Brevdo, E., Chen, Z., Citro, C., Corrado, G. S., Davis, A., Dean, J., Devin, M., Ghemawat, S., Goodfellow, I., Harp, A., Irving, G., Isard, M., Jia, Y., Jozefowicz, R., Kaiser, L., Kudlur, M., Levenberg, J., Mané, D., Monga, R., Moore, S., Murray, D., Olah, C., Schuster, M., Shlens, J., Steiner, B., Sutskever, I., Talwar, K., Tucker, P., Vanhoucke, V., Vasudevan, V., Viégas, F., Vinyals, O., Warden, P., Wattenberg, M., Wicke, M., Yu, Y., and Zheng, X. TensorFlow: Large-scale machine learning on heterogeneous systems, 2015. URL https://www.tensorflow.org/. Software available from tensorflow.org. +Ackmann, J., Duben, P. D., Palmer, T. N., and Smolarkiewicz, P. K. Machine-learned preconditioners for linear solvers in geophysical fluid flows. arXiv preprint arXiv:2010.02866, 2020. +Andrychowicz, M., Denil, M., Gomez, S., Hoffman, M. W., Pfau, D., Schaul, T., Shillingford, B., and de Freitas, N. Learning to learn by gradient descent by gradient descent. Advances in Neural Information Processing Systems, pp. 29, 2016. +Brandt, A. Multi-level adaptive solutions to boundary-value problems. Math Comp, 31(138):333-390, 1977. +Bridson, R. Fluid simulation for computer graphics. Taylor & Francis, 2008. +Chen, J., Kala, V., Marquez-Razon, A., Gueidon, E., Hyde, D. A. B., and Teran, J. A momentum-conserving implicit material point method for surface tension with contact angles and spatial gradients. ACM Trans. Graph., 40(4), jul 2021. ISSN 0730-0301. 
doi: 10.1145/3450626.3459874. URL https://doi.org/10.1145/3450626.3459874.
Chen, T., Chen, X., Chen, W., Heaton, H., Liu, J., Wang, Z., and Yin, W. Learning to optimize: a primer and a benchmark. Journal of Machine Learning Research 23, pp. 8562-8620, 2022.
Chorin, A. A numerical method for solving incompressible viscous flow problems. J Comp Phys, 2(1):12-26, 1967.
Fedkiw, R., Stam, J., and Jensen, H. Visual simulation of smoke. In SIGGRAPH, pp. 15-22. ACM, 2001.
fluidnetsc22. fluidnetsc22/fluidnet_sc22: v0.0.1, April 2022. doi: 10.5281/zenodo.6424901. URL https://doi.org/10.5281/zenodo.6424901.

Gagniere, S., Hyde, D., Marquez-Razon, A., Jiang, C., Ge, Z., Han, X., Guo, Q., and Teran, J. A hybrid Lagrangian/Eulerian collocated velocity advection and projection method for fluid simulation. Computer Graphics Forum, 39(8):1-14, 2020. doi: https://doi.org/10.1111/cgf.14096.
Golub, G. and Loan, C. V. Matrix computations, volume 3. JHU Press, 2012.
Götz, M. and Anzt, H. Machine learning-aided numerical linear algebra: Convolutional neural networks for the efficient preconditioner generation. In 2018 IEEE/ACM 9th Workshop on Latest Advances in Scalable Algorithms for Large-Scale Systems (scalA), pp. 49-56. IEEE, 2018.
Grebhahn, A., Siegmund, N., Kostler, H., and Apel, S. Performance prediction of multigrid-solver configurations. In Software for Exascale Computing-SPPEXA 2013-2015, pp. 69-88. Springer, 2016.
Greenfeld, D., Galun, M., Basri, R., Yavneh, I., and Kimmel, R. Learning to optimize multigrid PDE solvers. In Int Conf Mach Learn, pp. 2415-2423. PMLR, 2019.
Harlow, F. and Welch, E. Numerical calculation of time dependent viscous flow of fluid with a free surface. Phys Fluid, 8(12):2182-2189, 1965.
He, K., Zhang, X., Ren, S., and Sun, J. Deep residual learning for image recognition. In 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 770-778, 2016. doi: 10.1109/CVPR.2016.90.
+Hestenes, M. R. and Stiefel, E. Methods of conjugate gradients for solving linear systems. Journal of research of the National Bureau of Standards, 49(6):409, 1952. +Hsieh, J.-T., Zhao, S., Eismann, S., Mirabella, L., and Ermon, S. Learning neural PDE solvers with convergence guarantees, 2019. URL https://arxiv.org/abs/1906.01200. +Ichimura, T., Fujita, K., Hori, M., Maddegedara, L., Ueda, N., and Kikuchi, Y. A fast scalable iterative implicit solver with Green's function-based neural networks. In 2020 IEEE/ACM 11th Workshop on Latest Advances in Scalable Algorithms for Large-Scale Systems (ScalA), pp. 61-68, 2020. doi: 10.1109/ScalA51936.2020.00013. +Kershaw, D. The incomplete Cholesky conjugate gradient method for the iterative solution of systems of linear equations. J Comp Phys, 26(1):43-65, 1978. +Kingma, D. P. and Ba, J. Adam: A method for stochastic optimization. CoRR, abs/1412.6980, 2015. + +Lanczos, C. An iteration method for the solution of the eigenvalue problem of linear differential and integral operators. 1950. +Li, K. and Malik, J. Learning to optimize. arXiv preprint, 2016. +Liao, I., Dangovski, R. R., Foerster, J. N., and Soljacic, M. Learning to optimize quasi-newton methods. arXiv preprint, 2022. +Losasso, F., Gibou, F., and Fedkiw, R. Simulating water and smoke with an octree data structure. ACM Trans. Graph., 23(3):457-462, 2004. +Losasso, F., Fedkiw, R., and Osher, S. Spatially adaptive techniques for level set methods and incompressible flow. Computers & Fluids, 35(10):995-1010, 2006. +Luna, K., Klymko, K., and Blaschke, J. P. Accelerating GMRES with deep learning in real-time, 2021. URL https://arxiv.org/abs/2103.10975. +Luz, I., Galun, M., Maron, H., Basri, R., and Yavneh, I. Learning algebraic multigrid using graph neural networks. In Int Conf Mach Learn, pp. 6489-6499. PMLR, 2020. +McAdams, A., Sifakis, E., and Teran, J. A parallel multigrid poisson solver for fluids simulation on large grids. 
In Proc 2010 ACM SIGGRAPH/Eurograph Symp Comp Anim, pp. 65-74. Eurographics Association, 2010. +Nussbaumer, H. The fast Fourier transform. In Fast Fourier Transform and Convolution Algorithms, pp. 80-111. Springer, 1981. +Paige, C. C. The computation of eigenvalues and eigenvectors of very large sparse matrices. PhD thesis, University of London, 1971. +Paige, C. C. and Saunders, M. A. Solution of sparse indefinite systems of linear equations. SIAM journal on numerical analysis, 12(4):617-629, 1975. +Paluszny, A. and Zimmerman, R. W. Numerical simulation of multiple 3d fracture propagation using arbitrary meshes. Computer Methods in Applied Mechanics and Engineering, 200(9):953-966, 2011. ISSN 0045-7825. doi: https://doi.org/10.1016/j.cma.2010.11.013. URL https://www.sciencedirect.com/science/article/pii/S0045782510003373. +Panuelos, J., Goldade, R., Grinspun, E., Levin, D., and Batty, C. PolyStokes: A polynomial model reduction method for viscous fluid simulation. ACM Trans Graph (TOG), 42(4), 2023. + +Ruelmann, H., Geveler, M., and Turek, S. On the prospects of using machine learning for the numerical simulation of PDEs: Training neural networks to assemble approximate inverses, 2018. URL http://dx.doi.org/10.17877/DE290R-18778. +Saad, Y. Iterative Methods for Sparse Linear Systems. Society for Industrial and Applied Mathematics, USA, 2nd edition, 2003. ISBN 0898715342. +Saad, Y. and Schultz, M. GMRES: A generalized minimal residual algorithm for solving nonsymmetric linear systems. SIAM J Sci Stat Comp, 7(3):856-869, 1986. +Saad, Y., Yeung, M., Erhel, J., and Guyomarc'h, F. A deflated version of the conjugate gradient algorithm. SIAM Journal on Scientific Computing, 21:1909-1926, 2000. +Sappl, J., Seiler, L., Harders, M., and Rauch, W. Deep learning of preconditioners for conjugate gradient solvers in urban water related problems, 2019. URL https://arxiv.org/abs/1906.06925. +Shen, J., Chen, X., Heaton, H., Chen, T., Liu, J., Yin, W., and Wang, Z. 
Learning a minimax optimizer: A pilot study. International Conference on Learning Representations, 2019.

Stanaityte, R. ILU and Machine Learning Based Preconditioning For The Discretized Incompressible Navier-Stokes Equations. PhD thesis, University of Houston, 2020.

Stiefel, E. Über einige Methoden der Relaxationsrechnung. Zeitschrift für angewandte Mathematik und Physik ZAMP, 3(1):1-33, 1952.

Tompson, J., Schlachter, K., Sprechmann, P., and Perlin, K. Accelerating Eulerian fluid simulation with convolutional networks. In Precup, D. and Teh, Y. (eds.), Proc 34th Int Conf Mach Learn, volume 70 of Proc Mach Learn Res, pp. 3424-3433. PMLR, 06-11 Aug 2017.

Trefethen, L. and Bau, D. Numerical Linear Algebra, volume 50. SIAM, 1997.

Um, K., Brand, R., Fei, Y., Holl, P., and Thuerey, N. Solver-in-the-loop: Learning from differentiable physics to interact with iterative PDE-solvers. In Larochelle, H., Ranzato, M., Hadsell, R., Balcan, M., and Lin, H. (eds.), Advances in Neural Information Processing Systems, volume 33, pp. 6111-6122. Curran Associates, Inc., 2020. URL https://proceedings.neurips.cc/paper/2020/file/43e4e6a6f341e00671e123714de019a8-Paper.pdf.

Virtanen, P., Gommers, R., Oliphant, T. E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., van der Walt, S. J., Brett, M., Wilson, J., Millman, K. J., Mayorov, N., Nelson, A. R. J., Jones, E., Kern, R., Larson, E., Carey, C. J., Polat, I., Feng, Y., Moore, E. W., VanderPlas, J., Laxalde, D., Perktold, J., Cimrman, R., Henriksen, I., Quintero, E. A., Harris, C. R., Archibald, A. M., Ribeiro, A. H., Pedregosa, F., van Mulbregt, P., and SciPy 1.0 Contributors. SciPy 1.0: Fundamental Algorithms for Scientific Computing in Python. Nature Methods, 17:261-272, 2020. doi: 10.1038/s41592-019-0686-2.

Yang, C., Yang, X., and Xiao, X. Data-driven projection method in fluid simulation. Comp Anim Virt Worlds, 27(3-4):415-424, 2016.

# A. Appendix

# A.1.
Conjugate Gradients Method

In this appendix, we provide a review of the conjugate gradients (CG) method, which is the inspiration for DCDM as presented in this paper. The conjugate gradients method is a special case of the line search method, where the search directions are $A$ -orthogonal to each other. It can also be viewed as a modification of gradient descent (GD) where the search direction is chosen as the component of the residual (equivalently, the negative gradient of $\frac{1}{2}\|\boldsymbol{e}\|_{\boldsymbol{A}}^{2}$ , the squared $\boldsymbol{A}$ -norm of the error) that is $A$ -orthogonal to all previous search directions:

$$
\boldsymbol {d} _ {k} = \boldsymbol {r} _ {k - 1} - \sum_ {i = 1} ^ {k - 1} h _ {i k} \boldsymbol {d} _ {i}, \quad h _ {i k} = \frac {\boldsymbol {d} _ {i} ^ {T} \boldsymbol {A} \boldsymbol {r} _ {k - 1}}{\boldsymbol {d} _ {i} ^ {T} \boldsymbol {A} \boldsymbol {d} _ {i}}.
$$

With this choice, the search directions form a basis for $\mathbb{R}^n$ so that the initial error can be written as $e_0 = x - x_0 = \sum_{i=1}^{n} e_i d_i$ , where $e_i$ are the components of the initial error in this basis. Furthermore, since the search directions are $\pmb{A}$ -orthogonal, the optimal step sizes $\alpha_k$ at each iteration satisfy

$$
\begin{array}{l} \alpha_ {k} = \frac {\boldsymbol {r} _ {k - 1} ^ {T} \boldsymbol {d} _ {k}}{\boldsymbol {d} _ {k} ^ {T} \boldsymbol {A} \boldsymbol {d} _ {k}} = \frac {\boldsymbol {d} _ {k} ^ {T} \boldsymbol {A} \boldsymbol {e} _ {k - 1}}{\boldsymbol {d} _ {k} ^ {T} \boldsymbol {A} \boldsymbol {d} _ {k}} \\ = \frac {\boldsymbol {d} _ {k} ^ {T} \boldsymbol {A} \left(\sum_ {i = 1} ^ {n} e _ {i} \boldsymbol {d} _ {i} - \sum_ {j = 1} ^ {k - 1} \alpha_ {j} \boldsymbol {d} _ {j}\right)}{\boldsymbol {d} _ {k} ^ {T} \boldsymbol {A} \boldsymbol {d} _ {k}} = e _ {k}. \\ \end{array}
$$

That is, the optimal step sizes are chosen to precisely eliminate the components of the error on the basis defined by the search directions.
Thus, convergence is determined by the (at most $n$ ) non-zero components $e_i$ in the initial error. Although rounding errors prevent this from happening exactly in practice, this property greatly reduces the number of required iterations (Golub & Loan, 2012). + +Furthermore, $h_{ik} = 0$ for $i < k - 1$ , and thus iteration can be performed without the need to store all previous search directions $\pmb{d}_i$ and without the need for computing all previous $h_{ik}$ . To see this, it is sufficient to show $\pmb{d}_i^T\pmb{A}\pmb{r}_{k - 1} = 0$ . + +Lemma In the CG method, residuals are orthogonal, i.e., $\boldsymbol{r}_k^T\boldsymbol{r}_j = 0$ for all $j < k$ . + +# Proof + +$$ +\begin{array}{l} \boldsymbol {r} _ {k} ^ {T} \boldsymbol {r} _ {j} = \left(\boldsymbol {r} _ {k - 1} - \alpha_ {k} \boldsymbol {A} \boldsymbol {d} _ {k}\right) ^ {T} \boldsymbol {r} _ {j} \\ = \boldsymbol {r} _ {k - 1} ^ {T} \boldsymbol {r} _ {j} - \alpha_ {k} \boldsymbol {d} _ {k} ^ {T} \boldsymbol {A} \boldsymbol {r} _ {j} \\ = \boldsymbol {r} _ {k - 1} ^ {T} \boldsymbol {r} _ {j} - \alpha_ {k} \boldsymbol {d} _ {k} ^ {T} \boldsymbol {A} \left(\boldsymbol {d} _ {j + 1} + \sum_ {i = 1} ^ {j} h _ {i (j + 1)} \boldsymbol {d} _ {i}\right) \\ \end{array} +$$ + +For $j < k - 1$ , $\boldsymbol{r}_{k - 1}^T\boldsymbol{r}_j = 0$ follows from induction. $\boldsymbol{d}_k^T\boldsymbol {A}(\boldsymbol{d}_{j + 1} + \sum_{i = 1}^j h_{i(j + 1)}\boldsymbol {d}_i) = \boldsymbol{d}_k^T\boldsymbol {A}\boldsymbol{d}_{j + 1} + \sum_{i = 1}^j h_{i(j + 1)}\boldsymbol {d}_k^T\boldsymbol {A}\boldsymbol {d}_i = 0$ + +because $d_{i}$ are $\pmb{A}$ -orthogonal by their definition. 
For $j = k - 1$ ,

$$
\begin{array}{l} \boldsymbol {r} _ {k} ^ {T} \boldsymbol {r} _ {k - 1} = \boldsymbol {r} _ {k - 1} ^ {T} \boldsymbol {r} _ {k - 1} - \alpha_ {k} \boldsymbol {d} _ {k} ^ {T} \boldsymbol {A} \boldsymbol {r} _ {k - 1} \\ = \boldsymbol {r} _ {k - 1} ^ {T} \boldsymbol {r} _ {k - 1} - \frac {\boldsymbol {r} _ {k - 1} ^ {T} \boldsymbol {d} _ {k}}{\boldsymbol {d} _ {k} ^ {T} \boldsymbol {A} \boldsymbol {d} _ {k}} \boldsymbol {d} _ {k} ^ {T} \boldsymbol {A} \boldsymbol {r} _ {k - 1} \\ = \boldsymbol {r} _ {k - 1} ^ {T} \boldsymbol {r} _ {k - 1} - \frac {\boldsymbol {r} _ {k - 1} ^ {T} \boldsymbol {d} _ {k}}{\boldsymbol {d} _ {k} ^ {T} \boldsymbol {A} \boldsymbol {d} _ {k}} \boldsymbol {d} _ {k} ^ {T} \boldsymbol {A} \left(\boldsymbol {d} _ {k} + \sum_ {i = 1} ^ {k - 1} h _ {i k} \boldsymbol {d} _ {i}\right) \\ = \boldsymbol {r} _ {k - 1} ^ {T} \boldsymbol {r} _ {k - 1} - \boldsymbol {r} _ {k - 1} ^ {T} \boldsymbol {d} _ {k} \quad (\text{by } \boldsymbol {A}\text{-orthogonality of } \boldsymbol {d} _ {k}) \\ = \boldsymbol {r} _ {k - 1} ^ {T} (\boldsymbol {r} _ {k - 1} - \boldsymbol {d} _ {k}) \\ = \boldsymbol {r} _ {k - 1} ^ {T} \left(\sum_ {i = 1} ^ {k - 1} h _ {i k} \boldsymbol {d} _ {i}\right) \\ = \sum_ {i = 1} ^ {k - 1} h _ {i k} \boldsymbol {r} _ {k - 1} ^ {T} \boldsymbol {d} _ {i}. \\ \end{array}
$$

So proving $\boldsymbol{r}_{k-1}^T \boldsymbol{d}_i = 0$ for $i < k$ would finish the proof. However, by the definition of $\boldsymbol{d}_i = \boldsymbol{r}_{i-1} - \sum_{j=1}^{i-1} h_{ji} \boldsymbol{d}_j$ , induction proves $\boldsymbol{d}_i \in \operatorname{span}(\boldsymbol{r}_0, \boldsymbol{r}_1, \dots, \boldsymbol{r}_{i-1})$ . Hence, $\boldsymbol{r}_{k-1}^T \boldsymbol{d}_i = 0$ for all $i \leq k-1$ , which proves the lemma.

Claim In the CG method, search directions are $\mathbf{A}$ -orthogonal to all previous residuals, i.e., $\pmb{d}_i^T\pmb{A}\pmb{r}_{k - 1} = 0$ for all $i < k - 1$ .
Proof Since $\pmb{r}_i = \pmb{r}_{i-1} - \alpha_i\pmb{A}\pmb{d}_i$ , we have $\alpha_i\pmb{d}_i^T\pmb{A}\pmb{r}_{k-1} = (\pmb{r}_{i-1} - \pmb{r}_i)^T\pmb{r}_{k-1} = 0$ for all $i < k-1$ by the lemma above; since $\alpha_i \neq 0$ , it follows that $\pmb{d}_i^T\pmb{A}\pmb{r}_{k-1} = 0$ .

This proves the sparsity of the $h_{ik}$ . As discussed in the main body of the paper, this "memoryless" property of CG is inherited by DCDM and enables the efficiency of our method.

# A.2. Choice of $\alpha$

Line search is an iterative method to find a local minimum of an objective function $h: \mathbb{R}^n \to \mathbb{R}$ . In the context of variationally solving $\mathbf{A}\mathbf{x} = \mathbf{b}$ , $h(\mathbf{x}) = \frac{1}{2}\mathbf{x}^T\mathbf{A}\mathbf{x} - \mathbf{x}^T\mathbf{b}$ , and the $k^{\mathrm{th}}$ iterate is computed by

$$
\mathbf {x} _ {k} = \mathbf {x} _ {k - 1} + \alpha_ {k} \mathbf {d} _ {k}.
$$

One desires a step size $\alpha_{k}$ that yields $h(\pmb{x}_k) < h(\pmb{x}_{k-1})$ . More specifically, the optimal choice is

$$
\alpha_ {k} = \underset {\alpha} {\arg \min } h (\mathbf {x} _ {k - 1} + \alpha \mathbf {d} _ {k}) = \frac {\boldsymbol {r} _ {k - 1} ^ {T} \boldsymbol {d} _ {k}}{\boldsymbol {d} _ {k} ^ {T} \boldsymbol {A} \boldsymbol {d} _ {k}},
$$

where $\pmb{r}_{k - 1} = \pmb {b} - \pmb{A}\pmb{x}_{k - 1}$ is the $(k - 1)^{\mathrm{th}}$ residual.
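To make the line-search view of CG concrete, here is a minimal NumPy sketch (an illustration, not the paper's implementation): each step takes the optimal step size $\alpha_k$ from above and builds the next direction from the residual with a single $A$-orthogonalization term, since $h_{ik} = 0$ for $i < k - 1$. The assertions check, on a small random SPD system, the orthogonality properties derived in this appendix.

```python
import numpy as np

def cg_line_search(A, b, x0, iters):
    """Conjugate gradients written as a line-search method.

    Each new search direction is the component of the residual that is
    A-orthogonal to the previous direction; since h_ik = 0 for i < k-1,
    only the most recent direction needs to be stored.
    """
    x = x0.copy()
    r = b - A @ x
    d = r.copy()                          # d_1 = r_0
    directions, residuals = [], [r.copy()]
    for _ in range(iters):
        alpha = (r @ d) / (d @ (A @ d))   # optimal step size (Appendix A.2)
        x = x + alpha * d
        r = r - alpha * (A @ d)
        h = (d @ (A @ r)) / (d @ (A @ d))  # only h_{k-1,k} is nonzero
        d = r - h * d
        directions.append(d.copy())
        residuals.append(r.copy())
    return x, directions, residuals

rng = np.random.default_rng(0)
M = rng.standard_normal((6, 6))
A = M @ M.T + 6 * np.eye(6)               # symmetric positive definite
b = rng.standard_normal(6)
x, ds, rs = cg_line_search(A, b, np.zeros(6), 6)

assert np.allclose(x, np.linalg.solve(A, b))  # converges in at most n steps
assert abs(ds[0] @ (A @ ds[1])) < 1e-8        # A-orthogonal search directions
assert abs(rs[0] @ rs[1]) < 1e-8              # orthogonal residuals (lemma)
```

Because only the previous direction is needed, the iteration runs with a constant number of stored vectors, which is exactly the "memoryless" property DCDM inherits.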
To see that this choice of $\alpha_{k}$ is indeed the minimizer, we can define the objective function as $g(\alpha)$ and write

$$
\begin{array}{l} g (\alpha) = h \left(\boldsymbol {x} _ {k - 1} + \alpha \boldsymbol {d} _ {k}\right) \\ = \frac {1}{2} \left(\boldsymbol {x} _ {k - 1} + \alpha \boldsymbol {d} _ {k}\right) ^ {T} \boldsymbol {A} \left(\boldsymbol {x} _ {k - 1} + \alpha \boldsymbol {d} _ {k}\right) - \boldsymbol {b} ^ {T} \left(\boldsymbol {x} _ {k - 1} + \alpha \boldsymbol {d} _ {k}\right) \\ = \frac {1}{2} \alpha^ {2} \boldsymbol {d} _ {k} ^ {T} \boldsymbol {A} \boldsymbol {d} _ {k} + \alpha (\boldsymbol {d} _ {k} ^ {T} \boldsymbol {A} \boldsymbol {x} _ {k - 1} - \boldsymbol {d} _ {k} ^ {T} \boldsymbol {b}) + \left(\frac {1}{2} \boldsymbol {x} _ {k - 1} ^ {T} \boldsymbol {A} \boldsymbol {x} _ {k - 1} - \boldsymbol {x} _ {k - 1} ^ {T} \boldsymbol {b}\right) \\ = \frac {1}{2} \alpha^ {2} \boldsymbol {d} _ {k} ^ {T} \boldsymbol {A} \boldsymbol {d} _ {k} - \alpha \boldsymbol {d} _ {k} ^ {T} \underbrace {\boldsymbol {r} _ {k - 1}} _ {\boldsymbol {b} - \boldsymbol {A} \boldsymbol {x} _ {k - 1}} + (\text{constant terms}). \\ \end{array}
$$

Taking the derivative with respect to $\alpha$ and setting it to zero, we have $g'(\alpha) = \alpha \boldsymbol{d}_k^T \boldsymbol{A} \boldsymbol{d}_k - \boldsymbol{d}_k^T \boldsymbol{r}_{k-1} = 0$ , yielding $\alpha = \frac{\boldsymbol{r}_{k-1}^T \boldsymbol{d}_k}{\boldsymbol{d}_k^T \boldsymbol{A} \boldsymbol{d}_k}$ as the minimizer of $g(\alpha)$ .

# A.3. Additional Convergence Results

We include additional convergence results, similar to those shown in Figure 4b, in Figure 5. Specifically, these plots show the convergence of all the methods reported in Table 1 at each of the resolutions reported there. The figure visually demonstrates the significant reduction in iteration count achieved by DCDM.

![](images/6741de86362aba1bc2ee148d573f1ade060fe59f01cd3697317def47612a0915.jpg)
(a) $N = 64$

![](images/80a815f36a32f1d5f9e6534b1bf30a61deedc23206f2318a686d791c6a4a6856.jpg)
(b) $N = 128$
Figure 5.
Convergence of different methods on the 3D bunny example for $N = 64$ , 128, 256; summary results, as well as timings, are reported in Table 1. DCDM-{64,128} calls a model whose parameters are trained over a $\{64^3$ , $128^3\}$ grid.

![](images/0793c011e4bb294772e0a779bf1d6daeeafe4be461f1a8af21c2d0efea8533aa.jpg)
(c) $N = 256$

We remark on ICPCG since it is a popular preconditioner and the closest in performance to DCDM. When using ICPCG with matrices that arise in a domain with moving internal boundaries (such as our bunny examples), the approximate factorization of $A$ must be recomputed. Recomputation is also required in the approach of Tompson et al. (2017) in examples like these. Moreover, as Figure 4d shows, DCDM does not require full $A$ -orthogonality. Hence the algorithm stores only two previous vectors, just like CG, in contrast to the much more significant memory requirements of ICPCG. For example, the $L$ and $D$ matrices for $128^3$ take about 18.7MB in scipy.sparse format, while our network can be stored in less than 500KB.

# A.4. Ablation Study and Runtime Analysis

| Method | DCDM | Model 1 | Model 2 | Model 3 | Model 4 | U-Net |
| --- | --- | --- | --- | --- | --- | --- |
| Number of Parameters | 97,457 | 97,457 | 97,457 | 97,457 | 24,537 | 3,527,505 |
+ +Table 2. Number of parameters for each network architecture considered in the ablation study. + +Here, we provide results of a small ablation study on network architecture in order to justify some of the architectural choices we made in constructing the DCDM network. We considered a few different models (Figure 8a to Figure 8e), several of which are modifications of the model we ultimately used to generate our results (Figure 8a). The models we considered include one without ResNet connections (Figure 8b), one with simple downsampling and upsampling (a U-Net-like structure) (Figure 8c), a minimal CNN (Figure 8d), and a model with different filter sizes of the blocks (Figure 8e). We compared how these models perform on the same bunny example considered in the main part of the paper (at resolution $64^{3}$ ). Figure 6 shows that the architecture we ultimately selected for DCDM yields the best results. + +Each model's parameter count is listed in Table 2. Compared to a basic CNN or U-Net architecture (like the one used in Tompson et al. (2017)), our DCDM network is actually quite light. For example, the U-Net architecture in Tompson et al. (2017) uses 3,527,505 parameters (at $N = 64$ in 3D), while our network (at the same resolution) requires only 97,457 parameters (a 36x reduction). In addition, one advantage of our method is that DCDM only needs to be trained once (and data only generated once) per problem class (and possibly size). So if a user desired to solve Poisson systems (which are quite common in computer graphics and engineering), they could use our pre-trained models off the shelf; though we readily concede that new classes of matrices or new resolutions could require new data generation or retraining. + +Dataset generation is a key step in using the DCDM model we selected. We found that we needed to include orthogonalization to previous vectors in the Lanczos problem in practice (a well-known limitation of the method). 
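The Lanczos modification just described can be sketched as follows (a generic NumPy illustration, not the paper's data-generation code): the inner loop re-projects each new vector against all previous Lanczos vectors, which is precisely the extra factor of $m$ in cost relative to the classical three-term recurrence.

```python
import numpy as np

def lanczos_reorth(A, q0, m):
    """Lanczos iteration with full reorthogonalization.

    Without the reorthogonalization loop, rounding errors cause the
    Lanczos vectors to lose orthogonality (the classical limitation);
    re-projecting each new vector against all previous ones restores it,
    at the price of an extra factor of m in the per-step cost.
    """
    n = A.shape[0]
    Q = np.zeros((m, n))
    Q[0] = q0 / np.linalg.norm(q0)
    beta = 0.0
    for k in range(1, m):
        w = A @ Q[k - 1]
        alpha = Q[k - 1] @ w
        w = w - alpha * Q[k - 1]
        if k > 1:
            w = w - beta * Q[k - 2]
        # full reorthogonalization against all previous Lanczos vectors
        for j in range(k):
            w = w - (Q[j] @ w) * Q[j]
        beta = np.linalg.norm(w)
        if beta == 0.0:                 # invariant subspace found; stop early
            return Q[:k]
        Q[k] = w / beta
    return Q

rng = np.random.default_rng(1)
M = rng.standard_normal((50, 50))
A = M + M.T                             # symmetric test matrix
Q = lanczos_reorth(A, rng.standard_normal(50), 20)
# With full reorthogonalization the vectors stay orthonormal.
assert np.allclose(Q @ Q.T, np.eye(len(Q)), atol=1e-10)
```

With $N$ unknowns, the reorthogonalization makes generating $m$ vectors cost $O(Nm^2)$ rather than $O(Nm)$; in the paper's setting $N = n^3$ for resolution $n$, matching the $O(n^3 m^2)$ figure quoted below.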
This causes the creation of a dataset (cf. Section 5.1) to take $O(n^{3}m^{2})$ time, where $m$ is the number of Lanczos vectors to be created and $n$ is the

![](images/db520c7b4eb616ab6a3950dc6df9a629e2b110130e9220988635ce3bb668329a.jpg)
Figure 6. Residual plot for the bunny example at $N = 64$ with each trained model. The dashed line represents a four-orders-of-magnitude reduction in residual, which is the convergence criterion we use throughout our examples.

resolution. Hence increasing the resolution from $64^{3}$ to $128^{3}$ increases the time by a factor of 8, scaling the 5-7 hours of generation time at $64^{3}$ to 2-2.5 days at $128^{3}$ . (However, since we can use low-resolution models on higher-resolution problems, this scaling can be mitigated, cf. Section 5.2.) The orthogonalization step is what raises the complexity of dataset generation to $O(n^{3}m^{2})$ , instead of the $O(n^{3}m)$ complexity of the classical Lanczos process. If a remedy for the numerical problems of classical Lanczos iteration other than orthogonalization could be found, the time to generate the dataset could be reduced drastically (such a task is outside the scope of the present work). We note that storing the training dataset has asymptotic cost $O(kn^{3})$ ; for instance, the dataset of $k = 20,000$ synthetic examples takes 23GB and 159GB of storage for resolutions $64^{3}$ and $128^{3}$ , respectively.

# A.5. Model training

Figure 7 shows the decrease in training and validation losses observed when training the neural network used for DCDM. As mentioned in Section 5.3, for DCDM, we selected the model after epoch 31 for $N = 64$ and epoch 3 for $N = 128$ . The plots show that the training and validation losses continue to decrease after these epochs. However, we found that our epoch selections yielded the best performance on our test data, namely, the examples we showed in Section 6.
Accordingly, we conjecture that our model overfit relatively quickly to both training and validation data, and that perhaps training and validation data were much more similar to each other compared to the test data. We are interested in exploring this further in future work. Of course, philosophically, choosing a model by comparing its performance from different epochs on test data essentially makes that test data part of the validation data, but this is a broader discussion for the learning community. + +![](images/5df182c384a6e1519f0255e14a7f48135b3fce8f2bd4ab2a6570f1d16804e4dc.jpg) +(a) $N = 64$ + +![](images/5d79e57daa299d2e38dc92588cf052840a4846437833aec48005062ab4b22462.jpg) +(b) $N = 128$ +Figure 7. Training and validation loss for the networks used in DCDM at resolutions $N = 64$ and $N = 128$ . + +![](images/89f90d764438714a0d79c38dd5128f370f0612ef1a7b6c709c6c9030542e427f.jpg) +(a) DCDM (our model) + +![](images/0eb4b5dee10ea6407dae6702f508f60596ce76666fadf4a37f07de61e5f04388.jpg) +(b) Model 1 + +![](images/6eae4259d696ef53fce263776517b72f04ffc0c2e63ad445319e359f110d0f9a.jpg) +(c) Model 2 + +![](images/b7f2c92b6513fb40bc50e2f2851f2c9148bcf6068ecba3342f783050e803ce33.jpg) +(d) Model 3 + +![](images/0223459c29be55d5514e1eb04406508a2d744005be32572c6eadbc55c3a48ecc.jpg) +(e) Model 4 +Figure 8. Network architectures considered for our ablation study. 
\ No newline at end of file diff --git a/adeepconjugatedirectionmethodforiterativelysolvinglinearsystems/images.zip b/adeepconjugatedirectionmethodforiterativelysolvinglinearsystems/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..f2b21f696bd55c619440eca9a8881c9f56931b75 --- /dev/null +++ b/adeepconjugatedirectionmethodforiterativelysolvinglinearsystems/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:6009a5f5f36d2404ee43b7621279032fb1a637c42b1ec183c8a88dc35a5c8393 +size 636593 diff --git a/adeepconjugatedirectionmethodforiterativelysolvinglinearsystems/layout.json b/adeepconjugatedirectionmethodforiterativelysolvinglinearsystems/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..89ec44fd93a9ba7225b15b734ae0d7189a2ca4d9 --- /dev/null +++ b/adeepconjugatedirectionmethodforiterativelysolvinglinearsystems/layout.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:a10a830d5ff14ab055882f98561394bc6400d5d7e348a0a3e65ff27ec0015730 +size 718699 diff --git a/adistributionoptimizationframeworkforconfidenceboundsofriskmeasures/1f6a6348-13a1-4693-87b3-859e3ff69687_content_list.json b/adistributionoptimizationframeworkforconfidenceboundsofriskmeasures/1f6a6348-13a1-4693-87b3-859e3ff69687_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..749d72b89d6af5e4f30e16f85b7c31766f44b0f5 --- /dev/null +++ b/adistributionoptimizationframeworkforconfidenceboundsofriskmeasures/1f6a6348-13a1-4693-87b3-859e3ff69687_content_list.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:c7ec49833120ba3830fbfe82978a6aff6459f41c439307ebfbf5a8006b22f238 +size 209364 diff --git a/adistributionoptimizationframeworkforconfidenceboundsofriskmeasures/1f6a6348-13a1-4693-87b3-859e3ff69687_model.json b/adistributionoptimizationframeworkforconfidenceboundsofriskmeasures/1f6a6348-13a1-4693-87b3-859e3ff69687_model.json new file mode 
100644 index 0000000000000000000000000000000000000000..5da7db171cf017f1e3c72607fe593db180cdd313 --- /dev/null +++ b/adistributionoptimizationframeworkforconfidenceboundsofriskmeasures/1f6a6348-13a1-4693-87b3-859e3ff69687_model.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:4f5f7b7d7f5a42ae65e46cbf64880a3c11c8754444c61d3e0fcc3a554abe5fbf +size 239544 diff --git a/adistributionoptimizationframeworkforconfidenceboundsofriskmeasures/1f6a6348-13a1-4693-87b3-859e3ff69687_origin.pdf b/adistributionoptimizationframeworkforconfidenceboundsofriskmeasures/1f6a6348-13a1-4693-87b3-859e3ff69687_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..973164e67fe51562c93add76a62f47bf48f5a375 --- /dev/null +++ b/adistributionoptimizationframeworkforconfidenceboundsofriskmeasures/1f6a6348-13a1-4693-87b3-859e3ff69687_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:2a6e2e51831f631f9564477bf25145f7b9742ba57edcced99a921e9e113d0832 +size 1172289 diff --git a/adistributionoptimizationframeworkforconfidenceboundsofriskmeasures/full.md b/adistributionoptimizationframeworkforconfidenceboundsofriskmeasures/full.md new file mode 100644 index 0000000000000000000000000000000000000000..fe248ffd95d4cf14cb3e176633749c091d788647 --- /dev/null +++ b/adistributionoptimizationframeworkforconfidenceboundsofriskmeasures/full.md @@ -0,0 +1,1301 @@ +# A Distribution Optimization Framework for Confidence Bounds of Risk Measures

Hao Liang$^{1,2}$ Zhi-quan Luo$^{1,2}$

# Abstract

We present a distribution optimization framework that significantly improves confidence bounds for various risk measures compared to previous methods. Our framework encompasses popular risk measures such as the entropic risk measure, conditional value at risk (CVaR), spectral risk measure, distortion risk measure, certainty equivalent, and rank-dependent expected utility, which are well established in risk-sensitive decision-making literature.
To achieve this, we introduce two estimation schemes based on concentration bounds derived from the empirical distribution, specifically using either the Wasserstein distance or the supremum distance. Unlike traditional approaches that add or subtract a confidence radius from the empirical risk measures, our proposed schemes evaluate a specific transformation of the empirical distribution based on the distance. Consequently, our confidence bounds consistently yield tighter results compared to previous methods. We further verify the efficacy of the proposed framework by providing tighter problem-dependent regret bound for the CVaR bandit. + +# 1. Introduction + +The conventional machine learning literature primarily relies on the expected value or mean of a random variable as the performance metric for a given algorithm. However, in certain critical applications such as finance or medical treatment, the decision-maker's focus extends beyond the expected value and emphasizes other characteristics of the distribution. For instance, a risk-averse portfolio manager may place greater importance on tail behavior than expected value. To capture this risk-aware perspective, the decision + +$^{1}$ School of Science and Engineering, The Chinese University of Hong Kong, Shenzhen $^{2}$ Shenzhen Research Institute of Big Data. Correspondence to: Hao Liang . + +Proceedings of the $40^{th}$ International Conference on Machine Learning, Honolulu, Hawaii, USA. PMLR 202, 2023. Copyright 2023 by the author(s). + +maker selects a risk measure (RM) as an alternative to the expected value, effectively representing their specific attitude towards risk. + +In practice, however, it is often infeasible to directly evaluate the risk measure of the unknown underlying distribution. Instead, we must rely on constructing the point estimator based on finite samples. 
Consequently, the confidence interval that quantifies the coverage of the true risk measure becomes crucial in the risk-sensitive setting, as it certifies a trustworthy range for the decision maker. + +In this paper, we aim to derive confidence bounds for several classes of risk measures: the Conditional Value at Risk (CVaR), the spectral risk measure (SRM), the distortion risk measure (DRM), the entropic risk measure (ERM), the certainty equivalent (CE), and the rank-dependent expected utility (RDEU). In safety-critical applications, such as medical treatment, CVaR is widely used, which represents the expected value within a fraction of the worst outcomes. Despite its practical utility, CVaR exhibits limitations in terms of expressing various risk preferences, as it assigns equal weight to all losses beyond a certain threshold. To address this, the SRM offers a notable generalization by incorporating a non-constant weighting function, enhancing flexibility in risk assessment. DRM came from insurance problems and later applied to investment risks. It encompasses CVaR as a special case and has gained attention in various fields. ERM is a well-known risk measure in mathematical finance and Markovian decision processes. Furthermore, CE serves as a generalization of ERM by replacing the exponential utility function with a more flexible function. This adaptation enhances the model's capability to capture a broader range of risk preferences. RDEU contributes to understanding decision-making under uncertainty and has been widely applied in diverse domains such as finance, psychology, and health economics1. + +In the existing literature, the confidence interval is commonly obtained through the concentration inequality, which bounds the deviation between the point estimator and the true risk with high probability. This deviation, referred to as the confidence radius, depends on the sample size and confi + +dence level. 
Conventionally, the upper or lower confidence bound is determined by adding or subtracting the confidence radius from the point estimator. In this paper, we present two innovative approaches that construct confidence bounds for risk measures without relying on concentration inequalities. Our main contribution is summarized as follows. + +(1) We propose a unified framework to obtain refined confidence bounds for several classes of risk measures, specifically for bounded distributions. We recast the problem of determining the confidence bound for risk measures based on finite samples as a constrained optimization problem. In particular, we optimize the value of risk measure over a confidence ball of distributions centered around the empirical distribution function (EDF). Furthermore, we obtain the closed-form solution that can be viewed as a transformation of the EDF. We set the confidence bound as the optimal solution's risk measure value. Notably, the computational overhead increases only marginally. + +(2) We introduce a new baseline approach that leverages the local Lipschitz constant of a risk measure over the confidence ball, which may be of independent interest. In contrast, the previous bounds rely on the global Lipschitz constant over the entire space of bounded distributions. In addition, we suggest a systematic way to compute the local Lipschitz constant and show that our bounds outperform the new baseline approach in certain scenarios. + +(3) As a minor contribution, we propose a meta-algorithm that handles generic risk measures. Specifically, the meta-algorithm specializes in the CVaR-UCB algorithm (Tamkin et al., 2019) for CVaR bandit problems. Interestingly, Tamkin et al. (2019) empirically observes that CVaR-UCB outperforms the global Lipschitz constant-based algorithm U-UCB (Cassel et al., 2018) with an order of magnitude improvement. Still, they only provide a regret bound that matches that of U-UCB. 
We fill this gap by providing an improved regret upper bound, quantifying the magnitude of improvement. + +# 1.1. Related Work + +Confidence bounds of risk measures The concentration of CVaR has been extensively explored in the literature, cf. Brown (2007); Wang & Gao (2010); Thomas & Learned-Miller (2019); Kolla et al. (2019); Prashanth et al. (2020); LA & Bhat (2022). The first three references primarily focus on the bounded distributions, while the remaining references consider unbounded distributions, including the sub-Gaussian, sub-exponential and heavy tail distributions. Pandey et al. (2019); LA & Bhat (2022) provide tail bounds for bounded, sub-Gaussian, or sub-exponential distributions. The concentration bounds for DRM, CE, and RDEU are presented in LA & Bhat (2022). + +Lipschitz constant-based methods Kock et al. (2021); LA & Bhat (2022) relate the estimation error to the Wasserstein distance between the true and empirical distributions and then use concentration bounds for the latter. LA & Bhat (2022) establishes concentration bounds for empirical estimates for a broad class of risk measures, including CVaR, SRM, DRM, RDEU, etc. They derive the concentration bounds via the global Lipschitz constant of the risk measure over the Wasserstein distance for bounded, sub-Gaussian, and sub-exponential distributions. Our bounds only apply to bounded distributions, but we demonstrate that our bounds are tighter than their results whenever they are valid. The computation of the global Lipschitz constant can be challenging, particularly for highly nonlinear risk measures. In many cases, one may only obtain its upper bound as a surrogate, which further loosens the resulting bounds. In contrast, our framework does not require knowledge of the Lipschitz constant. Kock et al. (2021) obtain the concentration bounds for general functionals using the supremum distance instead of the Wasserstein distance. 
While their work primarily focuses on inequality, poverty, and welfare measures, their methodology can be extended to encompass the risk measures mentioned above. The resulting bounds apply to bounded distributions and are looser than ours. In addition, Liang & Luo (2023) focuses on risk-sensitive reinforcement learning with dynamic risk measures and leverages the Lipschitz property of risk measures to derive regret upper bounds. By quantifying the Lipschitz constants, Liang & Luo (2023) provide regret bounds that depend on these constants. + +Off-policy risk evaluation Chandak et al. (2021); Huang et al. (2021) study the off-policy evaluation of functionals of reward or return distribution in bandit or RL setting. Chandak et al. (2021) formulates the problem of interval estimation for various functionals as a constrained optimization problem over a confidence band, which bears similarity to Formulation 6 in our paper. Meanwhile, our work differs from Chandak et al. (2021) in two aspects. First, Chandak et al. (2021) focuses on various functionals and derives the optimal solution for different functionals by a case-by-case geometric analysis. In particular, their method applies to the mean, variance, quantiles, inter-quantile range, CVaR, and entropy. In contrast, our framework focuses on general risk measures, including but not limited to ERM, CVaR, SRM, DRM, CE, and RDEU. We leverage the intrinsic property of risk measures, namely monotonicity, to derive closed-form optimal solutions that are common across different risk measures. In particular, our derivation for confidence bounds of CVaR differs from that in Chandak et al. (2021). Notably, our work is complementary to Chandak et al. (2021) in terms of the applicability of functionals. Our framework can handle arbitrary risk measures using a common optimal solution, while Chandak et al. 
(2021) provides confidence

bounds for CVaR and other functionals that are not risk measures, where the optimal solution depends on the specific functional. Huang et al. (2021) deal with the off-policy evaluation of Lipschitz risk measures based on their global Lipschitz constant with respect to the supremum distance.

The rest of the paper is organized as follows. We introduce some basic concepts and notation in Section 2. We present our new framework under the Wasserstein distance and the supremum distance in Section 3, and derive the closed-form solution in Section 4. We then provide a new baseline method, which bridges our framework and the previous global Lipschitz constant-based method, in Section 5. We validate the proposed framework by applying it to risk-sensitive bandit problems in Section 6, and provide numerical experiments in Section 7. Finally, we provide concluding remarks in Section 8.

# 2. Preliminaries

We introduce some notation here. Let $a < b$ be two real numbers. We denote by $\mathcal{D}([a, b])$ and $\mathcal{D}$ the space of all cumulative distribution functions (CDFs) supported on $[a, b]$ and the space of all CDFs on the reals, respectively. For a CDF $F \in \mathcal{D}$ , let $X_1, X_2, \dots, X_n$ be $n$ i.i.d. samples from $F$ . We denote by $F_n$ the empirical distribution function corresponding to these samples:

$$
F _ {n} (\cdot) \triangleq \frac {1}{n} \sum_ {i = 1} ^ {n} \mathbb {I} \{X _ {i} \leq \cdot \} = \frac {1}{n} \sum_ {i = 1} ^ {n} \delta_ {X _ {i}},
$$

where $\mathbb{I}$ is the indicator function and $\delta$ is the Dirac measure. We denote by $F^{-1}:(0,1]\to \mathbb{R}$ the inverse distribution function (IDF) of $F$ , i.e., the quantile function $F^{-1}(y)\triangleq \inf \{x\in \mathbb{R}:F(x)\geq y\}$ .
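As a concrete illustration of this notation, the following NumPy sketch (with a hypothetical Uniform$[0,1]$ ground truth, so the true CDF is $F(x) = x$) builds $F_n$ and its quantile function $F_n^{-1}$ from samples, and evaluates on a grid the supremum and $\ell_1$ discrepancies between $F_n$ and $F$.

```python
import numpy as np

def edf(samples):
    """Return the empirical CDF F_n and its quantile function F_n^{-1}."""
    xs = np.sort(np.asarray(samples))
    n = len(xs)
    def F_n(x):
        # F_n(x) = (1/n) * #{i : X_i <= x}
        return np.searchsorted(xs, x, side="right") / n
    def F_n_inv(y):
        # F_n^{-1}(y) = inf{x : F_n(x) >= y}, for y in (0, 1]
        return xs[int(np.ceil(y * n)) - 1]
    return F_n, F_n_inv

rng = np.random.default_rng(0)
samples = rng.uniform(0.0, 1.0, size=1000)   # true CDF: F(x) = x on [0, 1]
F_n, F_n_inv = edf(samples)

# Grid approximations of the two discrepancies between F_n and F.
grid = np.linspace(0.0, 1.0, 10001)
gaps = np.abs(F_n(grid) - grid)
sup_dist = gaps.max()    # ~ supremum distance ||F - F_n||_inf
w1_dist = gaps.mean()    # ~ Wasserstein distance ||F - F_n||_1 over [0, 1]
assert w1_dist <= sup_dist  # holds here since the support has length 1
```

On $[a,b]$ these distances always satisfy $\|F - F_n\|_1 \leq (b-a)\|F - F_n\|_\infty$, which is why supremum-distance bounds can be converted into (typically looser) Wasserstein-type bounds.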
Supremum distance For two CDFs $F,G\in \mathcal{D}$, the supremum distance between them is defined as

$$
\| F - G \|_{\infty} \triangleq \sup_{x \in \mathbb{R}} | F(x) - G(x) |.
$$

The DKW inequality (Dvoretzky et al., 1956; Massart, 1990) bounds the deviation of the empirical distribution from the true distribution in terms of the supremum distance with high probability.

Fact 1 (Two-sided DKW inequality). Let $\delta \in (0,1]$. Then the following holds with probability at least $1 - \delta$:

$$
\left\| F - F_n \right\|_{\infty} \leq c_n^{\infty} \triangleq \sqrt{\frac{\log(2/\delta)}{2n}}, \tag{1}
$$

where $c_n^{\infty}$ is the concentration radius.

The DKW inequality holds for any distribution, including discrete and unbounded distributions.

Wasserstein distance For CDFs $F, G \in \mathcal{D}$, the Wasserstein distance between them is defined as

$$
W_1(F, G) \triangleq \int_{-\infty}^{\infty} | F(x) - G(x) | \, dx.
$$

$W_1(F,G)$ can be expressed as the $\ell_1$ norm between $F$ and $G$; therefore we also write $W_1(F,G) = \| F - G \|_1$. Fournier & Guillin (2015) establish concentration bounds on the Wasserstein distance between the EDF and the underlying distribution without explicit constants. LA & Bhat (2022) give concentration results for sub-Gaussian distributions with explicit constants. As a corollary, Fact 2 provides the concentration bound for bounded distributions.

Fact 2. Let $F\in \mathcal{D}([a,b])$. With probability at least $1 - \delta$, for every $n\geq \log(1/\delta)$,

$$
\left\| F - F_n \right\|_1 \leq c_n^1 \triangleq \frac{256(b-a)}{\sqrt{n}} + 8(b-a)\sqrt{\frac{e \log(1/\delta)}{n}}, \tag{2}
$$

where $c_n^1$ is the concentration radius.

Risk measure In this paper, we interpret the random variable as a loss instead of a reward.
For two random variables $X \sim F$ and $Y \sim G$, we say that $Y$ dominates $X$, written $Y \succeq X$, if $F(x) \geq G(x)$ for all $x \in \mathbb{R}$. A risk measure $\mathbf{T}$ is defined as a functional mapping from a set of random variables $\mathcal{X}$ to the reals that satisfies the following conditions (Föllmer & Schied, 2010; Weber, 2006):

- Monotonicity: $X \preceq Y \Rightarrow \mathbf{T}(X) \leq \mathbf{T}(Y)$
- Translation-invariance: $\mathbf{T}(X + c) = \mathbf{T}(X) + c$ for $c \in \mathbb{R}$

A risk measure $\mathbf{T}$ is said to be distribution-invariant if $\mathbf{T}(X) = \mathbf{T}(Y)$ whenever $X$ and $Y$ follow the same distribution (Acerbi, 2002; Weber, 2006). In this paper, we only consider distribution-invariant risk measures, and we write $\mathbf{T}(F) = \mathbf{T}(X)$ for simplicity. We remark that there are other functionals mapping a random variable to a real number, e.g., the inequality measures (Kock et al., 2021), that do not satisfy monotonicity. In this paper, we derive confidence bounds for several classes of risk measures. It turns out that the monotonicity of risk measures plays an essential role in our optimization framework.

Table 1 summarizes the relevant risk measures considered in this paper. These risk measures are grouped into classes, namely SRM, DRM, CE, and RDEU. CVaR and ERM belong to the SRM and CE classes, respectively, based on specific choices of the weighting function $\phi$ and the utility function $u$. The specific conditions related to the definitions of these risk measures are listed below. Please refer to Appendix B for detailed descriptions.

- SRM: $\phi : [0,1] \to [0,\infty)$ is increasing and satisfies $\int_0^1 \phi(y)\,dy = 1$.

Table 1: List of risk measures
| RM | Notation | Definition |
| --- | --- | --- |
| SRM | $M_{\phi}(F)$ | $\int_0^1 \phi(y) F^{-1}(y)\,dy$ |
| DRM | $\rho_g(F)$ | $\int_0^{\infty} g(1 - F(x))\,dx$ |
| CE | $E_u(F)$ | $u^{-1}\left\{\int_{\mathbb{R}} u(x)\,dF(x)\right\}$ |
| RDEU | $V(F)$ | $\int_a^b v(x)\,dw(F(x))$ |
| CVaR | $C_{\alpha}(F)$ | $\inf_{\nu \in \mathbb{R}}\left\{\nu + \frac{1}{\alpha}\,\mathbb{E}_{X \sim F}\left[(X - \nu)^+\right]\right\}$ |
| ERM | $U_{\beta}(F)$ | $\frac{1}{\beta}\log\left\{\int_{\mathbb{R}} \exp(\beta x)\,dF(x)\right\}$ |
- DRM: $g:[0,1] \to [0,1]$ is a continuous, concave, and increasing function with $g(0) = 0$ and $g(1) = 1$.
- CE: $u$ is a continuous, convex, and strictly increasing function.
- RDEU: $w:[0,1] \to [0,1]$ is an increasing weight function with $w(0) = 0$ and $w(1) = 1$; $v:\mathbb{R} \to \mathbb{R}$ is an (unbounded) increasing differentiable function with $v(0) = 0$.
- CVaR: an instance of SRM with $\phi(y) = \frac{1}{\alpha} \mathbb{I}\{y \geq 1 - \alpha\}$.
- ERM: an instance of CE with $u(x) = \exp(\beta x)$.

It is more convenient to represent some risk measures using the IDF, e.g., the SRM $M_{\phi}(F) = \int_0^1 \phi(y) F^{-1}(y)\,dy$. For this reason, we overload notation and write $\mathbf{T}(F^{-1}) = \mathbf{T}(F)$ whenever convenient.

# 3. Distribution Optimization Framework

# 3.1. Global Lipschitz Constant-based Approach

Fact 1 and Fact 2 present the concentration bound of the empirical distribution in terms of the supremum distance and the Wasserstein distance, respectively. They can be written in a unified way: with probability at least $1 - \delta$, we have

$$
\left\| F - F_n \right\|_p \leq c_n^p, \tag{3}
$$

where $p = 1$ indicates the Wasserstein distance and $p = \infty$ indicates the supremum distance. To relate the concentration bound of the EDF to that of a risk measure, Kock et al. (2021); Bhat & LA (2019) use the Lipschitz property of the risk measure, i.e., for any two CDFs $F,G\in \mathcal{D}([a,b])$, there exists $L_p(\mathbf{T}) > 0$ such that the risk measure $\mathbf{T}$ satisfies

$$
| \mathbf{T}(F) - \mathbf{T}(G) | \leq L_p(\mathbf{T}) \| F - G \|_p. \tag{4}
$$

$L_p(\mathbf{T})$ is called the global Lipschitz constant (GLC) of $\mathbf{T}$ w.r.t. $\| \cdot \|_p$ since the inequality holds for all possible pairs of CDFs. Combining Equation 3 and Equation 4, Kock et al.
(2021); Bhat & LA (2019) establish the concentration bounds of a class of Lipschitz functionals:

$$
\mathbf{T}(F_n) - L_p(\mathbf{T})\, c_n^p \leq \mathbf{T}(F) \leq \mathbf{T}(F_n) + L_p(\mathbf{T})\, c_n^p.
$$

The quality of the above bounds relies on the tightness of $L_p(\mathbf{T})$, so the sharpest bounds obtainable this way hinge on identifying the tightest GLC:

$$
L_p(\mathbf{T}) \triangleq \sup_{G, G' \in \mathcal{D}([a,b])} \frac{\mathbf{T}(G) - \mathbf{T}(G')}{\| G - G' \|_p},
$$

where we overload the notation $L_p(\mathbf{T})$. The GLC-based approach suffers from several limitations. The GLC may not be easy to compute, especially for highly nonlinear risk measures; in most cases, one may only obtain an upper bound on it as a surrogate. Moreover, even the resulting concentration bounds are far from optimal: the confidence bounds are set to be the product of the GLC and the confidence radius, yet the GLC is loose since it is evaluated over the whole space of bounded distributions.

# 3.2. Local Lipschitz Constant-based Approach

Before introducing our framework as a remedy, we propose a new baseline approach that improves the previous bounds. Observe that Equation 3, together with the boundedness of $F$, can be written as the norm ball constraint

$$
B_p\left(F_n, c_n^p\right) \triangleq \left\{ F \mid \| F - F_n \|_p \leq c_n^p,\ F \in \mathcal{D}([a,b]) \right\}.
$$

Define the local Lipschitz constant (LLC) over $B_p(F_n, c_n^p)$:

$$
L_p\left(\mathbf{T}; F_n, c_n^p\right) \triangleq \sup_{G, G' \in B_p\left(F_n, c_n^p\right)} \frac{\mathbf{T}(G) - \mathbf{T}(G')}{\| G - G' \|_p} \leq \sup_{G, G' \in \mathcal{D}([a,b])} \frac{\mathbf{T}(G) - \mathbf{T}(G')}{\| G - G' \|_p} = L_p(\mathbf{T}).
$$

For simplicity, we drop $\mathbf{T}$ from the Lipschitz constants. We thus obtain upper/lower confidence bounds (UCB/LCB) that are tighter than the GLC-based ones:

$$
\mathbf{T}(F_n) + L_p(F_n, c_n^p)\, c_n^p \leq \mathbf{T}(F_n) + L_p c_n^p, \qquad \mathbf{T}(F_n) - L_p(F_n, c_n^p)\, c_n^p \geq \mathbf{T}(F_n) - L_p c_n^p.
$$

As the sample size increases, the confidence radius $c_n^p$ shrinks, leading to a smaller LLC and sharper bounds. In contrast, the previous bounds do not adapt to the sample size.

# 3.3. Distribution Optimization Framework

We now propose our unified framework to derive confidence bounds for a broad range of risk measures. The idea is simple and intuitive: given a risk measure, we maximize/minimize the risk measure value over the confidence ball and set the maximal/minimal value as the UCB/LCB. By recasting the problem of finding the confidence bounds as a constrained optimization problem, we obtain the optimal bounds from Equation 3. Different choices of distances lead to two types of frameworks:

$$
\begin{array}{ll} \max_{G \in \mathcal{D}([a,b])} & \mathbf{T}(G) \\ \text{s.t.} & \| G - F_n \|_1 \leq c_n^1 \end{array} \tag{5}
$$

and

$$
\begin{array}{ll} \max_{G \in \mathcal{D}([a,b])} & \mathbf{T}(G) \\ \text{s.t.} & \| G - F_n \|_{\infty} \leq c_n^{\infty} \end{array} \tag{6}
$$

We can obtain the LCB by reverting the maximization formulation to a minimization formulation.
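To ground the comparison, the GLC-based baseline of Section 3.1 is easy to code. The sketch below (ours; the helper names are illustrative) computes the empirical CVaR of the losses and widens it by $L_{\infty}(C_{\alpha})\, c_n^{\infty}$, where $c_n^{\infty}$ is the DKW radius of Fact 1 and $L_{\infty}(C_{\alpha}) = (b-a)/\alpha$ is the GLC of CVaR for the supremum distance quoted in Section 5.3. The framework above instead optimizes $\mathbf{T}$ directly over the confidence ball, which can only tighten this interval.

```python
from math import log, sqrt

def dkw_radius(n, delta):
    # Two-sided DKW concentration radius c_n^inf (Fact 1).
    return sqrt(log(2 / delta) / (2 * n))

def empirical_cvar(losses, alpha):
    # CVaR_alpha of the EDF: average of the upper alpha-mass of the losses,
    # each sample carrying mass 1/n.
    xs = sorted(losses, reverse=True)
    n = len(xs)
    need, acc = alpha, 0.0
    for x in xs:
        take = min(1.0 / n, need)
        acc += take * x
        need -= take
        if need <= 0:
            break
    return acc / alpha

def glc_cvar_interval(losses, alpha, a, b, delta):
    # GLC-based interval: T(F_n) +/- L_inf(C_alpha) * c_n^inf.
    c = dkw_radius(len(losses), delta)
    L = (b - a) / alpha  # global Lipschitz constant of CVaR (sup distance)
    est = empirical_cvar(losses, alpha)
    return est - L * c, est + L * c
```

For example, four losses $\{0, 1, 2, 3\}$ with $\alpha = 0.5$ give an empirical CVaR of $2.5$, the mean of the top two samples.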
Denote by $\overline{F_n^p}$ ($\underline{F_n^p}$) the maximizer (minimizer); the UCB and LCB are then set to $\mathbf{T}\left(\overline{F_n^p}\right)$ and $\mathbf{T}\left(\underline{F_n^p}\right)$, respectively. In the sequel, we may drop $p$ when a statement holds for either $p = 1$ or $p = \infty$.

To demonstrate the optimality of our framework, observe that $\overline{F_n} \in B(F_n, c_n)$, therefore

$$
\mathbf{T}\left(\overline{F}_n\right) \leq \mathbf{T}(F_n) + L(F_n, c_n)\, c_n \leq \mathbf{T}(F_n) + L c_n. \tag{7}
$$

That is, $\mathbf{T}\left(\overline{F_n}\right)$ is tighter than the bound derived from the tightest LLC, i.e., our new baseline approach, which already improves the previous bounds.

One may wonder whether $\overline{F_n}$ and $\underline{F_n}$ are easy to obtain. Fortunately, we will show in the next section that they admit an analytic form for almost all risk measures introduced in Section 2. Moreover, we will use Equation 7 to quantify the tightness of our confidence bounds in Section 5. For ease of notation, we will omit $\mathcal{D}([a,b])$.

# 4. Closed-form Solution

The following theorems present the closed-form solutions to Formulations 5-6. The proofs are deferred to Appendix C.

Theorem 4.1.
For any risk measure satisfying the monotonicity, the optimal solution to Formulation 6 is given by

$$
\overline{F_n^{\infty}} = \mathbf{P}_{c_n^{\infty}}^{\infty} F_n, \quad \underline{F_n^{\infty}} = \mathbf{N}_{c_n^{\infty}}^{\infty} F_n, \tag{8}
$$

where $\mathbf{P}_c^{\infty}/\mathbf{N}_c^{\infty} : \mathcal{D}([a,b]) \to \mathcal{D}([a,b])$ is the positive/negative operator with coefficient $c > 0$ for the supremum distance, defined as follows:

$$
\left(\mathbf{P}_c^{\infty} F\right)(x) \triangleq \max\left\{ F(x) - c\,\mathbb{I}\{x < b\},\ 0 \right\},
$$

$$
\left(\mathbf{N}_c^{\infty} F\right)(x) \triangleq \min\left\{ F(x) + c\,\mathbb{I}\{x \geq a\},\ 1 \right\}.
$$

The supremum ball $B_{\infty}(F_n, c_n^{\infty})$ consists of the CDFs within the area sandwiched between $\mathbf{P}_{c_n^{\infty}}^{\infty} F_n$ and $\mathbf{N}_{c_n^{\infty}}^{\infty} F_n$ (see Figure 1). Since any risk measure $\mathbf{T}$ is monotonic, and

$$
\left(\mathbf{P}_{c_n^{\infty}}^{\infty} F_n\right)(x) \leq G(x) \leq \left(\mathbf{N}_{c_n^{\infty}}^{\infty} F_n\right)(x), \quad \forall x \in \mathbb{R},\ \forall G \in B_{\infty}(F_n, c_n^{\infty}),
$$

$\mathbf{P}_{c_n^{\infty}}^{\infty} F_n$ and $\mathbf{N}_{c_n^{\infty}}^{\infty} F_n$ are the maximizer and the minimizer, respectively. Another interpretation is that $\mathbf{P}_{c_n^{\infty}}^{\infty}$

![](images/50d3805b0a6b6f50e5eea77699fea260703af1b3f7e7ab3e178fd780fb3b27b2.jpg)
Figure 1: $F_n$ (black), $\mathbf{P}_{c_n^{\infty}}^{\infty} F_n$ (blue) and $\mathbf{N}_{c_n^{\infty}}^{\infty} F_n$ (red).

![](images/8908b8d5546e2ebf09b844aaf1b03b105d7f8255e7320fdfee6fc0814f00e100.jpg)
Figure 2: $F_n$ (black) and $\mathbf{P}_{c_n^1}^1 F_n$ (red).
$\mathbf{P}_{c_n^1}^1 F_n$ overlaps $F_n$ for $x < X_{(n^+)}$, and it has only two jumps, at $X_{(n^+)}$ and $b$, for $x \geq X_{(n^+)}$.

transports the leftmost atoms of $F_n$ with total mass $c_n^{\infty}$ to the maximally possible atom $b$, while $\mathbf{N}_{c_n^{\infty}}^{\infty}$ transports the rightmost atoms of $F_n$ with total mass $c_n^{\infty}$ to the minimally possible atom $a$. Although we can explicitly represent the optimal solutions in the PMF form, it is more convenient to work with the CDF form. Please refer to Appendix F for more details.

Remark 4.2. The positive operator in Equation 8 reduces to the optimistic operator introduced in CVaR bandit/RL (Tamkin et al., 2019; Keramati et al., 2020). However, they only consider the case of CVaR, while we generalize it to arbitrary risk measures.

Theorem 4.3. For the risk measures except RDEU in Section 2, the optimal solution to Formulation 5 is given by

$$
\overline{F_n^1} = \mathbf{P}_{c_n^1}^{1} F_n, \quad \underline{F_n^1} = \mathbf{N}_{c_n^1}^{1} F_n, \tag{9}
$$

where $\mathbf{P}_c^1 / \mathbf{N}_c^1 : \mathcal{D}([a,b]) \to \mathcal{D}([a,b])$ is called the positive/negative operator with coefficient $c$ for the Wasserstein distance, which is defined as follows.

Fix $F_n$ and $c_n^1 > 0$. Let $X_{(1)} \leq X_{(2)} \leq \cdots \leq X_{(n)}$ be the order statistics of $\{X_i\}$. For $i \in [n]$, we recursively define

$$
S_1^+ \triangleq \frac{1}{n}\left(b - X_{(n)}\right), \quad S_i^+ \triangleq S_{i-1}^+ + \frac{1}{n}\left(b - X_{(n+1-i)}\right).
$$

The geometric interpretation of $S_i^+$ is the area sandwiched between $F_n$ and the horizontal line $1 - \frac{i}{n}$ (see Figure 2). Define $i^+ \triangleq \min\{i : S_i^+ \geq c_n^1\}$ as the first index at which $S_i^+$ exceeds $c_n^1$, and let $n^+ \triangleq n + 1 - i^+$.
Then $\mathbf{P}_{c_n^1}^1 F_n$ is a categorical distribution with atoms $\{X_{(i)}\}_{i \in [n^+]} \cup \{b\}$. The probability masses of $X_{(n^+)}$ and $b$ are assigned to be

$$
p_{n^+} \triangleq \frac{1}{b - X_{(n^+)}}\left(S_{i^+}^+ - c_n^1\right), \quad p_b \triangleq \frac{i^+}{n} - p_{n^+},
$$

while the probability mass of each of the first $n^+ - 1$ atoms $\{X_{(i)}\}_{i \in [n^+ - 1]}$ remains $\frac{1}{n}$. To be more precise, $\mathbf{P}_{c_n^1}^1 F_n$ is described by the following probability mass function (PMF):

$$
\frac{1}{n} \sum_{i=1}^{n^+ - 1} \delta_{X_{(i)}} + p_{n^+} \cdot \delta_{X_{(n^+)}} + p_b \cdot \delta_b.
$$

The way of transforming $F_n$ into $\mathbf{P}_{c_n^1}^1 F_n$ resembles the well-known water-filling algorithm (Telatar, 1999) in wireless communications, but in the opposite direction. Imagine that gravity is reversed to point upward, and we fill water of amount $c_n^1$ into a tank enclosed by $F_n$ and the vertical line $x = b$. The water is sequentially filled into the bins from right to left, where the $i$-th bin corresponds to $X_{(n+1-i)}$, until the water is used up at the $i^+$-th bin. By a volume argument, the water level is $\frac{n^+ - 1}{n} + p_{n^+}$. We then recover the analytic form via the shape of $\mathbf{P}_{c_n^1}^1 F_n$.

Another interpretation is that $\mathbf{P}_{c_n^1}^1$ replaces the probability mass $\frac{1}{n} - p_{n^+}$ of $X_{(n^+)}$ and all the atoms to its right, $\{X_{(i)}\}_{n^+ < i \leq n}$, by the upper bound $b$. For convenience, we let $X_{(0)} = a$. We recursively define for $i \in [n]$

$$
S_1^- \triangleq \frac{X_{(n)} - X_{(n-1)}}{n}, \quad S_i^- \triangleq S_{i-1}^- + \frac{i\left(X_{(n+1-i)} - X_{(n-i)}\right)}{n}.
$$

Now $S_i^-$ represents the area sandwiched between $F_n$ and the vertical line $x = X_{(n-i)}$ (see Figure 3).
Define $i^- \triangleq \min\{i : S_i^- \geq c_n^1\}$ as the first index at which $S_i^-$ exceeds $c_n^1$, and let $n^- \triangleq n + 1 - i^-$. Then $\mathbf{N}_{c_n^1}^1 F_n$ is a categorical distribution with atoms $\{X_{(i)}\}_{i \in [n^- - 1]} \cup \{b^-\}$, where $b^-$ is given by

$$
b^- \triangleq X_{(n^- - 1)} + \frac{n}{i^-}\left(S_{i^-}^- - c_n^1\right).
$$

$\mathbf{N}_{c_n^1}^1 F_n$ is given by the PMF

$$
\frac{1}{n} \sum_{i=1}^{n^- - 1} \delta_{X_{(i)}} + \frac{i^-}{n} \cdot \delta_{b^-}.
$$

$\mathbf{N}_{c_n^1}^1 F_n$ mirrors the water-filling, but we now fill the water rightward until it is used up, with water level $b^-$. It can also be interpreted as replacing $X_{(n^-)}$ and the atoms to its right, with a total mass of $\frac{i^-}{n}$, by a single atom $b^- \leq X_{(n^-)}$.

![](images/04583923478b2bc4a1efb333f9ec5fa610911f3e3c4242cea3ed9ea333bb534f.jpg)
Figure 3: $F_n$ (black) and $\mathbf{N}_{c_n^1}^1 F_n$ (blue). $\mathbf{N}_{c_n^1}^1 F_n$ overlaps $F_n$ for $x < b^-$ and has a single jump at $b^-$ with height $\frac{i^-}{n}$.

Computational issue. We present algorithms that implement Formulations 5-6 in practice in Appendix F. We demonstrate that their computational complexity is only slightly higher than that of the LC-based methods.

Remark 4.4. For both distances, we only require $F$ to be bounded above by a known constant $b$ to apply $\mathbf{P}_{c_n} F_n$, and bounded below by a known constant $a$ to apply $\mathbf{N}_{c_n} F_n$. Thus we only require $F$ to be bounded on one side to obtain a one-sided confidence bound.

# 5. Improvement of Confidence Bounds

# 5.1. Derivation of the LLC

We present a systematic way of computing the LLC over the confidence ball $B(F_n, c_n)$, which bridges our framework and the GLC-based method.
Define $\psi(t; F, G) \triangleq \mathbf{T}((1-t)F + tG)$ for $F, G \in \mathcal{D}([a,b])$ and $t \in [0,1]$. For simplicity, we may drop $F, G$ and write $\psi(t)$ when it is clear from the context. Note that $\psi(0) = \mathbf{T}(F)$, $\psi(1) = \mathbf{T}(G)$, and $(1-t)F + tG \in \mathcal{D}([a,b])$ for all $t \in [0,1]$. It can be shown that $\psi$ is continuously differentiable under some mild conditions on $\mathbf{T}$. Observe that

$$
\begin{array}{l}
L(\mathbf{T}; F_n, c_n) = \sup_{F, G \in B(F_n, c_n)} \frac{\mathbf{T}(F) - \mathbf{T}(G)}{\| F - G \|} \\
= \sup_{F, G \in B(F_n, c_n)} \frac{\psi(1; F, G) - \psi(0; F, G)}{\| F - G \|} \\
\leq \sup_{F, G \in B(F_n, c_n),\, t \in [0,1]} \frac{\psi'(t; F, G)}{\| F - G \|} \\
\leq \sup_{F, G \in B(F_n, c_n),\, t \in [0,1]} \boldsymbol{v}((1-t)F + tG) \\
= \sup_{F \in B(F_n, c_n)} \boldsymbol{v}(F), \\
\end{array}
$$

where $\boldsymbol{v}$ is a functional that satisfies, for any $F, G$,

$$
\psi'(t; F, G) \leq \boldsymbol{v}((1-t)F + tG)\, \| F - G \|.
$$

Note that $\boldsymbol{v}$ implicitly depends on $p$ and the risk measure $\mathbf{T}$. Consequently, we can obtain an upper bound on the LLC by bounding the last term, and an upper bound on the GLC by removing the ball constraint. For interested readers, please refer to Table 4 in Appendix D for the functional $\boldsymbol{v}$ for different risk measures. We will use SRM as an example to illustrate the procedure.

# 5.1.1. AN EXAMPLE: SRM

Here we consider an alternative form of SRM, $M_{\phi}(F) = \int_a^b \phi(F(x))\, x\, dF(x)$.
$\psi$ is continuously differentiable with derivative

$$
\begin{array}{l}
\psi'(t) = \frac{d}{dt} \int_a^b \phi\left(((1-t)F + tG)(x)\right) x\, d((1-t)F + tG)(x) \\
= -\int_a^b (G - F)(x)\, \phi\left(((1-t)F + tG)(x)\right) dx \\
\leq \| G - F \|_p \left\| \phi((1-t)F + tG) \right\|_q. \\
\end{array}
$$

We omit the details in the second equality and leave the full derivations to Appendix D. Since

$$
\frac{\psi'(t; F, G)}{\| F - G \|_p} \leq \| \phi((1-t)F + tG) \|_q,
$$

where $\| \cdot \|_q$ is the dual norm of $\| \cdot \|_p$, we obtain $\boldsymbol{v}(F) = \| \phi(F) \|_q$. Consider the case $p = \infty$. Since $\underline{F_n^{\infty}} \preceq F$ for every $F \in B_{\infty}(F_n, c_n^{\infty})$, we have $\phi\left(\underline{F_n^{\infty}}(x)\right) \geq \phi(F(x))$ for all $F \in B_{\infty}(F_n, c_n^{\infty})$ and all $x \in [a,b]$. It holds that

$$
\max_{F \in B_{\infty}(F_n, c_n^{\infty})} \| \phi(F) \|_1 = \max_{F \in B_{\infty}(F_n, c_n^{\infty})} \int_a^b \phi(F(x))\, dx = \int_a^b \phi\left(\underline{F_n^{\infty}}(x)\right) dx = \left\| \phi\left(\underline{F_n^{\infty}}\right) \right\|_1.
$$

In contrast, the GLC can be bounded by choosing $F = \delta_a$:

$$
\max_{F \in \mathcal{D}([a,b])} \int_a^b \phi(F(x))\, dx = (b-a) \| \phi \|_{\infty} = (b-a)\, \phi(1).
$$

Following this principle, we obtain the LLCs for the other risk measures (cf. Table 2).

# 5.2. Improvement of Distribution Optimization Framework

Equation 7 qualitatively establishes that the confidence bounds derived from our framework are tighter than those based on the LLC:

$$
\mathbf{T}\left(\overline{F_n}\right) - \mathbf{T}(F_n) \leq L(F_n, c_n)\, c_n.
$$

Furthermore, we can quantitatively show the improvement:

$$
\begin{array}{l}
\mathbf{T}\left(\overline{F_n}\right) - \mathbf{T}(F_n) = \psi\left(1; F_n, \overline{F_n}\right) - \psi\left(0; F_n, \overline{F_n}\right) \\
\leq \max_{t \in [0,1]} \psi'\left(t; F_n, \overline{F_n}\right) \\
\leq \max_{t \in [0,1]} \boldsymbol{v}\left((1-t)F_n + t\,\overline{F_n}\right) c_n, \\
\end{array}
$$

where the last inequality follows from the definition of $\boldsymbol{v}$ and $\left\| \overline{F_n} - F_n \right\| \leq c_n$. The following also holds:

$$
\mathbf{T}(F_n) - \mathbf{T}\left(\underline{F_n}\right) \leq \max_{t \in [0,1]} \boldsymbol{v}\left((1-t)\underline{F_n} + t F_n\right) c_n.
$$

Therefore, it is convenient to compare $\mathbf{T}\left(\overline{F_n}\right) - \mathbf{T}(F_n)$ with $L(F_n, c_n)\, c_n$, which is done for the supremum distance in Table 3. For convenience, we normalize both by $c_n$ and state the results for a general CDF $F$. Our UCBs are strictly and consistently tighter than the LLC-based bounds for the supremum distance. Due to the space limit, the results for the Wasserstein distance are shown in Appendix D.

# 5.3. Illustrating example: CVaR

We use CVaR to illustrate Table 2 and Table 3. Table 2 compares the LLC with the GLC for different risk measures. The CVaR row in Table 2 shows that the LLC and GLC of CVaR for the Wasserstein distance are identical. In addition, the GLC of CVaR for the supremum distance is $\frac{b-a}{\alpha}$, which is larger than its LLC $\frac{b - F^{-1}((1-\alpha-c)^+)}{\alpha}$.

Table 3 presents the improvement of our bound for the supremum distance.
The second column in Table 3 implies

$$
\begin{array}{l}
\frac{C_{\alpha}\left(\overline{F^{\infty}}\right) - C_{\alpha}(F)}{c} = \frac{b - F^{-1}(1-\alpha)}{\alpha} \\
< L_{\infty}\left(C_{\alpha}; F, c\right) = \frac{b - F^{-1}(1-\alpha-c)}{\alpha} \\
< L_{\infty}(C_{\alpha}) = \frac{b-a}{\alpha}. \\
\end{array}
$$

Our upper bound is strictly tighter than the LLC-based and GLC-based bounds. The improvement depends on the distribution $F$ and on $\alpha$. For small $\alpha$ and a distribution $F$ with a non-fat upper tail, $b - F^{-1}(1-\alpha) \ll b - a$, leading to a much finer bound. In particular, consider a uniform distribution:

$$
\begin{array}{l}
\frac{C_{\alpha}\left(\overline{F^{\infty}}\right) - C_{\alpha}(F)}{c} = \frac{\alpha(b-a)}{\alpha} \\
< L_{\infty}\left(C_{\alpha}; F, c\right) = \frac{(\alpha+c)(b-a)}{\alpha} \\
< L_{\infty}(C_{\alpha}) = \frac{b-a}{\alpha}. \\
\end{array}
$$

For the uniform distribution, our bound improves by a factor of $1/\alpha$ and $(\alpha+c)/\alpha$ over the GLC-based and LLC-based bounds, respectively.

# 6. Application to Risk-sensitive Bandits

Now we consider risk-sensitive multi-armed bandit (MAB) problems. The quality of each arm is measured by the risk measure value of its loss distribution. The loss distribution of the $i$-th arm is denoted by $F_i$, and the risk value associated with $\mathbf{T}$ is $\mathbf{T}(F_i)$. The algorithm interacts

Table 2: Comparison between the LLC and the GLC
| RM | Local ($p=1$) | Global ($p=1$) | Improvement | Local ($p=\infty$) | Global ($p=\infty$) | Improvement |
| --- | --- | --- | --- | --- | --- | --- |
| CVaR | $\frac{1}{\alpha}$ | $\frac{1}{\alpha}$ | ✗ | $\frac{b - F_n^{-1}((1-\alpha-c)^+)}{\alpha}$ | $\frac{b-a}{\alpha}$ | ✓ |
| SRM | $\phi(1)$ | $\phi(1)$ | ✗ | $\Vert \phi(\underline{F_n^{\infty}}) \Vert_1$ | $(b-a)\,\phi(1)$ | ✓ |
| DRM | $\Vert g' \Vert_{\infty}$ | $\Vert g' \Vert_{\infty}$ | ✗ | $\Vert g'(1 - \underline{F_n^{\infty}}) \Vert_1$ | $(b-a) \Vert g' \Vert_{\infty}$ | ✓ |
| ERM | $\frac{\exp(\beta b)}{\int_a^b \exp(\beta x)\, d\underline{F_n^1}(x)}$ | $\exp(\beta(b-a))$ | ✓ | $\frac{\exp(\beta b) - \exp(\beta a)}{\beta \int_a^b \exp(\beta x)\, d\underline{F_n^{\infty}}(x)}$ | $\frac{\exp(\beta(b-a)) - 1}{\beta}$ | ✓ |
| RDEU | N/A | $2 \Vert w' \Vert_{\infty} \Vert v' \Vert_{\infty}$ | N/A | $\Vert w'(\underline{F_n^{\infty}})\, v' \Vert_1$ | $\Vert w' \Vert_{\infty} \Vert v' \Vert_1$ | ✓ |
Table 3: Improvement of confidence bounds for the supremum distance over the LLC
| RM | CVaR | SRM | DRM | ERM | RDEU |
| --- | --- | --- | --- | --- | --- |
| $L_{\infty}(\mathbf{T}; F, c)$ | $\frac{b - F^{-1}(1-\alpha-c)}{\alpha}$ | $\Vert \phi(\underline{F^{\infty}}) \Vert_1$ | $\Vert g'(1 - \underline{F^{\infty}}) \Vert_1$ | $\frac{\exp(\beta b) - \exp(\beta a)}{\beta \int_a^b \exp(\beta x)\, d\underline{F^{\infty}}(x)}$ | $\Vert w'(\underline{F^{\infty}})\, v' \Vert_1$ |
| $\frac{\mathbf{T}(\overline{F^{\infty}}) - \mathbf{T}(F)}{c}$ | $\frac{b - F^{-1}(1-\alpha)}{\alpha}$ | $\Vert \phi(F) \Vert_1$ | $\Vert g'(1-F) \Vert_1$ | $\frac{\exp(\beta b) - \exp(\beta a)}{\beta \int_a^b \exp(\beta x)\, dF(x)}$ | $\Vert w'(F)\, v' \Vert_1$ |
| Improvement | ✓ | ✓ | ✓ | ✓ | ✓ |
with a bandit instance $\nu = (F_i)_{i \in [K]}$ for $N$ rounds. In each round $t \in [N]$, the algorithm $\pi$ chooses an arm $I_t \in [K]$ and observes the loss $X_t \sim F_{I_t}$. The performance of an algorithm $\pi$ is measured by the cumulative regret

$$
\operatorname{Regret}(\pi, \nu, N) \triangleq \mathbb{E}\left[ \sum_{t \in [N]} \left( \mathbf{T}\left(F_{I_t}\right) - \min_{i \in [K]} \mathbf{T}(F_i) \right) \right].
$$

While UCB-type algorithms (Auer et al., 2002) are widely applied to risk-neutral MAB problems, they rely on the concentration bound of the mean. We propose a meta-algorithm (cf. Algorithm 1) to deal with generic risk measures, where the LCB is derived from our framework. The algorithm maintains the EDF $\hat{F}_{i,t}$ for each arm $i$:

$$
\hat{F}_{i,t} \triangleq \frac{1}{s_i(t)} \sum_{t'=1}^{t-1} \mathbb{I}\left\{ X_{t'} \leq \cdot\,,\ I_{t'} = i \right\}, \tag{10}
$$

where $s_i(t) \triangleq \sum_{t'=1}^{t-1} \mathbb{I}\{I_{t'} = i\}$ is the number of times arm $i$ has been pulled up to time $t$. For convenience, we assume the first arm is the optimal arm, i.e., $\mathbf{T}(F_1) < \mathbf{T}(F_i)$ for $i \neq 1$. When specializing the risk measure to CVaR, we obtain a distribution-dependent regret upper bound.

Proposition 6.1.
The expected regret of Algorithm 1 on an instance $\nu \in \mathcal{D}([a,b])$ with $\mathbf{T} = C_{\alpha}$ is bounded as

$$
\begin{array}{l}
\operatorname{Regret}(\mathrm{LCB}, \nu, N) \\
\leq \frac{4 \log(\sqrt{2} N)}{\alpha^2} \sum_{i > 1}^{K} \frac{\left(b - F_i^{-1}(1 - \alpha - 2 c_i^*)\right)^2}{\Delta_i} + 3 \sum_{i=1}^{K} \Delta_i,
\end{array}
$$

where the sub-optimality gap $\Delta_i \triangleq C_{\alpha}(F_i) - C_{\alpha}(F_1) <$

# Algorithm 1 Lower Confidence Bound

1: Input: $N, a$
2: for round $t \in [K]$ do
3: Pull arm $I_t \gets t$
4: end for
5: for round $t = K+1, K+2, \dots, N$ do
6: Compute $\hat{F}_{i,t}$ via Equation 10
7: Set $c_i(t) \gets \sqrt{\frac{\log(2KN^2)}{s_i(t)}}$ for all $i \in [K]$
8: $\underline{F}_{i,t} \gets \mathbf{N}_{c_i(t)}^{\infty} \hat{F}_{i,t}$
9: $I_t \gets \arg\min_{i \in [K]} \mathbf{T}\left(\underline{F}_{i,t}\right)$
10: end for

$b - a$, and $c_i^*$ is an $F_i$-dependent constant that solves the equation $2 \frac{b - F_i^{-1}(1 - \alpha - 2c)}{\alpha}\, c = \Delta_i$.

The proof is deferred to Appendix E.

Remark 6.2. The meta-algorithm for CVaR reduces to the CVaR-UCB algorithm (Tamkin et al., 2019). Interestingly, Tamkin et al. (2019) empirically observe that CVaR-UCB outperforms the GLC-based algorithm U-UCB (Cassel et al., 2018) by an order of magnitude, but they only provide a regret bound of

$$
\frac{4 \log(\sqrt{2} N)}{\alpha^2} \sum_{i > 1}^{K} \frac{(b-a)^2}{\Delta_i} + 3 \sum_{i=1}^{K} \Delta_i,
$$

which matches that of U-UCB. We fill this gap by quantifying the improvement in magnitude:

$$
\sum_{i > 1} \frac{\left(b - F_i^{-1}(1 - \alpha - 2 c_i^*)\right)^2}{\Delta_i^2} \Big/ \sum_{i > 1} \frac{(b-a)^2}{\Delta_i^2} < 1.
$$

Remark 6.3. Baudry et al.
(2021) introduce a Thompson Sampling algorithm, B-CVTS, for CVaR bandits with bounded rewards, which is the first asymptotically optimal CVaR bandit algorithm. Notably, our main contribution in this paper is the framework for improving confidence bounds rather than the design of optimal CVaR bandit algorithms. Additionally, CVaR-UCB has several advantages: it can perform incremental updates to compute the CVaR values, whereas B-CVTS needs to maintain and sample from posteriors over distributions, so the time and space complexity of CVaR-UCB is quite low. Moreover, CVaR-UCB can be applied to semi-unbounded distributions, e.g., the log-normal distribution, while B-CVTS assumes bounded rewards.

# 7. Numerical Experiments

To better visualize the benefits of our framework relative to those of the LLC and GLC, we conducted a series of empirical comparisons. Details and complete figures are deferred to Appendix G.

Confidence bounds. We consider five different beta distributions and two risk measures: CVaR and ERM. Due to space limitations, we provide the results for one typical beta distribution and one particular distance in Figure 4. Our bounds are consistently tighter than the LC-based ones for various risk measures and varying sample sizes.

CVaR bandits. We compare CVaR-UCB with the UCB algorithm using the GLC (GLC-UCB) and using the LLC (LLC-UCB) in Figure 5. It shows that CVaR-UCB outperforms LLC-UCB and GLC-UCB.

# 8. Conclusion

We propose a distribution optimization framework to obtain improved confidence bounds for several risk measures. By viewing the solutions as certain transformations of the EDF, we design efficient algorithms to compute the confidence bounds. The tightness of our bounds is further illustrated via comparisons with the new baseline method.

The major limitation is that our framework only deals with bounded distributions in general. However, it is applicable to semi-unbounded distributions for CVaR, SRM, DRM, and RDEU.
It would be interesting to study the distribution optimization framework under more general assumptions, e.g., sub-Gaussian or sub-exponential distributions. Another promising future direction is to generalize the framework to the multivariate setting. One may apply the multivariate DKW inequality to multivariate risk measures.

![](images/57281a535a1bfc97dfa55c78bcb9da6f6035cd725027202f773b5b0c5a13a7bf.jpg)
(a) CVaR UCB via supremum distance

![](images/db10fccbbd9ed994a388be459ebf21c5033468ea0b5d67dfce2eebf31a91b477.jpg)
(b) CVaR LCB via supremum distance

![](images/5a8578dd3b4d191a92ed08878811c733851a3981c744bd9eb909a4e022e1ba81.jpg)
(c) ERM UCB via Wasserstein distance

![](images/f306799336645ecfd432229f777c8e882eefe5804b2ecf724b646f9a462407cd.jpg)
(d) ERM LCB via Wasserstein distance

![](images/a91e6a1106e043b8d62f1f3e541bcfde76c498b11ca8da45c513f9100e33386a.jpg)
Figure 4: Comparisons of CIs for CVaR and ERM with varying sample sizes.
Figure 5: Cumulative CVaR-regret of CVaR-UCB (red), LLC-UCB (blue), and GLC-UCB (green).

# Acknowledgements

We thank all the anonymous reviewers for their helpful comments and suggestions. The work of Zhi-quan Luo was supported in part by the National Key Research and Development Project under grant 2022YFA1003900 and in part by the Guangdong Provincial Key Laboratory of Big Data Computing.

# References

Acerbi, C. Spectral measures of risk: A coherent representation of subjective risk aversion. Journal of Banking & Finance, 26(7):1505-1518, 2002.
Acerbi, C. and Tasche, D. On the coherence of expected shortfall. Journal of Banking & Finance, 26(7):1487-1503, 2002.
Auer, P., Cesa-Bianchi, N., and Fischer, P. Finite-time analysis of the multiarmed bandit problem. Machine Learning, 47(2):235-256, 2002.
Baudry, D., Gautron, R., Kaufmann, E., and Maillard, O. Optimal Thompson sampling strategies for support-aware CVaR bandits. In International Conference on Machine Learning, pp. 716-726.
PMLR, 2021. +Bäuerle, N. and Rieder, U. More risk-sensitive Markov decision processes. Mathematics of Operations Research, 39(1):105-120, 2014. +Bhat, S. P. and LA, P. Concentration of risk measures: A Wasserstein distance approach. Advances in Neural Information Processing Systems, 32, 2019. +Brown, D. B. Large deviations bounds for estimating conditional value-at-risk. Operations Research Letters, 35(6):722-730, 2007. +Cassel, A., Mannor, S., and Zeevi, A. A general approach to multi-armed bandits under risk criteria. In Conference On Learning Theory, pp. 1295-1306. PMLR, 2018. +Chandak, Y., Niekum, S., da Silva, B., Learned-Miller, E., Brunskill, E., and Thomas, P. S. Universal off-policy evaluation. Advances in Neural Information Processing Systems, 34:27475-27490, 2021. +Dvoretzky, A., Kiefer, J., and Wolfowitz, J. Asymptotic minimax character of the sample distribution function and of the classical multinomial estimator. The Annals of Mathematical Statistics, pp. 642-669, 1956. +Föllmer, H. and Schied, A. Convex and coherent risk measures. Encyclopedia of Quantitative Finance, pp. 355-363, 2010. +Föllmer, H. and Schied, A. Stochastic finance. de Gruyter, 2016. +Fournier, N. and Guillin, A. On the rate of convergence in Wasserstein distance of the empirical measure. Probability Theory and Related Fields, 162(3):707-738, 2015. +Huang, A., Leqi, L., Lipton, Z., and Azizzadenesheli, K. Off-policy risk assessment in contextual bandits. Advances in Neural Information Processing Systems, 34:23714-23726, 2021. + +Keramati, R., Dann, C., Tamkin, A., and Brunskill, E. Being optimistic to be conservative: Quickly learning a CVaR policy. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 34, pp. 4436-4443, 2020. +Kock, A. B., Preinerstorfer, D., and Veliyev, B. Functional sequential treatment allocation. Journal of the American Statistical Association, pp. 1-13, 2021. +Kolla, R. K., Prashanth, L., Bhat, S.
P., and Jagannathan, K. Concentration bounds for empirical conditional value-at-risk: The unbounded case. Operations Research Letters, 47(1):16-20, 2019. +Krokhmal, P., Palmquist, J., and Uryasev, S. Portfolio optimization with conditional value-at-risk objective and constraints. Journal of Risk, 4:43-68, 2002. +LA, P. and Bhat, S. P. A Wasserstein distance approach for concentration of empirical risk estimates. Journal of Machine Learning Research, 23(238):1-61, 2022. +Liang, H. and Luo, Z.-Q. Regret bounds for risk-sensitive reinforcement learning with Lipschitz dynamic risk measures. arXiv preprint arXiv:2306.02399, 2023. +Massart, P. The tight constant in the Dvoretzky-Kiefer-Wolfowitz inequality. The Annals of Probability, pp. 1269-1283, 1990. +Pandey, A. K., Prashanth, L., and Bhat, S. P. Estimation of spectral risk measures. arXiv preprint arXiv:1912.10398, 2019. +Prashanth, L., Jagannathan, K., and Kolla, R. K. Concentration bounds for CVaR estimation: The cases of light-tailed and heavy-tailed distributions. In Proceedings of the 37th International Conference on Machine Learning, pp. 5577-5586, 2020. +Quiggin, J. Generalized expected utility theory: The rank-dependent model. Springer Science & Business Media, 2012. +Rockafellar, R. T., Uryasev, S., et al. Optimization of conditional value-at-risk. Journal of Risk, 2:21-42, 2000. +Tamkin, A., Keramati, R., Dann, C., and Brunskill, E. Distributionally-aware exploration for CVaR bandits. In NeurIPS 2019 Workshop on Safety and Robustness in Decision Making, 2019. +Telatar, E. Capacity of multi-antenna Gaussian channels. European Transactions on Telecommunications, 10(6):585-595, 1999. +Thomas, P. and Learned-Miller, E. Concentration inequalities for conditional value at risk. In International Conference on Machine Learning, pp. 6225-6233. PMLR, 2019. + +Wang, S. Premium calculation by transforming the layer premium density. ASTIN Bulletin: The Journal of the IAA, 26(1):71-92, 1996. +Wang, S. S.
Cat bond pricing using probability transforms. Geneva Papers: Etudes et Dossiers, 278:19-29, 2004. +Wang, Y. and Gao, F. Deviation inequalities for an estimator of the conditional value-at-risk. Operations Research Letters, 38(3):236-239, 2010. +Weber, S. Distribution-invariant risk measures, information, and dynamic consistency. Mathematical Finance: An International Journal of Mathematics, Statistics and Financial Economics, 16(2):419-441, 2006. +Zhu, S. and Fukushima, M. Worst-case conditional value-at-risk with application to robust portfolio management. Operations Research, 57(5):1155-1168, 2009. + +# A. Table of Notation
| Symbol | Explanation |
| --- | --- |
| $\mathcal{D}$ | The space of all CDFs |
| $\mathcal{D}([a,b])$ | The space of all CDFs supported on $[a,b]$ |
| $B_p(F,c)$ | The $\lVert\cdot\rVert_p$ norm ball centered at $F$ with radius $c$ |
| $F_n$ | The empirical distribution function corresponding to $n$ samples from $F$ |
| $c_n^p$ | The confidence radius w.r.t. $\lVert\cdot\rVert_p$ for $n$ samples |
| $\mathbf{T}$ | Risk measure |
| $L_p(\mathbf{T})$ | The global Lipschitz constant of $\mathbf{T}$ w.r.t. $\lVert\cdot\rVert_p$ |
| $L_p(\mathbf{T};F,c)$ | The local Lipschitz constant of $\mathbf{T}$ w.r.t. $\lVert\cdot\rVert_p$ over $B_p(F_n, c_n^p)$ |
| $\overline{F}_n^p$ | The maximizer of Formulation 5 or 6 |
| $\underline{F}_n^p$ | The minimizer of Formulation 5 or 6 |
| $\mathbf{P}_c^1$ | The positive operator w.r.t. $\lVert\cdot\rVert_1$ with coefficient $c$ |
| $\mathbf{N}_c^1$ | The negative operator w.r.t. $\lVert\cdot\rVert_1$ with coefficient $c$ |
| $\mathbf{P}_c^\infty$ | The positive operator w.r.t. $\lVert\cdot\rVert_\infty$ with coefficient $c$ |
| $\mathbf{N}_c^\infty$ | The negative operator w.r.t. $\lVert\cdot\rVert_\infty$ with coefficient $c$ |
| $\nu$ | Bandit instance |
| $N$ | Number of total rounds |
| $K$ | Number of total arms |
| $\pi$ | Bandit algorithm |
| $s_i(t)$ | The number of times arm $i$ has been pulled up to time $t$ |
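The sup-norm confidence radius $c_n^\infty$ in the table is typically instantiated through the DKW inequality with Massart's tight constant (Dvoretzky et al., 1956; Massart, 1990): $\Pr\left(\lVert F_n - F\rVert_\infty > \varepsilon\right) \leq 2e^{-2n\varepsilon^2}$, giving $c_n^\infty = \sqrt{\ln(2/\delta)/(2n)}$ at confidence level $1-\delta$. The following minimal sketch (the helper names are ours, not from the paper) computes this radius and checks its coverage by simulation:

```python
import math
import numpy as np

def dkw_radius(n: int, delta: float) -> float:
    """Sup-norm confidence radius c_n^inf from the DKW inequality with
    Massart's tight constant: P(||F_n - F||_inf > eps) <= 2 exp(-2 n eps^2)."""
    return math.sqrt(math.log(2.0 / delta) / (2.0 * n))

def ks_statistic(u: np.ndarray) -> float:
    """sup_x |F_n(x) - x| for a sample u from Uniform(0, 1), via order statistics."""
    u = np.sort(u)
    n = len(u)
    i = np.arange(1, n + 1)
    return float(max(np.max(i / n - u), np.max(u - (i - 1) / n)))

def coverage(n: int = 200, delta: float = 0.1, reps: int = 500, seed: int = 0) -> float:
    """Fraction of simulated runs in which the true CDF stays inside the DKW band."""
    rng = np.random.default_rng(seed)
    c = dkw_radius(n, delta)
    hits = sum(ks_statistic(rng.uniform(size=n)) <= c for _ in range(reps))
    return hits / reps
```

By the DKW theorem, the band $[F_n - c_n^\infty, F_n + c_n^\infty]$ contains the true CDF with probability at least $1-\delta$ for every $n$, so the estimated coverage should stay above $1-\delta$ up to Monte Carlo noise.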
# B. Risk Measures

Conditional Value at Risk (CVaR) CVaR (Rockafellar et al., 2000) is a popular risk measure in financial portfolio optimization (Krokhmal et al., 2002; Zhu & Fukushima, 2009). Formally, the CVaR value at level $\alpha \in (0,1)$ for a distribution $F$ is defined as

$$
C_{\alpha}(F) \triangleq \inf_{\nu \in \mathbb{R}} \left\{\nu + \frac{1}{1-\alpha} \mathbb{E}_{X \sim F}\left[(X - \nu)^{+}\right]\right\}.
$$

Acerbi & Tasche (2002) showed that when $F$ is a continuous distribution, $C_{\alpha}(F) = \mathbb{E}_{X \sim F}[X \mid X \geq F^{-1}(1-\alpha)]$.

Spectral risk measure (SRM) SRM is a generalization of CVaR that adopts a non-constant weighting function (Acerbi, 2002). The SRM of $F$ is defined as

$$
S_{\phi}(F) \triangleq \int_{0}^{1} \phi(y)\, F^{-1}(y)\, dy,
$$

where $\phi: [0,1] \to [0,\infty)$ is a weighting function. $\phi$ is said to be admissible if it is increasing and satisfies $\int_{0}^{1} \phi(y)\, dy = 1$. Acerbi (2002) showed that $S_{\phi}(F)$ is a coherent risk measure if $\phi$ is admissible. SRM can be viewed as a weighted average of the quantiles $F^{-1}$, with weights specified by $\phi$. In fact, $S_{\phi}(F)$ specializes to $C_{\alpha}(F)$ for $\phi(y) = \frac{1}{1-\alpha}\mathbb{I}\{y \geq 1-\alpha\}$.

Distortion risk measure (DRM) DRM originated in insurance problems and was later applied to investment risks (Wang, 1996; 2004). For a distribution $F \in \mathcal{D}([0,\infty))$, the DRM $\rho_{g}(F)$ is defined as

$$
\rho_{g}(F) \triangleq \int_{0}^{\infty} g(1 - F(x))\, dx,
$$

where $g: [0,1] \to [0,1]$ is a continuous increasing function with $g(0) = 0$ and $g(1) = 1$. We refer to $g$ as the distortion function. Similar to SRM, DRM can also recover CVaR by setting $g(y) = \min\left(\frac{y}{1-\alpha}, 1\right)$.
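As a concrete check of the special-case relations above, the sketch below (our own illustration; `empirical_cvar`, `empirical_srm`, and `empirical_drm` are hypothetical helper names, not from the paper) evaluates all three risk measures on an empirical distribution and confirms that the CVaR spectrum and the CVaR distortion both recover $C_\alpha$:

```python
import numpy as np

def empirical_cvar(x, alpha):
    """C_alpha(F_n) via the Rockafellar-Uryasev formula; the infimum over nu
    is attained at a sample point, so a finite search suffices."""
    x = np.asarray(x, dtype=float)
    excess = np.maximum(x[None, :] - x[:, None], 0.0)   # (X - nu)^+ for nu = each sample
    return float(np.min(x + excess.mean(axis=1) / (1.0 - alpha)))

def empirical_srm(x, phi, m=4000):
    """S_phi(F_n) = integral_0^1 phi(y) F_n^{-1}(y) dy via a midpoint rule."""
    x = np.sort(np.asarray(x, dtype=float))
    n = len(x)
    y = (np.arange(n * m) + 0.5) / (n * m)              # midpoints on (0, 1)
    q = x[np.minimum((y * n).astype(int), n - 1)]       # step quantile F_n^{-1}(y)
    return float(np.mean(phi(y) * q))

def empirical_drm(x, g):
    """rho_g(F_n) = integral_0^inf g(1 - F_n(t)) dt for samples in [0, inf)."""
    x = np.sort(np.asarray(x, dtype=float))
    n = len(x)
    knots = np.concatenate(([0.0], x))                  # F_n is flat between knots
    return float(np.sum(np.diff(knots) * g(1.0 - np.arange(n) / n)))

alpha = 0.5
sample = np.array([1.0, 3.0])
phi = lambda y: (y >= 1 - alpha) / (1 - alpha)          # CVaR spectrum
g = lambda y: np.minimum(y / (1 - alpha), 1.0)          # CVaR distortion
```

On this toy sample, all three evaluations coincide (here they equal the average of the upper half of the sample), illustrating that SRM and DRM are strict generalizations of CVaR.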
Entropic risk measure (ERM) ERM is a well-known risk measure in risk-sensitive decision-making, including mathematical finance (Föllmer & Schied, 2016) and Markov decision processes (Bäuerle & Rieder, 2014). The ERM value of $F$ with coefficient $\beta \neq 0$ is defined as

$$
U_{\beta}(F) \triangleq \frac{1}{\beta} \log\left(\mathbb{E}_{X \sim F}[\exp(\beta X)]\right) = \frac{1}{\beta} \log\left(\int_{\mathbb{R}} \exp(\beta x)\, dF(x)\right).
$$

Certainty equivalent Certainty equivalent can be viewed as a generalization of ERM, which replaces the exponential utility function with a more general function. Let $u$ be a continuous and strictly increasing function such that its inverse $u^{-1}$ exists; then the certainty equivalent $C_{u}(F)$ of $F$ associated with $u$ is given by

$$
C_{u}(F) \triangleq u^{-1}\left(\mathbb{E}_{X \sim F}[u(X)]\right) = u^{-1}\left(\int_{\mathbb{R}} u(x)\, dF(x)\right).
$$

It is trivial that the certainty equivalent $C_{u}(F)$ reduces to the ERM when $u(x) = \exp(\beta x)$.

Rank dependent expected utility (RDEU) The RDEU value (Quiggin, 2012) of $F \in \mathcal{D}([a,b])$ is defined as

$$
V(F) \triangleq \int_{a}^{b} v(x)\, dw(F(x)),
$$

where $w: [0,1] \to [0,1]$ is an increasing weight function such that $w(0) = 0$ and $w(1) = 1$, and $v: \mathbb{R} \to \mathbb{R}$ is an (unbounded) increasing differentiable function with $v(0) = 0$.

# C. Proof of Theorems

For CDFs $F, G \in \mathcal{D}$, the Wasserstein distance between them can be represented via their inverse distribution functions (IDFs):

$$
W_{1}(F, G) = \int_{-\infty}^{\infty} |F(x) - G(x)|\, dx = \int_{0}^{1} \left|F^{-1}(y) - G^{-1}(y)\right| dy.
$$

With slight abuse of notation, we write $W_{1}(F, G) = \left\|F^{-1} - G^{-1}\right\|_{1}$. We will prove the theorems for the more general formulations in the following.
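The two integral representations of $W_1$ above can be compared numerically. In the sketch below (function names are our own), the quantile form reduces, for equal-size samples, to the mean absolute gap between order statistics, which we check against a Riemann sum of $\int |F(x) - G(x)|\,dx$:

```python
import numpy as np

def w1_quantile(x, y):
    """W_1 via inverse CDFs: for equal-size samples the quantile functions are
    steps on the same grid, so the integral is the mean gap between sorted samples."""
    return float(np.mean(np.abs(np.sort(x) - np.sort(y))))

def w1_cdf(x, y, lo, hi, m=200_000):
    """W_1 via the integral of |F(t) - G(t)| over [lo, hi], approximated on a uniform grid."""
    t = np.linspace(lo, hi, m, endpoint=False)
    dt = (hi - lo) / m
    F = np.searchsorted(np.sort(x), t, side="right") / len(x)   # right-continuous EDF
    G = np.searchsorted(np.sort(y), t, side="right") / len(y)
    return float(np.sum(np.abs(F - G)) * dt)

x = np.array([0.0, 1.0])
y = np.array([0.0, 2.0])
```

Both routines return the same value up to discretization error, mirroring the identity $W_1(F,G) = \lVert F^{-1} - G^{-1}\rVert_1$.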
$$
\max_{G \in \mathcal{D}([a,b])} \quad \mathbf{T}(G) \tag{11}
$$

$$
\text{s.t.} \quad \|G - F\|_{1} \leq c
$$

and

$$
\max_{G \in \mathcal{D}([a,b])} \quad \mathbf{T}(G) \tag{12}
$$

$$
\text{s.t.} \quad \|G - F\|_{\infty} \leq c
$$

Observe that $\|G - F\|_{1} = \|G^{-1} - F^{-1}\|_{1}$, thus we can recast Formulation 11 as

$$
\begin{array}{ll}
\max_{G \in \mathcal{D}([a,b])} & \mathbf{T}(G^{-1}) \\
\text{s.t.} & \|G^{-1} - F^{-1}\|_{1} \leq c
\end{array} \tag{13}
$$

Proposition C.1. For the risk measures ERM and CE defined in Section 2, the optimal solutions to Formulation 11 are given by

$$
\overline{F^{1}} = \mathbf{P}_{c}^{1} F, \qquad \underline{F}^{1} = \mathbf{N}_{c}^{1} F, \tag{14}
$$

where $\mathbf{P}_{c}^{1} / \mathbf{N}_{c}^{1}: \mathcal{D}([a,b]) \to \mathcal{D}([a,b])$ is the positive/negative operator for CDFs for the Wasserstein distance, which is defined as follows.

Define $S^{+}(F,x) \triangleq \int_{x}^{b} (F(z) - F(x))\, dz$. Geometrically, it is the area sandwiched between $F$ and the constant level $F(x)$ to the right of $x$ (see Figure 4 (a)). Notice that $S^{+}$ may be discontinuous w.r.t. $x$ since $F$ can be discontinuous w.r.t. $x$ ($S^{+}(x) < S^{+}(x^{-})$). For $c > 0$, we define $g^{+}(F,c) \triangleq \max\{x \geq a : S^{+}(F,x) \leq c\} \in [a,b)$. For simplicity, we drop $F$ from the notations if it is clear from the context. Given $F \in \mathcal{D}([a,b])$ as input, $\mathbf{P}_{c}^{1}$ outputs the CDF

$$
\left(\mathbf{P}_{c}^{1} F\right)(x) \triangleq \left\{\begin{array}{l} F(g^{+}(c)) - \frac{c - S^{+}(g^{+}(c))}{b - g^{+}(c)}, \quad x \in [g^{+}(c), b), \\ F(x), \quad \text{otherwise}. \end{array}\right.
$$

Analogously, we define $S^{-}(F,x) \triangleq \int_{x}^{b} (1 - F(z))\, dz$ and $g^{-}(F,c) \triangleq \max\{x \geq a : S^{-}(F,x) \leq c\} \in [a,b)$. We omit $F$ for simplicity. Note that $S^{-}$ is continuous w.r.t. $x$; hence $g^{-}(c) = \{x : S^{-}(x) = c\}$. For $F \in \mathcal{D}([a,b])$, $\mathbf{N}_{c}^{1}$ outputs the CDF

$$
\left(\mathbf{N}_{c}^{1} F\right)(x) \triangleq \left\{\begin{array}{l} 1, \quad x \in [g^{-}(c), b), \\ F(x), \quad \text{otherwise}. \end{array}\right.
$$

Proposition C.2. For the risk measures CVaR, SRM and DRM defined in Section 2, the optimal solutions to Formulation 13 are given by

$$
\left(\overline{F^{1}}\right)^{-1} = \mathbf{P}_{c}^{1} F^{-1}, \qquad \left(\underline{F}^{1}\right)^{-1} = \mathbf{N}_{c}^{1} F^{-1}, \tag{15}
$$

where we overload notation to denote by $\mathbf{P}_{c}^{1} / \mathbf{N}_{c}^{1} : (\mathcal{D}([a,b]))^{-1} \to (\mathcal{D}([a,b]))^{-1}$ the positive/negative operator for IDFs for the Wasserstein distance.

We overload notation and define $S^{+}(F^{-1},y) \triangleq \int_{y}^{1} \left(b - F^{-1}(z)\right) dz$. For $c > 0$, we define $g^{+}(F^{-1},c) \triangleq \min\{y : S^{+}(F^{-1},y) \geq c\} \in (0,1)$. For simplicity, we drop $F$ from the notations if it is clear from the context. Given $F^{-1} \in (\mathcal{D}([a,b]))^{-1}$ as input, $\mathbf{P}_{c}^{1}$ outputs the IDF

$$
\left(\mathbf{P}_{c}^{1} F^{-1}\right)(y) \triangleq \left\{\begin{array}{l} b, \quad y \in (g^{+}(c), 1], \\ F^{-1}(y), \quad \text{otherwise}. \end{array}\right.
$$

Analogously, we define $S^{-}(F^{-1},y) \triangleq \int_{y}^{1} (F^{-1}(z) - F^{-1}(y))\, dz$ and $g^{-}(F^{-1},c) \triangleq \min\{y : S^{-}(F^{-1},y) \geq c\} \in (0,1)$ for $c > 0$. Notice that $S^{-}$ may be discontinuous w.r.t. $y$ ($S^{-}(y^{+}) < S^{-}(y)$).
Given $F^{-1} \in (\mathcal{D}([a,b]))^{-1}$ as input, $\mathbf{N}_{c}^{1}$ outputs the IDF

$$
\left(\mathbf{N}_{c}^{1} F^{-1}\right)(y) \triangleq \left\{\begin{array}{l} F^{-1}(g^{-}(c)) + \frac{S^{-}(g^{-}(c)) - c}{1 - g^{-}(c)}, \quad y \in (g^{-}(c), 1], \\ F^{-1}(y), \quad \text{otherwise}. \end{array}\right.
$$

Proposition C.3. For any risk measure, the optimal solutions to Formulation 12 are given by

$$
\overline{F^{\infty}} = \mathbf{P}_{c}^{\infty} F, \qquad \underline{F}^{\infty} = \mathbf{N}_{c}^{\infty} F, \tag{16}
$$

where $\mathbf{P}_{c}^{\infty} / \mathbf{N}_{c}^{\infty}: \mathcal{D}([a,b]) \to \mathcal{D}([a,b])$ is the positive/negative operator with coefficient $c > 0$ for the supremum distance, which is defined as follows:

$$
\left(\mathbf{P}_{c}^{\infty} F\right)(x) \triangleq \max\left\{F(x) - c\,\mathbb{I}\{x \in [a,b)\},\; 0\right\},
$$

$$
\left(\mathbf{N}_{c}^{\infty} F\right)(x) \triangleq \min\left\{F(x) + c\,\mathbb{I}\{x \in [a,b)\},\; 1\right\}.
$$

Figure 4 illustrates how the operators defined in Propositions C.1-C.3 transform a typical continuous CDF $F \in \mathcal{D}([a,b])$.

# C.1. Proof of Proposition C.1

We only provide the proof for CE because ERM is a special case of CE obtained by choosing $u(x) = \exp(\beta x)$. For simplicity, we write the optimization problem as

$$
\max_{G \in \mathcal{D}([a,b])} \quad C_{u}(G)
$$

$$
\text{s.t.} \quad \|G - F\|_{1} \leq c
$$

Proof. We consider the maximization problem first. The objective function can be written as

$$
C_{u}(G) = u^{-1}\left(\int_{a}^{b} u(x)\, dG(x)\right) = u^{-1}\left(u(b) - \int_{a}^{b} G(x)\, u'(x)\, dx\right).
$$

Since $u^{-1}$ is monotonically increasing, we can reformulate the original optimization problem as

$$
\min_{G} \int_{a}^{b} G(x)\, u'(x)\, dx
$$

$$
\text{s.t.} \quad G \in B_{1}(F, c)
$$

We first show that any optimal solution $G^{*}$ satisfies $G^{*}(x) \leq F(x)$ for all $x \in [a,b]$. Suppose, towards a contradiction, that $G^{*}(y) > F(y)$ for $y$ in a set that is a union of disjoint intervals $\cup_{i} I_{i} = \cup_{i} (a_{i}, b_{i})$, and $G^{*}(y) \leq F(y)$ otherwise. We can choose

$$
H(y) = \left\{\begin{array}{l} \max\{2F(y) - G^{*}(y), F(a_{i})\}, \quad y \in I_{i}, \\ G^{*}(y), \quad \text{otherwise}, \end{array}\right.
$$

which satisfies the ball constraint. However, $C_{u}(H) > C_{u}(G^{*})$ since $\int_{I_{i}} H(x)\, u'(x)\, dx \leq \int_{I_{i}} F(x)\, u'(x)\, dx < \int_{I_{i}} G^{*}(x)\, u'(x)\, dx$, which contradicts the optimality of $G^{*}$. Hence we have $G^{*}(x) \leq F(x)$ for all $x \in [a,b]$. Define

$$
\tilde{G}(x) \triangleq \left(\mathbf{P}_{c}^{1} F\right)(x) = \left\{\begin{array}{l} F(g^{+}(c)) - \frac{c - S^{+}(g^{+}(c))}{b - g^{+}(c)}, \quad x \in [g^{+}(c), b), \\ F(x), \quad \text{otherwise}. \end{array}\right.
$$

Notice that the (new) minimization problem has a linear objective and a convex ball constraint. Thus, an optimal solution lies on the boundary of the ball constraint. It suffices to consider the following optimization problem

$$
\begin{array}{ll}
\min_{G} & \int_{a}^{b} G(x)\, u'(x)\, dx \\
\text{s.t.} & \|F - G\|_{1} = c, \\
& G \succeq F.
\end{array}
$$

It is easy to check that $\tilde{G}$ satisfies the constraints. It remains to show that $\int_{a}^{b} \tilde{G}(x)\, u'(x)\, dx \leq \int_{a}^{b} G(x)\, u'(x)\, dx$ for any feasible $G$. Consider a feasible $G \neq \tilde{G}$.
It is obvious that $G(b^{-}) \geq \tilde{G}(g^{+}(c))$; otherwise $\|F - G\|_{1} = \int_{a}^{b} (F(x) - G(x))\, dx > \int_{a}^{b} (F(x) - \tilde{G}(x))\, dx = c$, which contradicts $G \in B_{1}(F,c)$. If $G(b^{-}) = \tilde{G}(g^{+}(c))$, then either $G = \tilde{G}$ or $\|F - G\|_{1} > c$; since $G$ is feasible and $G \neq \tilde{G}$, it therefore holds that $G(b^{-}) > \tilde{G}(g^{+}(c))$. We also have $G(g^{+}(c)) < \tilde{G}(g^{+}(c))$, since otherwise $\|F - G\|_{1} < \left\|F - \tilde{G}\right\|_{1} = c$. Since $G$ is monotonically increasing and right continuous, there exists $g'(c) \in (g^{+}(c), b)$ such that $G(x) < \tilde{G}(x) = \tilde{G}(g^{+}(c))$ for $x \in [g^{+}(c), g'(c))$ and $G(x) > \tilde{G}(x) = \tilde{G}(g^{+}(c))$ for $x \in (g'(c), b)$. Moreover, $G(x) \leq \tilde{G}(x) = F(x)$ for $x \in [a, g^{+}(c))$. It follows that

$$
\begin{array}{l}
\int_{a}^{b} G(x)\, u'(x)\, dx - \int_{a}^{b} \tilde{G}(x)\, u'(x)\, dx \\
= \int_{a}^{g'(c)} (G(x) - \tilde{G}(x))\, u'(x)\, dx + \int_{g'(c)}^{b} (G(x) - \tilde{G}(x))\, u'(x)\, dx \\
\geq u'(g'(c)) \int_{g'(c)}^{b} \left(G(x) - \tilde{G}(x)\right) dx - u'(g'(c)) \int_{a}^{g'(c)} \left(\tilde{G}(x) - G(x)\right) dx \\
= 0.
\end{array}
$$

The last equality follows from $\|F - G\|_{1} - \left\|F - \tilde{G}\right\|_{1} = \int_{a}^{b} (F(x) - G(x))\, dx - \int_{a}^{b} (F(x) - \tilde{G}(x))\, dx = \int_{a}^{b} (\tilde{G}(x) - G(x))\, dx = 0$.

Similarly, we can prove that the optimal solution to the minimization problem is given by

$$
\left(\mathbf{N}_{c}^{1} F\right)(x) \triangleq \left\{\begin{array}{l} 1, \quad x \in [g^{-}(c), b), \\ F(x), \quad \text{otherwise}. \end{array}\right.
$$

# C.2.
Proof of Proposition C.2

We only provide the proofs for SRM and DRM because CVaR is a special case of both.

# C.2.1. PROOF FOR SRM

Proof. The objective function is given by

$$
M_{\phi}(G) = \int_{0}^{1} \phi(y)\, G^{-1}(y)\, dy,
$$

Table 4: The functional $v(F)$ for each risk measure.
| RM | $v(F)$ |
| --- | --- |
| CVaR | $\frac{1}{\alpha}\lVert\mathbb{I}\{F(\cdot) \geq 1-\alpha\}\rVert_q$ |
| SRM | $\lVert\phi(F)\rVert_q$ |
| DRM | $\lVert g'(1-F)\rVert_q$ |
| ERM | $\lVert\exp(\beta\,\cdot)\rVert_q \big/ \int_a^b \exp(\beta x)\, dF(x)$ |
| CE | $\lVert u'\rVert_q\,(u^{-1})'\left(\int_a^b u(x)\, dF(x)\right)$ |
| RDEU | $\lVert w'(F)\, u'\rVert_q$ |
which is linear in $G^{-1}$. Meanwhile, the constraint $\|G^{-1} - F^{-1}\|_{1} \leq c$ is also a convex ball constraint w.r.t. $G^{-1}$. Furthermore, $\phi(y)$ is an increasing function. Using analogous arguments to the proof of Theorem 4.3 completes the proof.

# C.2.2. PROOF FOR DRM

Proof. By a change of variable $y = G(x)$, the DRM can be represented as

$$
\begin{array}{l}
\rho_{g}(G) = \int_{0}^{1} g(1-y)\, dG^{-1}(y) = g(1-y)\, G^{-1}(y)\Big|_{0}^{1} - \int_{0}^{1} G^{-1}(y)\, dg(1-y) \\
= -a + \int_{0}^{1} G^{-1}(y)\, g'(1-y)\, dy.
\end{array}
$$

Again, the objective function is linear in $G^{-1}$, and the constraint is also a convex ball constraint. Besides, $g'(1-y)$ is increasing in $y$ since $g$ is concave. Using analogous arguments to the proof of Theorem 4.3 completes the proof.

# C.3. Proof of Proposition C.3

It is easy to verify that $\mathbf{P}_{c}^{\infty} F \in B_{\infty}(F,c)$. Consider an arbitrary $F \in \mathcal{D}([a,b])$ and $c > 0$. For $x \in [a,b)$, we have

$$
\left(\mathbf{P}_{c}^{\infty} F\right)(x) = \max\{F(x) - c, 0\} \leq G(x), \quad \forall G \in B_{\infty}(F, c).
$$

Besides, $\left(\mathbf{P}_{c}^{\infty} F\right)(b) = G(b) = 1$ for any $G \in B_{\infty}(F, c)$. Therefore $G \preceq \mathbf{P}_{c}^{\infty} F$ for any $G \in B_{\infty}(F, c)$. The result follows from the monotonicity of $\mathbf{T}$.

# D. Derivations of Results in Section 5

# D.1. Identification of $v$

We list the functional $v$ for different risk measures in Table 4. The functional $v$ is crucial for computing the LLC and for showing the tightness of our method. We provide detailed derivations of $v$ in the following. Since CVaR (ERM) is a special case of SRM (CE), we omit the derivations for CVaR and ERM.
Recall that $v((1-t)F + tG, p)$ is a functional satisfying, for any $F, G$,

$$
\psi'(t; F, G) \leq v\big((1-t)F + tG,\, p\big)\, \|F - G\|_{p}.
$$

# D.1.1. SRM

Consider $M_{\phi}$ in the form of

$$
M_{\phi}(F) = \int_{a}^{b} \phi(F(x))\, x\, dF(x),
$$

where $\phi$ is increasing and integrates to 1. $\psi$ is continuously differentiable with derivative

$$
\begin{array}{l}
\psi'(t; F, G) = \frac{d}{dt} \int_{a}^{b} \phi\big((1-t)F(x) + tG(x)\big)\, x\, d\big((1-t)F + tG\big)(x) \\
= \frac{d}{dt} \left[\int_{a}^{b} \phi\big((1-t)F(x) + tG(x)\big)\, x\, dF(x) + t \int_{a}^{b} \phi\big((1-t)F(x) + tG(x)\big)\, x\, d(G - F)(x)\right] \\
= \underbrace{\int_{a}^{b} \phi'\big((1-t)F(x) + tG(x)\big)\,(G(x) - F(x))\, x\, dF(x)}_{(a)} + \underbrace{\int_{a}^{b} \phi\big((1-t)F(x) + tG(x)\big)\, x\, d(G - F)(x)}_{(b)} \\
\quad + \underbrace{t \int_{a}^{b} \phi'\big((1-t)F(x) + tG(x)\big)\,(G(x) - F(x))\, x\, d(G - F)(x)}_{(c)}.
\end{array}
$$

Since

$$
\begin{array}{l}
(b) = \phi\big((1-t)F(x) + tG(x)\big)\, x\,(G - F)(x)\Big|_{a}^{b} - \int_{a}^{b} (G - F)(x)\, d\big[\phi\big((1-t)F(x) + tG(x)\big)\, x\big] \\
= -\int_{a}^{b} (G - F)(x)\, d\big[\phi\big((1-t)F(x) + tG(x)\big)\, x\big]
\end{array}
$$

and

$$
(c) = \int_{a}^{b} (G - F)(x)\, x\, d\phi\big((1-t)F(x) + tG(x)\big) - \int_{a}^{b} \phi'\big((1-t)F(x) + tG(x)\big)\,(G(x) - F(x))\, x\, dF(x),
$$

we have

$$
\begin{array}{l}
\psi'(t) = (a) + (b) + (c) = -\int_{a}^{b} (G - F)(x)\, \phi\big((1-t)F(x) + tG(x)\big)\, dx = \big\langle G - F,\, -\phi\big((1-t)F + tG\big)\big\rangle \\
\leq \|G - F\|_{p}\, \big\|\phi\big((1-t)F + tG\big)\big\|_{q}.
\end{array}
$$

Hence we can choose $v(F,p) = \|\phi(F)\|_{q}$.

# D.1.2.
DRM + +A distortion risk measure associated with distortion function $g$ for a distribution $F$ is + +$$ +\rho_ {g} (F) = \int_ {a} ^ {b} g (1 - F (x)) d x, +$$ + +where $g:[0,1] \to [0,1]$ is a non-decreasing function with $g(0) = 0$ and $g(1) = 1$ . Thus $g'$ is non-negative. $\psi$ is continuously differentiable with derivative + +$$ +\begin{array}{l} \psi^ {\prime} (t; F, G) = \frac {d}{d t} \int_ {a} ^ {b} g (1 - (1 - t) F (x) - t G (x)) d x = \int_ {a} ^ {b} g ^ {\prime} (1 - (1 - t) F (x) - t G (x)) (F (x) - G (x)) d x \\ = \langle F - G, g ^ {\prime} (1 - (1 - t) F - t G) \rangle \\ \leq \| F - G \| _ {p} \| g ^ {\prime} (1 - (1 - t) F - t G) \| _ {q}. \\ \end{array} +$$ + +Hence we can choose $v(F, p) = \| g'(1 - F) \|_q$ . + +# D.1.3. CE + +For a CE $C_u$ , we define $E_{u}(F) \triangleq \int u(x)dF(x)$ . Notice that $E_{u}$ is a linear functional, i.e., + +$$ +E _ {u} ((1 - t) F + t G) = (1 - t) E _ {u} (F) + t E _ {u} (G) +$$ + +for any $F, G \in D([a, b])$ . $\psi$ for $E_{u}$ is continuously differentiable with derivative + +$$ +\begin{array}{l} \psi^ {\prime} (t; F, G) = \frac {d}{d t} E _ {u} ((1 - t) F + t G) = \frac {d}{d t} (1 - t) E _ {u} (F) + t E _ {u} (G) \\ = E _ {u} (G) - E _ {u} (F) \\ = \int_ {a} ^ {b} u (x) d G (x) - \int_ {a} ^ {b} u (x) d F (x) \\ = u (x) G (x) \left| _ {a} ^ {b} - \int_ {a} ^ {b} G (x) d u (x) - u (x) F (x) \right| _ {a} ^ {b} + \int_ {a} ^ {b} F (x) d u (x) \\ = \int_ {a} ^ {b} F (x) - G (x) d u (x) \\ = \left\langle F - G, u ^ {\prime} \right\rangle \\ \end{array} +$$ + +where the last equality follows from that $F(b) = G(b) = 1$ and $F(a) = G(a) = 0$ . The certainty equivalent of a distribution $F$ with utility function $u$ is defined as $C_u(F) = u^{-1}(E_u(F))$ . 
It follows that

$$
\begin{array}{l}
\psi'(t; F, G) = \left(u^{-1}\right)'\big(E_{u}((1-t)F + tG)\big) \cdot \langle F - G,\, u' \rangle \\
\leq \left(u^{-1}\right)'\big(E_{u}((1-t)F + tG)\big) \cdot \|F - G\|_{p}\, \|u'\|_{q}.
\end{array}
$$

Hence we can choose $v(F,p) = \left(u^{-1}\right)'(E_{u}(F))\, \|u'\|_{q}$.

# D.1.4. RDEU

Let $w:[0,1] \to [0,1]$ be an increasing weight function such that $w(0) = 0$ and $w(1) = 1$, and let $u:\mathbb{R} \to \mathbb{R}$ be an (unbounded) increasing differentiable function with $u(0) = 0$. The RDEU value of $F \in \mathcal{D}([a,b])$ is given by

$$
\begin{array}{l}
V(F) = \int_{a}^{b} u(x)\, dw(F(x)) \\
= u(x)\, w(F(x))\Big|_{a}^{b} - \int_{a}^{b} w(F(x))\, du(x) \\
= u(b) - \int_{a}^{b} w(F(x))\, u'(x)\, dx.
\end{array}
$$

We have

$$
\begin{array}{l}
\psi'(t; F, G) = \frac{d}{dt}\left[u(b) - \int_{a}^{b} w\big((1-t)F(x) + tG(x)\big)\, u'(x)\, dx\right] \\
= -\int_{a}^{b} w'\big((1-t)F(x) + tG(x)\big)\,(G(x) - F(x))\, u'(x)\, dx \\
= \big\langle F - G,\, w'\big((1-t)F + tG\big)\, u' \big\rangle \\
\leq \|F - G\|_{p}\, \big\|w'\big((1-t)F + tG\big)\, u'\big\|_{q}.
\end{array}
$$

Hence we can choose $v(F,p) = \|w'(F)\, u'\|_{q}$.

# D.2.
Derivation of the LLC

In Section 5.1, we have shown that

$$
\begin{array}{l}
L_{p}(\mathbf{T}; F_{n}, c_{n}^{p}) = \sup_{F, G \in B_{p}(F_{n}, c_{n}^{p})} \frac{\mathbf{T}(F) - \mathbf{T}(G)}{\|F - G\|_{p}} \\
= \sup_{F, G \in B_{p}(F_{n}, c_{n}^{p})} \frac{\psi(1; F, G) - \psi(0; F, G)}{\|F - G\|_{p}} \\
\leq \sup_{F, G \in B_{p}(F_{n}, c_{n}^{p}),\, t \in [0,1]} \frac{\psi'(t; F, G)}{\|F - G\|_{p}} \\
\leq \sup_{F, G \in B_{p}(F_{n}, c_{n}^{p}),\, t \in [0,1]} v\big((1-t)F + tG,\, p\big) \\
= \sup_{F \in B_{p}(F_{n}, c_{n}^{p})} v(F, p).
\end{array}
$$

We have derived the $v$ function in the previous subsection, which enables us to obtain an upper bound on $L_{p}(\mathbf{T}; F, c)$. We only provide the derivations for DRM, CE, and RDEU here because SRM is presented as an illustrative example in Section 5.1. We first give a useful fact for dealing with the Wasserstein distance.

Fact 3. Fix real numbers $a < b$. For any $G \in \mathcal{D}([a,b])$ and any $c > 0$, there exists a continuous and strictly increasing distribution $F \in \mathcal{D}([a,b])$ such that $\|F - G\|_{1} \leq c$.

# D.2.1. DRM

For DRM, $v(F,p) = \|g'(1-F)\|_{q}$. Since $g$ is concave, $g'$ is monotonically decreasing.

Case $p = 1$. By Fact 3, there exists a continuous CDF $F \in B_{1}(F_{n}, c_{n}^{1})$. Such an $F$ attains all values in $[0,1]$, hence

$$
\max_{F \in B_{1}(F_{n}, c_{n}^{1})} \|g'(1-F)\|_{\infty} = \max_{F \in B_{1}(F_{n}, c_{n}^{1})} \max_{x \in \mathbb{R}} g'(1 - F(x)) = \max_{y \in [0,1]} g'(y) = \|g'\|_{\infty}.
$$

Case $p = \infty$. Since

$$
\max_{F \in B_{\infty}(F_{n}, c_{n}^{\infty})} \|g'(1-F)\|_{1} = \max_{F \in B_{\infty}(F_{n}, c_{n}^{\infty})} \int_{a}^{b} g'(1 - F(x))\, dx,
$$

it follows that

$$
\max_{F \in B_{\infty}(F_{n}, c_{n}^{\infty})} \int_{a}^{b} g'(1 - F(x))\, dx = \left\|g'\big(1 - \mathrm{N}_{c_{n}^{\infty}}^{\infty} F_{n}\big)\right\|_{1}.
$$

# D.2.2. CE

For CE, $v(F,p) = \left(u^{-1}\right)'(E_{u}(F))\, \|u'\|_{q}$. Since $u$ is convex, $\left(u^{-1}\right)'$ is a decreasing function. It follows that

$$
\sup_{F \in B_{p}(F_{n}, c_{n}^{p})} \left(u^{-1}\right)'(E_{u}(F))\, \|u'\|_{q} = \left(u^{-1}\right)'\Big(E_{u}\big(\mathrm{N}_{c_{n}^{p}}^{p} F_{n}\big)\Big)\, \|u'\|_{q}.
$$

# D.2.3. RDEU

For RDEU, $v(F,p) = \|w'(F)\, u'\|_{q}$. Recall that $w:[0,1] \to [0,1]$ is an increasing weight function with $w(0) = 0$ and $w(1) = 1$, and $u:\mathbb{R} \to \mathbb{R}$ is an (unbounded) increasing differentiable function with $u(0) = 0$. We further assume that $w$ is convex so that $w'$ is an increasing function.

Case $p = 1$. By Fact 3, there exists a continuous CDF $F \in B_{1}(F_{n}, c_{n}^{1})$. Such an $F$ attains all values in $[0,1]$, hence

$$
\max_{F \in B_{1}(F_{n}, c_{n}^{1})} \|w'(F)\, u'\|_{\infty} = \max_{F \in B_{1}(F_{n}, c_{n}^{1})} \max_{x \in \mathbb{R}} w'(F(x))\, u'(x),
$$

which might not admit a closed form in general. Meanwhile, the GLC (which holds without monotonicity) is

$$
L_{1} = \max_{F \in \mathcal{D}([a,b])} \|w'(F)\, u'\|_{\infty} = \|w'\|_{\infty}\, \|u'\|_{\infty}.
$$

Case $p = \infty$.
The following holds

$$
\begin{array}{l}
\max_{F \in B_{\infty}(F_{n}, c_{n}^{\infty})} \|w'(F)\, u'\|_{1} = \max_{F \in B_{\infty}(F_{n}, c_{n}^{\infty})} \int_{a}^{b} w'(F(x))\, u'(x)\, dx \\
= \left\|w'\big(\mathrm{N}_{c_{n}^{\infty}}^{\infty} F_{n}\big)\, u'\right\|_{1}.
\end{array}
$$

The global Lipschitz constant (which holds without monotonicity) is

$$
L_{\infty} = \max_{F \in \mathcal{D}([a,b])} \|w'(F)\, u'\|_{1} = \|w'\|_{\infty}\, \|u'\|_{1}.
$$

The last equality uses that $F(x) = d$ for all $x \in (a,b)$ when $F = d\psi_{a} + (1-d)\psi_{b}$ for $d \in [0,1]$, where $\psi_{z}$ denotes the CDF of the point mass at $z$ (a shifted Bernoulli distribution).

# D.3. Improved Confidence Bound

In Section 5.2, we established that

$$
\begin{array}{l}
\mathbf{T}\big(\mathrm{P}_{c_{n}^{p}}^{p} F_{n}\big) - \mathbf{T}(F_{n}) = \psi\big(1; F_{n}, \mathrm{P}_{c_{n}^{p}}^{p} F_{n}\big) - \psi\big(0; F_{n}, \mathrm{P}_{c_{n}^{p}}^{p} F_{n}\big) \\
\leq \max_{t \in [0,1]} \psi'\big(t; F_{n}, \mathrm{P}_{c_{n}^{p}}^{p} F_{n}\big) \\
\leq \max_{t \in [0,1]} v\big((1-t)F_{n} + t\,\mathrm{P}_{c_{n}^{p}}^{p} F_{n},\, p\big)\, c_{n}^{p}
\end{array}
$$

as well as

$$
\mathbf{T}(F_{n}) - \mathbf{T}\big(\mathrm{N}_{c_{n}^{p}}^{p} F_{n}\big) \leq \max_{t \in [0,1]} v\big((1-t)\mathrm{N}_{c_{n}^{p}}^{p} F_{n} + t F_{n},\, p\big)\, c_{n}^{p}.
$$

The above inequalities lead to the following bounds.

# D.3.1. SUPREMUM DISTANCE

Proposition D.1 (CE).
For $F \in \mathcal{D}([a, b])$ , it holds that (assume $\beta > 0$ ) + +$$ +\begin{array}{l} C _ {u} \left(\mathrm {P} _ {c} ^ {\infty} F\right) - C _ {u} (F) \leq \| u ^ {\prime} \| _ {1} (u ^ {- 1}) ^ {\prime} \left(\int_ {a} ^ {b} u (x) d F (x)\right) \cdot c \\ \leq \| u ^ {\prime} \| _ {1} (u ^ {- 1}) ^ {\prime} \left(\int_ {a} ^ {b} u (x) d \mathrm {N} _ {c} ^ {\infty} F (x)\right) \cdot c, \\ \end{array} +$$ + +$$ +C _ {u} (F) - C _ {u} \left(\mathrm {N} _ {c} ^ {\infty} F\right) \leq \| u ^ {\prime} \| _ {1} (u ^ {- 1}) ^ {\prime} \left(\int_ {a} ^ {b} u (x) d \mathrm {N} _ {c} ^ {\infty} F (x)\right) \cdot c. +$$ + +Corollary D.2 (ERM). For $F \in \mathcal{D}([a, b])$ , it holds that (assume $u$ convex) + +$$ +U _ {\beta} \left(\mathrm {P} _ {c} ^ {\infty} F\right) - U _ {\beta} (F) \leq \frac {\exp (\beta b) - \exp (\beta a)}{\beta \int_ {a} ^ {b} \exp (\beta x) d F (x)} \cdot c \leq \frac {L _ {\infty} (U _ {\beta}) c}{\int_ {a} ^ {b} \exp (\beta x) d \mathrm {N} _ {c} ^ {\infty} F (x)}, +$$ + +$$ +U _ {\beta} (F) - U _ {\beta} (\mathrm {P} _ {c} ^ {\infty} F) \leq \frac {L _ {\infty} (U _ {\beta}) c}{\int_ {a} ^ {b} \exp (\beta x) d \mathrm {N} _ {c} ^ {\infty} F (x)}. +$$ + +Proposition D.3 (SRM). For $F \in \mathcal{D}([a, b])$ , it holds that (assume $\phi$ increasing) + +$$ +M _ {\phi} \left(\mathrm {P} _ {c} ^ {\infty} F\right) - M _ {\phi} (F) \leq \int_ {a} ^ {b} \phi (F (x)) d x \cdot c \leq \int_ {a} ^ {b} \phi \left(\mathrm {N} _ {c} ^ {\infty} F (x)\right) d x \cdot c = L _ {\infty} \left(M _ {\phi}; F, c\right) c +$$ + +$$ +M _ {\phi} (F) - M _ {\phi} (\mathrm {N} _ {c} ^ {\infty} F) \leq \int_ {a} ^ {b} \phi (\mathrm {N} _ {c} ^ {\infty} F (x)) d x \cdot c = L _ {\infty} (M _ {\phi}; F, c) c. +$$ + +Proposition D.4 (DRM). 
For $F \in \mathcal{D}([a, b])$, it holds that (assume $g$ concave)

$$
\rho_{g}(\mathrm{P}_{c}^{\infty} F) - \rho_{g}(F) \leq \int_{a}^{b} g^{\prime}(1 - F(x)) \, d x \cdot c \leq \int_{a}^{b} g^{\prime}(1 - \mathrm{N}_{c}^{\infty} F(x)) \, d x \cdot c = L_{\infty}(\rho_{g}; F, c) c,
$$

$$
\rho_{g}(F) - \rho_{g}(\mathrm{N}_{c}^{\infty} F) \leq \int_{a}^{b} g^{\prime}(1 - \mathrm{N}_{c}^{\infty} F(x)) \, d x \cdot c = L_{\infty}(\rho_{g}; F, c) c.
$$

Corollary D.5 (CVaR). For $F \in \mathcal{D}([a, b])$, it holds that

$$
C_{\alpha}(\mathrm{P}_{c}^{\infty} F) - C_{\alpha}(F) \leq \frac{b - F^{-1}(1 - \alpha)}{\alpha} c \leq \frac{b - F^{-1}(1 - \alpha - c)}{\alpha} c = L_{\infty}(C_{\alpha}; F, c) c,
$$

$$
C_{\alpha}(F) - C_{\alpha}(\mathrm{N}_{c}^{\infty} F) \leq \frac{b - F^{-1}(1 - \alpha - c)}{\alpha} c = L_{\infty}(C_{\alpha}; F, c) c.
$$

Proposition D.6 (RDEU). For $F \in \mathcal{D}([a, b])$, it holds that (assume $w$ convex)

$$
V(\mathrm{P}_{c}^{\infty} F) - V(F) \leq \int_{a}^{b} w^{\prime}(F(x)) u^{\prime}(x) \, d x \cdot c \leq \int_{a}^{b} w^{\prime}(\mathrm{N}_{c}^{\infty} F(x)) u^{\prime}(x) \, d x \cdot c = L_{\infty}(V; F, c) c,
$$

$$
V(F) - V(\mathrm{N}_{c}^{\infty} F) \leq \int_{a}^{b} w^{\prime}(\mathrm{N}_{c}^{\infty} F(x)) u^{\prime}(x) \, d x \cdot c.
$$

# D.3.2. WASSERSTEIN DISTANCE

Proposition D.7 (SRM).
For $F \in \mathcal{D}([a, b])$ , it holds that + +$$ +M _ {\phi} \left(\mathrm {P} _ {c} ^ {1} F\right) - M _ {\phi} (F) = \int_ {g ^ {+} (c)} ^ {1} \left(\mathrm {P} _ {c} ^ {1} F ^ {- 1} (y) - F ^ {- 1} (y)\right) \phi (y) d y \leq \phi (1) c = L _ {1} \left(M _ {\phi}; F, c\right) c +$$ + +$$ +M _ {\phi} (F) - M _ {\phi} (\mathrm {N} _ {c} ^ {1} F) = \int_ {g ^ {-} (c)} ^ {1} (F ^ {- 1} (y) - \mathrm {N} _ {c} ^ {1} F ^ {- 1} (y)) \phi (y) d y \leq \phi (1) c. +$$ + +Proposition D.8 (DRM). For $F \in \mathcal{D}([a, b])$ , it holds that + +$$ +\rho_ {g} (\mathrm {P} _ {c} ^ {1} F) - \rho_ {g} (F) = \int_ {g ^ {+} (c)} ^ {1} (\mathrm {P} _ {c} ^ {1} F ^ {- 1} (y) - F ^ {- 1} (y)) g ^ {\prime} (1 - y) d y \leq g ^ {\prime} (0) c = L _ {1} (\rho_ {g}; F, c) c +$$ + +$$ +\rho_ {g} (F) - \rho_ {g} (\mathrm {N} _ {c} ^ {1} F) = \int_ {g ^ {-} (c)} ^ {1} (F ^ {- 1} (y) - \mathrm {N} _ {c} ^ {1} F ^ {- 1} (y)) g ^ {\prime} (1 - y) d y \leq g ^ {\prime} (0) c. +$$ + +Corollary D.9 (CVaR). For $F \in \mathcal{D}([a,b])$ , it holds that + +$$ +C _ {\alpha} (\mathrm {P} _ {c} ^ {1} F) - C _ {\alpha} (F) = \frac {c}{\alpha} = L _ {1} (C _ {\alpha}; F, c) c, +$$ + +$$ +C _ {\alpha} (F) - C _ {\alpha} (\mathrm {N} _ {c} ^ {1} F) = \frac {c}{\alpha} = L _ {1} (C _ {\alpha}; F, c) c. +$$ + +Proposition D.10 (CE). For $F \in \mathcal{D}([a, b])$ , it holds that + +$$ +\begin{array}{l} C _ {u} \left(\mathrm {P} _ {c} ^ {1} F\right) - C _ {u} (F) \leq \| u ^ {\prime} \| _ {\infty} (u ^ {- 1}) ^ {\prime} \left(\int_ {a} ^ {b} u (x) d F (x)\right) \cdot c \\ \leq \| u ^ {\prime} \| _ {\infty} (u ^ {- 1}) ^ {\prime} \left(\int_ {a} ^ {b} u (x) d \mathrm {N} _ {c} ^ {1} F (x)\right) \cdot c, \\ \end{array} +$$ + +$$ +C _ {u} (F) - C _ {u} (\mathrm {N} _ {c} ^ {1} F) \leq \| u ^ {\prime} \| _ {\infty} (u ^ {- 1}) ^ {\prime} \left(\int_ {a} ^ {b} u (x) d \mathrm {N} _ {c} ^ {1} F (x)\right) \cdot c. +$$ + +Corollary D.11 (ERM). 
For $F \in \mathcal{D}([a, b])$, it holds that (assume $\beta > 0$)

$$
U_{\beta}\left(\mathrm{P}_{c}^{1} F\right) - U_{\beta}(F) \leq \frac{\exp(\beta b)}{\int_{a}^{b} \exp(\beta x) \, d F(x)} \cdot c \leq \frac{L_{\infty}(U_{\beta}) c}{\int_{a}^{b} \exp(\beta x) \, d \mathrm{N}_{c}^{1} F(x)},
$$

$$
U_{\beta}(F) - U_{\beta}(\mathrm{N}_{c}^{1} F) \leq \frac{L_{\infty}(U_{\beta}) c}{\int_{a}^{b} \exp(\beta x) \, d \mathrm{N}_{c}^{1} F(x)}.
$$

# E. Proof of Proposition 6.1

Proof. Observe that the regret decomposes as $\mathrm{Regret}(\mathrm{LCB},\nu ,T) = \sum_{i = 1}^{K}\Delta_{i}\mathbb{E}[N_{i}(T)]$. We will bound $\mathbb{E}[N_i(T)]$ for each suboptimal arm $i\neq 1$. Recall that $\underline{F}_{i,t} = \mathrm{N}_{c_i(t)}^{\infty}\hat{F}_{i,t}$. Denote by $\hat{F}_i^s$ the empirical CDF of arm $i$ after observing $s$ samples; therefore $\hat{F}_{i,t} = \hat{F}_i^{N_i(t)}$. Without loss of generality, we assume the first arm is the optimal arm, i.e.,

$$
C_{\alpha}(F_{1}) \leq C_{\alpha}(F_{i}), \quad \forall i \in [K].
$$

Define the good event for arm $i$ as

$$
G_{i} = \left\{C_{\alpha}(F_{1}) > \max_{t \in [T]} C_{\alpha}(\underline{F}_{1, t})\right\} \cap \left\{C_{\alpha}\left(\mathrm{N}_{\sqrt{\frac{\log(2/\delta)}{2 u_{i}}}}^{\infty} \hat{F}_{i}^{u_{i}}\right) > C_{\alpha}(F_{1})\right\}.
$$

We claim that if $G_{i}$ occurs, then $N_{i}(T) \leq u_{i}$. The proof is by contradiction. Suppose $N_{i}(T) > u_{i}$; then there exists some round $t \in [T]$ such that $N_{i}(t) = u_{i}$ and $I_{t} = i$.
It follows that

$$
\begin{array}{l} C_{\alpha}(\underline{F}_{i, t}) = C_{\alpha}(\mathrm{N}_{c_{i}(t)}^{\infty} \hat{F}_{i, t}) \\ = C_{\alpha}\left(\mathrm{N}_{\sqrt{\frac{\log(2/\delta)}{2 u_{i}}}}^{\infty} \hat{F}_{i}^{u_{i}}\right) \\ > C_{\alpha}(F_{1}) \\ > C_{\alpha}(\underline{F}_{1, t}), \\ \end{array}
$$

where the inequalities come from the definition of $G_{i}$. Hence $I_{t} = \arg \min_{i\in [K]}C_{\alpha}(\underline{F}_{i,t})\neq i$, which leads to a contradiction. Using the tower property,

$$
\mathbb{E}\left[N_{i}(T)\right] = \mathbb{E}\left[N_{i}(T) \mid G_{i}\right] \mathbb{P}\left(G_{i}\right) + \mathbb{E}\left[N_{i}(T) \mid G_{i}^{c}\right] \mathbb{P}\left(G_{i}^{c}\right) \leq u_{i} + T \mathbb{P}\left(G_{i}^{c}\right).
$$

Next, we show that $\mathbb{P}(G_i^c)$ is small. By the union bound, we have

$$
\mathbb{P}(G_{i}^{c}) \leq \mathbb{P}\left(C_{\alpha}(F_{1}) \leq \max_{t \in [T]} C_{\alpha}(\underline{F}_{1, t})\right) + \mathbb{P}\left(C_{\alpha}\left(\mathrm{N}_{\sqrt{\frac{\log(2/\delta)}{2 u_{i}}}}^{\infty} \hat{F}_{i}^{u_{i}}\right) \leq C_{\alpha}(F_{1})\right).
$$

By Theorem 4.1, for any $t \in [T]$, if $F_1 \in B(\hat{F}_{1,t}, c_1(t))$ then $\underline{F}_{1,t} = \mathrm{N}_{c_1(t)}^{\infty} \hat{F}_{1,t} \succeq F_1$, and $C_{\alpha}(\underline{F}_{1,t}) \leq C_{\alpha}(F_1)$. Hence the first term on the r.h.s.
can be bounded as

$$
\begin{array}{l} \mathbb{P}\left(C_{\alpha}(F_{1}) \leq \max_{t \in [T]} C_{\alpha}(\underline{F}_{1, t})\right) = \mathbb{P}\left(\exists t \in [T]: C_{\alpha}(F_{1}) \leq C_{\alpha}(\underline{F}_{1, t})\right) \\ \leq \mathbb{P}\left(\exists t \in [T]: \left\|F_{1} - \hat{F}_{1, t}\right\|_{\infty} \geq \sqrt{\frac{\log(2/\delta)}{2 N_{1}(t)}}\right) \\ \leq \mathbb{P}\left(\cup_{s \in [T]} \left\{\left\|F_{1} - \hat{F}_{1}^{s}\right\|_{\infty} \geq \sqrt{\frac{\log(2/\delta)}{2 s}}\right\}\right) \\ \leq T\delta, \\ \end{array}
$$

where the last inequality follows from a union bound and the DKW inequality. Denote $c_{i} \triangleq \sqrt{\frac{\log(2/\delta)}{2u_{i}}}$. By Corollary D.5, we have that $C_{\alpha}(F) - C_{\alpha}(\mathrm{N}_{c}^{\infty}F) \leq \frac{b - F^{-1}(1 - \alpha - c)}{\alpha} c$. If the event $\left\{\left\|\hat{F}_i^{u_i} - F_i\right\|_\infty < c_i\right\}$ occurs, then

$$
C_{\alpha}(\hat{F}_{i}^{u_{i}}) - C_{\alpha}\left(\mathrm{N}_{c_{i}}^{\infty} \hat{F}_{i}^{u_{i}}\right) \leq \frac{b - (\hat{F}_{i}^{u_{i}})^{-1}(1 - \alpha - c_{i})}{\alpha} c_{i}
$$

and

$$
C_{\alpha}(F_{i}) - C_{\alpha}(\hat{F}_{i}^{u_{i}}) \leq C_{\alpha}(\mathrm{P}_{c_{i}}^{\infty} \hat{F}_{i}^{u_{i}}) - C_{\alpha}(\hat{F}_{i}^{u_{i}}) \leq \frac{b - (\mathrm{P}_{c_{i}}^{\infty} \hat{F}_{i}^{u_{i}})^{-1}(1 - \alpha - c_{i})}{\alpha} c_{i}.
$$

Combining these inequalities,

$$
\begin{array}{l} C_{\alpha}(F_{i}) - C_{\alpha}\left(\mathrm{N}_{c_{i}}^{\infty} \hat{F}_{i}^{u_{i}}\right) \leq \left(\frac{b - (\hat{F}_{i}^{u_{i}})^{-1}(1 - \alpha - c_{i})}{\alpha} + \frac{b - (\mathrm{P}_{c_{i}}^{\infty} \hat{F}_{i}^{u_{i}})^{-1}(1 - \alpha - c_{i})}{\alpha}\right) c_{i} \\ \leq 2\frac{b - (\hat{F}_{i}^{u_{i}})^{-1}(1 - \alpha - c_{i})}{\alpha} c_{i} \\ \leq 2\frac{b - F_{i}^{-1}(1 - \alpha - 2c_{i})}{\alpha} c_{i}. \\ \end{array}
$$

We choose $u_{i}$ such that $\Delta_{i} = C_{\alpha}(F_{i}) - C_{\alpha}(F_{1}) \geq 2\frac{b - F_{i}^{-1}(1 - \alpha - 2c_{i})}{\alpha} c_{i}$; then the second term on the r.h.s. can be bounded as

$$
\begin{array}{l} \mathbb{P}\left(C_{\alpha}\left(\mathrm{N}_{c_{i}}^{\infty} \hat{F}_{i}^{u_{i}}\right) \leq C_{\alpha}(F_{1})\right) = \mathbb{P}\left(C_{\alpha}(F_{i}) - C_{\alpha}\left(\mathrm{N}_{c_{i}}^{\infty} \hat{F}_{i}^{u_{i}}\right) \geq \Delta_{i}\right) \\ \leq \mathbb{P}\left(C_{\alpha}(F_{i}) - C_{\alpha}\left(\mathrm{N}_{c_{i}}^{\infty} \hat{F}_{i}^{u_{i}}\right) \geq 2\frac{b - F_{i}^{-1}(1 - \alpha - 2c_{i})}{\alpha} c_{i}\right) \\ \leq \mathbb{P}\left(\left\|\hat{F}_{i}^{u_{i}} - F_{i}\right\|_{\infty} \geq c_{i}\right) \\ \leq \delta. \\ \end{array}
$$

Hence, the probability of $G_{i}^{c}$ is bounded as $\mathbb{P}(G_i^c)\leq (T + 1)\delta$. It follows that

$$
\mathbb{E}\left[N_{i}(T)\right] \leq u_{i} + T(T + 1)\delta.
$$

Define $h_i(c) \triangleq 2\frac{b - F_i^{-1}(1 - \alpha - 2c)}{\alpha} c$. Let $c_i^* \triangleq h_i^{-1}(\Delta_i)$ be the solution to the equation

$$
h_{i}(c) = 2\frac{b - F_{i}^{-1}(1 - \alpha - 2c)}{\alpha} c = \Delta_{i}.
$$

Note that $c_i^*$ is a distribution-dependent constant.
We let $u_i := \left\lceil \frac{\log(2 / \delta)}{2(c_i^*)^2} \right\rceil$ and set $\delta = \frac{1}{T^2}$:

$$
\mathbb{E}[N_{i}(T)] \leq \left\lceil \frac{\log(2T^{2})}{2(c_{i}^{*})^{2}} \right\rceil + 2 \leq \frac{\log(\sqrt{2}T)}{(c_{i}^{*})^{2}} + 3.
$$

Substituting this into the regret decomposition, we get

$$
\begin{array}{l} \sum_{i = 1}^{K} \Delta_{i} \mathbb{E}[N_{i}(T)] \leq \log(\sqrt{2}T) \sum_{i = 2}^{K} \frac{\Delta_{i}}{\left(c_{i}^{*}\right)^{2}} + 3\sum_{i = 1}^{K} \Delta_{i} \\ = \frac{4\log(\sqrt{2}T)}{\alpha^{2}} \sum_{i = 2}^{K} \frac{\left(b - F_{i}^{-1}(1 - \alpha - 2c_{i}^{*})\right)^{2}}{\Delta_{i}} + 3\sum_{i = 1}^{K} \Delta_{i}. \\ \end{array}
$$

# F. Algorithms

We present several comprehensive algorithms that output confidence bounds for a given risk measure when given $n$ i.i.d. samples and a confidence radius as input. Algorithm 2 and Algorithm 3 compute the UCB and LCB of a risk measure via the Wasserstein distance, respectively. Meanwhile, Algorithm 4 and Algorithm 5 compute the UCB and LCB of a risk measure via the supremum distance, respectively.
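Each of these algorithms ends by evaluating the risk measure $\mathbf{T}$ on a discrete distribution built from reweighted samples. As a minimal illustration of that final step (a sketch of our own, not code from the paper), the following Python snippet evaluates CVaR, $C_{\alpha}(F) = \frac{1}{\alpha}\int_{1-\alpha}^{1}F^{-1}(y)\,dy$, on a distribution given by atoms and weights; the function name `cvar_discrete` is ours:

```python
def cvar_discrete(atoms, weights, alpha):
    """CVaR at level alpha of a finite distribution: the mean of the
    upper alpha-tail, C_alpha(F) = (1/alpha) * int_{1-alpha}^1 F^{-1}(y) dy."""
    assert abs(sum(weights) - 1.0) < 1e-9
    pairs = sorted(zip(atoms, weights), reverse=True)  # largest outcomes first
    remaining, acc = alpha, 0.0
    for x, w in pairs:
        take = min(w, remaining)  # tail mass contributed by this atom
        acc += take * x
        remaining -= take
        if remaining <= 0.0:
            break
    return acc / alpha

# 0 with prob 0.9, 1 with prob 0.1: the top 20% tail holds 0.1 mass at 1
print(cvar_discrete([0.0, 1.0], [0.9, 0.1], 0.2))  # prints 0.5
```

With $\alpha = 1$ this reduces to the plain mean, and as $\alpha$ shrinks it concentrates on the largest atoms, matching the quantile-integral definition used throughout the appendix.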
Algorithm 2 Wasserstein upper confidence bound
1: Input: $b$ , samples $\mathbf{X} = X_{1},X_{2},\dots ,X_{n}$ , risk measure $\mathbf{T}$ , $c > 0$
2: Sort the $n$ samples in ascending order $X_{(1)}\leq X_{(2)}\leq \dots \leq X_{(n)}$
3: Initialize $S_{1} = \frac{b - X_{(n)}}{n}$
4: for $i = 1:n$ do
5: if $S_{i}\leq c$ then
6: $S_{i + 1} = S_{i} + \frac{1}{n}(b - X_{(n - i)})$
7: else
8: $n^{\prime} = n + 1 - i$
9: break
10: end if
11: end for
12: $p_{n^{\prime}} = \frac{S_i - c}{b - X_{(n^{\prime})}}, p_b = \frac{i}{n} - p_{n^{\prime}}$
13: $\overline{F_n^1} = \frac{1}{n}\sum_{i = 1}^{n' - 1}\mathbb{I}\{X_{(i)}\leq \cdot \} + p_{n'}\mathbb{I}\{X_{(n')} \leq \cdot \} + p_b\mathbb{I}\{b \leq \cdot \}$
14: Output: $\mathbf{T}\left(\overline{F_n^1}\right)$

Algorithm 3 Wasserstein lower confidence bound
1: Input: $a$ , samples $\mathbf{X} = X_{1}, X_{2}, \dots, X_{n}$ , risk measure $\mathbf{T}$ , $c > 0$
2: Sort the $n$ samples in ascending order $X_{(1)} \leq X_{(2)} \leq \dots \leq X_{(n)}$
3: Initialize $S_{1} = \frac{X_{(n)} - X_{(n-1)}}{n}$
4: for $i = 1:n-1$ do
5: if $S_{i} \leq c$ then
6: $S_{i+1} = S_{i} + \frac{i+1}{n}(X_{(n-i)} - X_{(n-1-i)})$
7: else
8: $n' = n+1-i$
9: break
10: end if
11: end for
12: $b^{-} = X_{(n'-1)} + \frac{n(S_{i}-c)}{i}$
13: $\underline{F_{n}^{1}} = \frac{1}{n}\sum_{i = 1}^{n'-1}\mathbb{I}\{X_{(i)} \leq \cdot\} + \frac{i}{n}\mathbb{I}\{b^{-} \leq \cdot\}$
14: Output: $\mathbf{T}\left(\underline{F_{n}^{1}}\right)$

# F.1. Time Complexity

We start with Algorithm 2. The sorting of $n$ samples incurs $\mathcal{O}(n\log n)$. The for-loop costs $\mathcal{O}(n)$ since the cost in each iteration is $\mathcal{O}(1)$. Therefore the total time complexity is $\mathcal{O}(n\log n)$. The time complexity of Algorithm 3-Algorithm 5 is likewise $\mathcal{O}(n\log n)$.

# F.2. Space Complexity

Consider Algorithm 2. The space complexity of storing the samples is $\mathcal{O}(n)$.
In addition, storing $S_{i}$, $p_{n'}$, and $p_{b}$ costs $\mathcal{O}(n)$. The total space complexity is $\mathcal{O}(n)$. It is easy to check that the space complexity of Algorithm 3-Algorithm 5 is also $\mathcal{O}(n)$.

# G. Experiments

# G.1. Confidence Bounds

We consider five beta distributions with different parameters. The specific parameters $(A,B)$ are shown above each figure. Unless otherwise specified, we always use $N = 10^{5}$ samples, $\alpha = 0.05$, $\beta = 1$, and $\delta = 0.05$. For convenience, we use $c_{n}^{1} = (b - a)c_{n}^{\infty}$. We plot the CIs for ERM and CVaR for varying sample size and varying risk parameter in Figures 6-13.
Algorithm 4 Supremum upper confidence bound
1: Input: $b$ , samples $\mathbf{X} = X_{1},X_{2},\dots ,X_{n}$ , risk measure $\mathbf{T}$ , $c > 0$
2: Sort the $n$ samples in ascending order $X_{(1)}\leq X_{(2)}\leq \dots \leq X_{(n)}$
3: Initialize $i = 1$
4: while $i/n \leq c$ do
5: $i = i + 1$
6: $l = i$
7: end while
8: $\overline{F_n^{\infty}} = \left(\frac{l}{n} - c\right)\mathbb{I}\{X_{(l)}\leq \cdot\} + \frac{1}{n}\sum_{i = l + 1}^{n}\mathbb{I}\{X_{(i)}\leq \cdot\} + c\,\mathbb{I}\{b \leq \cdot\}$
9: Output: $\mathbf{T}\left(\overline{F_n^{\infty}}\right)$

Algorithm 5 Supremum lower confidence bound
1: Input: $a$ , samples $\mathbf{X} = X_{1},X_{2},\dots ,X_{n}$ , risk measure $\mathbf{T}$ , $c > 0$
2: Sort the $n$ samples in ascending order $X_{(1)}\leq X_{(2)}\leq \dots \leq X_{(n)}$
3: Initialize $i = n$
4: while $i/n + c \geq 1$ do
5: $i = i - 1$
6: $l = i$
7: end while
8: $\underline{F_n^{\infty}} = c\,\mathbb{I}\{a \leq \cdot\} + \frac{1}{n}\sum_{i = 1}^{l}\mathbb{I}\{X_{(i)}\leq \cdot\} + \left(1 - \frac{l}{n} - c\right)\mathbb{I}\{X_{(l + 1)}\leq \cdot\}$
9: Output: $\mathbf{T}\left(\underline{F_n^{\infty}}\right)$
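To make the construction in Algorithm 4 concrete, here is a Python sketch (our own illustration, not the authors' code) that builds the atoms and weights of the shifted empirical CDF $\overline{F_n^{\infty}}$: probability mass $c$ is moved from the smallest order statistics up to the bound $b$. We assign $l$ after the loop, which matches the effect of setting it inside the loop body, and we assume $0 < c < 1$:

```python
def sup_ucb_atoms(samples, b, c):
    """Atoms and weights of the supremum-distance upper CDF (Algorithm 4):
    shift probability mass c from the lower tail of the empirical CDF to b."""
    x = sorted(samples)
    n = len(x)
    i = 1
    while i / n <= c:   # discard whole 1/n atoms until their mass exceeds c
        i += 1
    l = i               # first order statistic keeping positive mass (1-based)
    atoms = [x[l - 1]] + x[l:] + [b]
    weights = [l / n - c] + [1.0 / n] * (n - l) + [c]
    return atoms, weights

atoms, weights = sup_ucb_atoms([0.1, 0.2, 0.3, 0.4], b=1.0, c=0.3)
# weights sum to 1, and any monotone risk measure of this distribution
# upper-bounds its value at the empirical CDF
```

Algorithm 5 is the mirror image: mass $c$ is moved from the largest order statistics down to $a$, yielding a stochastically smaller distribution.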
+ +# G.2. CVaR Bandit + +We adopt the same bandit instances as Tamkin et al. (2019). The parameters of these distributions are given in Table 1 in Tamkin et al. (2019). The left, middle, and right part of Figure 14 plots the results for easy bandit instance with $\alpha = 0.25$ , hard bandit instance with $\alpha = 0.25$ , and hard bandit instance with $\alpha = 0.05$ . As expected, CVaR-UCB consistently outperforms LLC-UCB and GLC-UCB. + +![](images/800dc5886e012a7875fb2165ee872be2fb37e48002bd6c1cc2b15ebb21303a16.jpg) + +![](images/6cf0cc996a092ba37089fe5d31925c9b206b3a58670f38f4fa27503a8742b6b6.jpg) +Figure 6: CVaR UCB with varying sample size + +![](images/d526d35f7a2edfe20de79e98fe451802c3fda98b4090b67d080c94375f839aad.jpg) + +![](images/586d24b8420b81e552ec9c7d268a4f8963d27df098361d4871cfbaaa59895eee.jpg) + +![](images/72023787cfdc3e8c3278e29b97275ae532899360fc5cabda34b0520b3c05188d.jpg) + +![](images/f56cc0e76072b09557180cd54aaa008e5c53bd3b266c9d4ee3a048a2f0c2ad19.jpg) + +![](images/07cb9e0c07608080f0948a6a8e466e42397053454a34c58d553d38e19e25279e.jpg) +Figure 7: CVaR LCB with varying sample size + +![](images/447dabfa863ad81ab7aa9356358928a3b58d0fad714f7135ebd24ba329eaa048.jpg) + +![](images/d879810d46bae240ec6ce8e514133eb26c273e24f1306de391de87be04364173.jpg) + +![](images/2018ceb97070545abb6be5070061b077547607bf85ce6b50d819b4050a7a5d5c.jpg) + +![](images/64190f64b4c65131e7451628298e66e6dc6a907ad5d13554b91ed3f5256abd8c.jpg) + +![](images/877135096dc7c0c50e2fe4437b1cb6073d2297d296d343e8c1b8ca714be6c02b.jpg) +Figure 8: CVaR UCB with varying $\alpha$ + +![](images/1cdd860cfcdfd4f6754177960d202e710150dac5ef4a5fae543f371b7127f18f.jpg) + +![](images/ef4b20514d33e2aff6fd059e3cebb18e2ce99c34a00c7d83ac73faa048943394.jpg) + +![](images/076bfcf8fab8cb6e99f664cd786ffe230fa0ec9cddb4faa7296f71e81eeac97f.jpg) + +![](images/151d91a5c2fb329f3df35c6b649c49aa1d444d987175c6a043ecc60d9d358e65.jpg) + 
+![](images/8ce7980160b8b7efdc42ddf0732afb754e314e56bd77d5bc5ad7b51b99594eeb.jpg) +Figure 9: CVaR LCB with varying $\alpha$ + +![](images/4d3116ad01b8be84eeb7b283e7b77a10618585817c4fec98b26cd5ddb5d559c8.jpg) + +![](images/4d156bb9c8ace97a9f6d822d46ebad7911eadcfd73f645605bcc43555563137c.jpg) + +![](images/af49a289757ed718f4e37525ed82397dde39cf75e6d42e2a256f45d7c471e200.jpg) + +![](images/00b995cc898af385f24cc957a2f92feefda6cbd08187adb72bdd575b8f51b317.jpg) + +![](images/330082c295c7e2a2975429a7bae2b20c18c1ecb88453819779f5fe227df40136.jpg) + +![](images/14292d7eeb39506823c0d6ccd0a28b7a49ce69e586d3213fc17e32358c01ce25.jpg) + +![](images/e98d099c8fdac82ba158d8cfdc526b55b7b1061ac05989f1052c289a166d1703.jpg) +Figure 10: ERM CI with varying sample size + +![](images/332683d2a1dd0d81bcd697290a9e56306e6cc873aa5abe2c44d91ab269e36ff5.jpg) + +![](images/04af87bf6a4f381a31aedc94377ada3476b4fe2bf76f268ccf1f2dafdabc94d7.jpg) + +![](images/9730f1b210b4ddd94132bac676af0c492f7bd50e3123de1b35e128e221e2ed32.jpg) + +![](images/3fa9c99d5a34c28af877fafb868369ecc9f970e21daf4e873c4adf1ee3f9c365.jpg) + +![](images/9ee06f7fee9a5efffdbd67c7ac4efd33f38e42092211674093e50fe9ff2620da.jpg) + +![](images/6749793859109aad5ac9efa29ea7cd44556424be0e9af1574d243469206eb82f.jpg) +Figure 11: ERM CI with varying $\beta$ + +![](images/621a465b54fffb54ac65c3b06078814e0357b41eb6f7ffef6f696d6ee558ef90.jpg) + +![](images/435f94c630d2971cff31748f666838744490494ea9cbd9d1a4c6597f1fb2164a.jpg) + +![](images/c4e5519f9dcdfd6700445f1262fdb8632f150586af92c308e104b3b583934b9a.jpg) + +![](images/5c5bfc614bf6897e5b0c6f16659b8a55f0c7de296dcb407f5012fb27179f4edc.jpg) + +![](images/fd123159ee15853c27774e238655c17c1006a40706578293b28e26a11060bb00.jpg) + +![](images/96c1c5b0f597a9e4a54527fef96fea9331b5d8133861a1a9ab3d3e353bd03141.jpg) +Figure 12: ERM CI with varying sample size + +![](images/bc31bf87e00ff9d0b4fe52117899501b2e27b3c5f91ab19322db08f382315a00.jpg) + 
+![](images/3879aaae40ef5fbafcba854dc4437bea73fd191e82f1174e4e5149cfee500cd2.jpg) + +![](images/c2fdfb06e4ce648587390785fda08d19b98889d492006e078f6e84b67229214d.jpg) + +![](images/d2e0bbcb4d0fdd28f12029443cc8bdd66cf9d02a5651ac48339e6ce5299cdb95.jpg) + +![](images/959adacb4845ee3917eb56fdfdac675a42e81fd02516665221ff639a80adc8c1.jpg) + +![](images/2b41be916f46aab5b44c2b1856909fab363e1551bda426092629aa48303e29ad.jpg) +Figure 13: ERM CI with varying $\beta$ + +![](images/c60ee770828ec94a8c86ca8d68bf4641cf63ffa00e03d8996de9f5c9f395dc0d.jpg) + +![](images/461f09397499eb18f7e8c1e3a8f53433ff36c31cba55045033260c1ce6129bb3.jpg) + +![](images/2fea393464bf2f39caeadd29a519c12f35fff98fd685214c639c17b0df9fa621.jpg) + +![](images/31627781f311ffa1d67d8879eddd54befaa59f00507f67f78d3318746604cf8c.jpg) +Figure 14: CVaR bandit + +![](images/f74415ecece873d8ed62644d39f41b40206f20859b30b4b55b3c13f6627a4831.jpg) \ No newline at end of file diff --git a/adistributionoptimizationframeworkforconfidenceboundsofriskmeasures/images.zip b/adistributionoptimizationframeworkforconfidenceboundsofriskmeasures/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..9b72f24188a43a7a589bf0d240f15c49e42f442c --- /dev/null +++ b/adistributionoptimizationframeworkforconfidenceboundsofriskmeasures/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:fc93b98e1f1322aa26c23de2c87824673c5f508094da06e0d32681f62b9829a4 +size 2066515 diff --git a/adistributionoptimizationframeworkforconfidenceboundsofriskmeasures/layout.json b/adistributionoptimizationframeworkforconfidenceboundsofriskmeasures/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..d10f6736935689cf8910cb3d49d18c24952a88eb --- /dev/null +++ b/adistributionoptimizationframeworkforconfidenceboundsofriskmeasures/layout.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:a1c7baf1f8be0a7d22ee807b598cc57fe0e12d4fa43daf6520360bc8b5987ffe +size 1356019 
diff --git a/afastoptimisticmethodformonotonevariationalinequalities/5f174af3-ebf4-4b6a-951b-431651b835af_content_list.json b/afastoptimisticmethodformonotonevariationalinequalities/5f174af3-ebf4-4b6a-951b-431651b835af_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..6a9e1c0b2cacd0438ae3312fc70e7e62173a63dc --- /dev/null +++ b/afastoptimisticmethodformonotonevariationalinequalities/5f174af3-ebf4-4b6a-951b-431651b835af_content_list.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:14a3d4ce8bba9413a1de9da05f80ae3c4e814b411f66955e0e129a4640dc42eb +size 234116 diff --git a/afastoptimisticmethodformonotonevariationalinequalities/5f174af3-ebf4-4b6a-951b-431651b835af_model.json b/afastoptimisticmethodformonotonevariationalinequalities/5f174af3-ebf4-4b6a-951b-431651b835af_model.json new file mode 100644 index 0000000000000000000000000000000000000000..27cebc55cf9bbd3faf9d20e025981ecfc8579d61 --- /dev/null +++ b/afastoptimisticmethodformonotonevariationalinequalities/5f174af3-ebf4-4b6a-951b-431651b835af_model.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:e301023e84d9df0920f18f554d9bb7915c9a47276ab1a783e66c758e822fa0fd +size 269632 diff --git a/afastoptimisticmethodformonotonevariationalinequalities/5f174af3-ebf4-4b6a-951b-431651b835af_origin.pdf b/afastoptimisticmethodformonotonevariationalinequalities/5f174af3-ebf4-4b6a-951b-431651b835af_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..224404955c845a4bcccdec6af26b2c95fd97a04c --- /dev/null +++ b/afastoptimisticmethodformonotonevariationalinequalities/5f174af3-ebf4-4b6a-951b-431651b835af_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:e2e8ac8c9aff2c31b5b83c2e12d9d59586bcd28ef491c7cc325b4d87c09c685e +size 1250933 diff --git a/afastoptimisticmethodformonotonevariationalinequalities/full.md b/afastoptimisticmethodformonotonevariationalinequalities/full.md new file mode 
100644 index 0000000000000000000000000000000000000000..cd0b625b916f0c9554b208b806b169deca9a6123 --- /dev/null +++ b/afastoptimisticmethodformonotonevariationalinequalities/full.md @@ -0,0 +1,1420 @@ +# A Fast Optimistic Method for Monotone Variational Inequalities + +Michael Sedlmayer¹ Dang-Khoa Nguyen² Radu Ioan Bot¹² + +# Abstract + +We study monotone variational inequalities that can arise as optimality conditions for constrained convex optimisation or convex-concave minimax problems and propose a novel algorithm that uses only one gradient/operator evaluation and one projection onto the constraint set per iteration. The algorithm, which we call $fOGDA-VI$ , achieves a $o(1/k)$ rate of convergence in terms of the restricted gap function as well as the natural residual for the last iterate. Moreover, we provide a convergence guarantee for the sequence of iterates to a solution of the variational inequality. These are the best theoretical convergence results for numerical methods for (only) monotone variational inequalities reported in the literature. To empirically validate our algorithm we investigate a two-player matrix game with mixed strategies of the two players. Concluding, we show promising results regarding the application of fOGDA-VI to the training of generative adversarial nets. + +# 1. Introduction + +Variational inequalities are fundamental models in various fields such as optimisation, e.g., when determining primal-dual pairs of optimal solutions of constrained convex optimisation problems (Bauschke & Combettes, 2011), economics, game theory (Morgenstern & Von Neumann, 1953), or partial differential equations. 
Recently, they have attracted particularly significant attention in the area of machine learning due to the fundamental role they play, for instance, in multi agent reinforcement learning (Omidshafiei et al., 2017), robust adversarial learning (Madry et al., 2018) and the training of generative adversarial networks (GANs) (Goodfellow et al., 2014; Goodfellow, 2016). + +$^{1}$ Research Network Data Science, University of Vienna, Vienna, Austria $^{2}$ Faculty of Mathematics, University of Vienna, Vienna, Austria. Correspondence to: Michael Sedlmayer . + +Proceedings of the $40^{th}$ International Conference on Machine Learning, Honolulu, Hawaii, USA. PMLR 202, 2023. Copyright 2023 by the author(s). + +# 1.1. Problem Setting + +In the following we consider $\mathbb{R}^d$ with its standard inner product denoted by $\langle \cdot, \cdot \rangle$ and induced norm $\|\cdot\|$ . Let $F: \mathbb{R}^d \to \mathbb{R}^d$ be a monotone operator, i.e., + +$$ +\langle F (w) - F (z), w - z \rangle \geq 0 \quad \forall w, z \in \mathbb {R} ^ {d}, +$$ + +which is also $L$ -Lipschitz continuous, i.e., + +$$ +\| F (w) - F (z) \| \leq L \| w - z \| \quad \forall w, z \in \mathbb {R} ^ {d}. +$$ + +Furthermore, let $C$ be a nonempty closed convex subset of $\mathbb{R}^d$ . Then the (strong) classical variational inequality problem consists of finding $z^{*}\in C$ such that + +$$ +\langle F (z ^ {*}), z - z ^ {*} \rangle \geq 0 \quad \forall z \in C. \tag {1} +$$ + +For the following considerations we assume that the solution set of (1) is nonempty, i.e., $\Omega := \{z^{*} \in C \mid \langle F(z^{*}), z - z^{*} \rangle \geq 0 \quad \forall z \in C\} \neq \emptyset$ . + +Note that in the case of $F$ being monotone and continuous, the above strong formulation is equivalent to the following problem, + +$$ +\langle F (z), z - z ^ {*} \rangle \geq 0 \quad \forall z \in C, \tag {2} +$$ + +which is known as the weak version of the variational inequality. 
Writing $N_C(z) \coloneqq \{w \in \mathbb{R}^d \mid \langle v - z, w \rangle \leq 0 \forall v \in C\}$ , for $z \in C$ , and $N_C(z) \coloneqq \emptyset$ , for $z \notin C$ , to denote the normal cone of $C$ , condition (1) is equivalent to the following monotone inclusion, where we want to find $z^* \in \mathbb{R}^d$ such that + +$$ +0 \in F \left(z ^ {*}\right) + N _ {C} \left(z ^ {*}\right). \tag {3} +$$ + +# 1.2. Contribution + +We introduce an accelerated first order method for solving the constrained variational inequality problem (1) that uses a single operator evaluation and a single projection in each iteration. Our proposed algorithm, called fOGDA-VI, exhibits a $o(1 / k)$ rate of convergence for the last iterate which is better than the $\mathcal{O}(1 / k)$ results for other accelerated algorithms. Moreover, fOGDA-VI exhibits convergence of the generated sequence to a solution of the variational inequality under investigation, which is not necessarily the case for other accelerated methods (Cai et al., 2022a; Cai & Zheng, 2022) that have been proposed for (1). + +# 1.3. Overview + +This paper is structured as follows. In Section 2 we discuss suitable convergence measures and (accelerated) solution methods for monotone variational inequalities governed by a monotone and Lipschitz operator. The algorithm fOGDA-VI and the accompanying convergence results are presented in Section 3, which is followed by illustrations of the empirical performance of the proposed method when solving two-player matrix games and in the training of GANs in Section 4. + +# 2. Solving Variational Inequalities + +In this section we recall appropriate measures of convergence for solution methods for monotone variational inequalities and provide an overview on the most important solution methods from the literature, both nonaccelerated and accelerated ones, for solving (1). + +# 2.1. 
Convergence Measures

We start by presenting three suitable measures that are commonly used to judge the quality of prospective solutions.

Restricted gap function For $z^{*} \in \Omega, z_{0} \in \mathbb{R}^{d}$ and $\delta(z_{0}) := \|z^{*} - z_{0}\|$, the restricted gap function associated with the variational inequality (1) is defined as

$$
\operatorname{Gap}(z) := \sup_{w \in C \cap \mathbb{B}(z^{*}; \delta(z_{0}))} \langle F(w), z - w \rangle \geq 0.
$$

It is also known as a merit function (Nesterov, 2007) and it measures how much the statement of (2) is violated.

In the above definition, $\mathbb{B}(z;\delta) := \{w \in \mathbb{R}^d \mid \| w - z\| \leq \delta\}$ denotes the closed ball centred at $z \in \mathbb{R}^d$ with radius $\delta > 0$. The restriction of the supremum to a bounded set, particularly the ball $\mathbb{B}(z^*; \delta(z_0))$ in our case, is essential to avoid an infinitely large gap when $C$ is unbounded.

Tangent residual Another quantity that can be used to measure the quality of a solution candidate with respect to the variational inequality (1) is based on the observation that the latter is equivalent to the monotone inclusion (3). The so-called tangent residual is given by

$$
r(z) := \inf_{\zeta \in N_{C}(z)} \|F(z) + \zeta\|.
$$

In a straightforward way this quantity extends the usual measure $\| F(z)\|$ in the unconstrained setting of monotone equations, where the goal is to find $z^{*}\in \mathbb{R}^{d}$ such that

$$
F(z^{*}) = 0, \tag{4}
$$

to variational inequalities by measuring the distance from 0 to $F(z) + N_{C}(z)$. Note that if $z \notin C$ we have $N_{C}(z) = \emptyset$ and thus $r(z) = +\infty$.

Natural residual Another useful convergence measure is the natural residual, which in fact is upper bounded by the tangent residual, see Section B.1.
For this we write $P_C(z)$ to denote the projection of $z \in \mathbb{R}^d$ onto the closed convex set $C$, which is uniquely defined and given by $P_C(z) = \arg \min_{w \in C} \| w - z \|$. Using the characterisation of the projection via the normal cone (see Proposition 6.46 in (Bauschke & Combettes, 2011)) we observe that

$$
0 \in F(z^{*}) + N_{C}(z^{*}) \Leftrightarrow z^{*} = P_{C}\left[z^{*} - F(z^{*})\right].
$$

This motivates looking at

$$
\operatorname{Res}(z) := \|z - P_{C}[z - F(z)]\|,
$$

which is also known as the fixed point residual. Note that in the unconstrained case (4) the two residuals coincide:

$$
r(z) = \operatorname{Res}(z) = \|F(z)\| \quad \forall z \in \mathbb{R}^{d}.
$$

# 2.2. Solution Methods

In this work we are interested exclusively in first order methods that are fully splitting, i.e., algorithms that only use direct evaluations of the operator $F$ and projections onto $C$ as main building blocks. As a general Lipschitz continuous operator is not necessarily cocoercive, the simplest first order method that is splitting – the Forward-Backward (FB) algorithm – cannot be used to solve (1).

# 2.2.1. NONACCELERATED SOLUTION METHODS

Extragradient (EG) method Korpelevich (1976) and Antipin (1976) proposed to take a second forward evaluation of $F$ in each iteration in order to solve (1). This results in the following scheme for $k \geq 0$

$$
\mathrm{EG}: \left\lfloor \begin{array}{l} w_{k} = P_{C}\left[z_{k} - \gamma F\left(z_{k}\right)\right] \\ z_{k + 1} = P_{C}\left[z_{k} - \gamma F\left(w_{k}\right)\right] \end{array} \right. \tag{5}
$$

which converges to a solution of (1) for $0 < \gamma < 1/L$.
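The EG scheme (5) can be sketched in a few lines of Python. The toy instance below (our own, not from the paper) is the bilinear game $\min_x \max_y xy$ over $C = [-1,1]^2$, whose operator $F(x,y) = (y, -x)$ is monotone and $1$-Lipschitz, with unique solution $(0,0)$; the helper names are ours:

```python
def project_box(z, lo=-1.0, hi=1.0):
    # projection onto C = [lo, hi]^d (componentwise clipping)
    return [min(max(v, lo), hi) for v in z]

def extragradient(F, z0, gamma, steps):
    z = list(z0)
    for _ in range(steps):
        # extrapolation step: w_k = P_C[z_k - gamma * F(z_k)]
        w = project_box([zi - gamma * g for zi, g in zip(z, F(z))])
        # update step: z_{k+1} = P_C[z_k - gamma * F(w_k)]
        z = project_box([zi - gamma * g for zi, g in zip(z, F(w))])
    return z

# bilinear game min_x max_y x*y: F(x, y) = (y, -x), L = 1, solution (0, 0)
F = lambda z: [z[1], -z[0]]
z = extragradient(F, [1.0, 1.0], gamma=0.4, steps=200)  # gamma < 1/L = 1
```

Plain gradient descent-ascent cycles on this operator; the second evaluation at the extrapolated point $w_k$ is exactly what makes the iterates spiral inward.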
It is known that EG converges with a rate of $\mathcal{O}(1/K)$ in terms of the restricted gap function for the averaged, or ergodic, iterates

$$
\bar{w}_{K} := \frac{1}{K} \sum_{k = 1}^{K} w_{k}
$$

in both the unconstrained (Nemirovski, 2004; Nesterov, 2007; Mokhtari et al., 2020) and the constrained case (Hsieh et al., 2019), which further seems to be optimal (Ouyang & Xu, 2021). The best iterate convergence in terms of the tangent residual, however, is known to yield a rate of $\mathcal{O}(1/\sqrt{K})$ (Korpelevich, 1976; Facchinei & Pang, 2003), i.e.,

$$
\min_{1 \leq k \leq K} r(z_{k}) = \mathcal{O}\left(\frac{1}{\sqrt{K}}\right) \quad \text{as } K \to +\infty.
$$

The more desirable last iterate convergence rate for EG was derived only recently in the unconstrained case (Gorbunov et al., 2022), which was then extended to the constrained case as well (Cai et al., 2022b). In fact,

$$
\operatorname{Gap}(z_{k}) = \mathcal{O}\left(\frac{1}{\sqrt{k}}\right) \quad \text{and} \quad r(z_{k}) = \mathcal{O}\left(\frac{1}{\sqrt{k}}\right),
$$

as $k \to +\infty$. The result for the restricted gap function (Golowich et al., 2020b) as well as for the residuals is actually tight, meaning that the convergence rate for the averaged iterates is better than for the last iterate. Nevertheless, we emphasise that the latter one is more appealing and that the averaged iterates might still show acceptable behaviour while the actual trajectory of iterates cycles around the set of solutions (Mertikopoulos et al., 2018).

Popov's method In the saddle point setting Popov (1980) introduced the following algorithm which, when applied to (3), reads for $k \geq 1$

$$
\text{Popov}: \left\lfloor \begin{array}{l} w_{k} = P_{C}\left[z_{k} - \gamma F\left(w_{k - 1}\right)\right] \\ z_{k + 1} = P_{C}\left[z_{k} - \gamma F\left(w_{k}\right)\right] \end{array} \right.
\tag{6}
$$

which converges to a solution of (1) for $0 < \gamma < 1/(2L)$. The update rule of (6) is very similar to (5) but requires $F$ to be evaluated only once per iteration. Indeed, Popov's method differs from EG only in the first block, where $F(z_k)$ is replaced by $F(w_{k-1})$. In the unconstrained case, (6) can be written in one line, yielding a method usually known as Optimistic Gradient Descent Ascent (OGDA), a name that was coined by works on GAN training (Daskalakis et al., 2018; Daskalakis & Panageas, 2018).

Given the close connection between EG and OGDA, it is not surprising that many convergence rate results hold in a similar way. In terms of the restricted gap OGDA converges like $\mathcal{O}(1/k)$ and $\mathcal{O}(1/\sqrt{k})$ for averaged iterates (Mokhtari et al., 2020) and last iterates (Golowich et al., 2020b), respectively, where the latter is optimal. The convergence rate in terms of the residuals is $\mathcal{O}(1/\sqrt{k})$ and it cannot be improved in general (Golowich et al., 2020a; Chavdarova et al., 2021a; Cai et al., 2022b), as seen for EG.

We have seen that both EG and Popov's method require two projections in each iteration. One might think that this is necessary to obtain convergent algorithms for the variational inequality (3) when the operator $F$ is merely Lipschitz continuous and not cocoercive. This is not the case, however, and in the following we will look at algorithms that need only one evaluation of the projection operator per iteration.

Forward-Backward-Forward (FBF) method One of these single-call projection methods is the FBF method. It was proposed by Tseng (2000) and, applied to (3), it iterates for $k \geq 0$,

$$
\mathrm{FBF}: \left\lfloor \begin{array}{l} w_k = P_C\left[z_k - \gamma F(z_k)\right] \\ z_{k+1} = w_k - \gamma F(w_k) + \gamma F(z_k) \end{array} \right. \tag{7}
$$

which converges to a solution of (1) for $0 < \gamma < 1/L$.
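The single-projection structure of (7) is easy to see in code. Here is a minimal sketch on a toy instance of our own choosing: $F(z) = Mz$ with a $2 \times 2$ rotation matrix (the gradient field of the saddle function $xy$) and $C$ the closed unit ball.

```python
import numpy as np

# Toy operator: F(z) = Mz, monotone and Lipschitz with L = 1, not cocoercive.
M = np.array([[0.0, 1.0], [-1.0, 0.0]])
F = lambda z: M @ z
gamma = 0.5                                 # 0 < gamma < 1/L
P_C = lambda z: z / max(1.0, np.linalg.norm(z))   # projection onto the unit ball

z = np.array([1.0, 0.0])
for k in range(500):
    Fz = F(z)
    w = P_C(z - gamma * Fz)                 # the only projection of the iteration
    z = w - gamma * F(w) + gamma * Fz       # corrected forward step, no projection

res = np.linalg.norm(z - P_C(z - F(z)))     # natural residual; here z* = 0
```

The correction term $+\gamma F(z_k)$ in the second line is exactly what replaces EG's second projection.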
Notice that in each iteration this scheme performs two evaluations of $F$, along the sequences $(z_k)_{k\geq 0}$ and $(w_k)_{k\geq 0}$, similar to EG – the first line of FBF and EG is even identical. In the second line, however, instead of performing another projection, the forward step at the intermediate iterate $w_k$ is corrected by the previous update $F(z_k)$. Moreover, for the unconstrained problem (4), i.e., in the absence of projections, FBF and EG are equivalent.

Forward-Reflected-Backward (FRB) method Another single-call projection method, which even requires only one evaluation of $F$ like the basic FB algorithm, was proposed by Malitsky and Tam (2020). The FRB algorithm is given for $k \geq 1$ by

$$
\mathrm{FRB}: \; z_{k+1} = P_C\left[z_k - 2\gamma F(z_k) + \gamma F(z_{k-1})\right] \tag{8}
$$

and converges to a solution of (1) for $0 < \gamma < 1/(2L)$. Note that FRB can be deduced from FBF by reusing $F(w_{k-1})$ instead of $F(z_k)$ in the first line of (7), similarly to how Popov's method can be obtained from EG. Hence (8) coincides with (6) and OGDA in the unconstrained case.

Projected Reflected Gradient (RG) method Before investigating FRB, Malitsky (2015) introduced another similar method where the order of the reflection and the forward step is reversed. In particular, a second forward step can be avoided by evaluating $F$ at an appropriate linear combination of the iterates. This gives rise to the following method for $k \geq 1$,

$$
\mathrm{RG}: \left\lfloor \begin{array}{l} w_k = 2z_k - z_{k-1} \\ z_{k+1} = P_C\left[z_k - \gamma F(w_k)\right] \end{array} \right. \tag{9}
$$

which converges to a solution of (1) for $0 < \gamma < (\sqrt{2} - 1)/L$.
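On the same kind of toy setup (our own illustrative choices: $F(z) = Mz$ with a $2 \times 2$ rotation matrix and $C$ the closed unit ball), the single-projection updates (8) and (9) are only a few lines each:

```python
import numpy as np

M = np.array([[0.0, 1.0], [-1.0, 0.0]])   # F(z) = Mz, L = 1, monotone, not cocoercive
F = lambda z: M @ z
P_C = lambda z: z / max(1.0, np.linalg.norm(z))   # C = closed unit ball
z0 = np.array([1.0, 0.0])

# FRB (8): one projection, and one new F evaluation if F(z_k) is cached; gamma < 1/(2L)
gamma = 0.3
z_prev, z = z0.copy(), z0.copy()
for k in range(1, 501):
    z, z_prev = P_C(z - 2 * gamma * F(z) + gamma * F(z_prev)), z
res_frb = np.linalg.norm(z - P_C(z - F(z)))

# RG (9): reflect first, then one projected forward step; gamma < (sqrt(2) - 1)/L
gamma = 0.3
z_prev, z = z0.copy(), z0.copy()
for k in range(1, 501):
    w = 2 * z - z_prev                     # reflection step, no projection needed
    z, z_prev = P_C(z - gamma * F(w)), z
res_rg = np.linalg.norm(z - P_C(z - F(z)))
```

For this linear toy operator the two unconstrained recursions coincide, illustrating the close relation between reflecting the iterate and reflecting the operator values.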
Despite the similarities in construction and iterate convergence, nonasymptotic convergence is less well understood in the case of single-call projection methods. For example, Banert and Boţ (2018) derived for Tseng's method an ergodic $\mathcal{O}(1/k)$ rate in terms of function values in the context of convex optimisation; see also (Böhm et al., 2022) for an ergodic $\mathcal{O}(1/\sqrt{k})$ convergence result in terms of the restricted gap function in the stochastic setting. For Malitsky's RG algorithm, convergence in terms of the gap function and the residuals like $\mathcal{O}(1/\sqrt{k})$ for the last iterate was established only recently (Cai & Zheng, 2022).

# 2.2.2. ACCELERATED SOLUTION METHODS

Extra Anchored Gradient (EAG) algorithm An accelerated algorithm for solving the monotone equation (4) that is based on EG, called the Extra Anchored Gradient (EAG) algorithm, was proposed by Yoon and Ryu (2021). It is designed by means of anchoring, a technique that can be traced back to Halpern's algorithm (1967). This iterative scheme exhibits a convergence rate of

$$
\left\|F(z_k)\right\| = \mathcal{O}\left(\frac{1}{k}\right) \quad \text{as } k \to +\infty.
$$

These considerations have been followed by an extension of EAG to the constrained setting (Cai et al., 2022a), where the authors consider for $k \geq 0$

$$
\mathrm{EAG}: \left\lfloor \begin{array}{l} w_k = P_C\left[z_k - \gamma F(z_k) + \frac{1}{k+1}(z_0 - z_k)\right] \\ z_{k+1} = P_C\left[z_k - \gamma F(w_k) + \frac{1}{k+1}(z_0 - z_k)\right] \end{array} \right.
$$

with $0 < \gamma < 1/(\sqrt{3}L)$. One can notice that the algorithm uses two operator evaluations and two projection steps, like EG, and that in the unconstrained case it coincides with its projection-free counterpart (Yoon & Ryu, 2021), maintaining the $\mathcal{O}(1/k)$ convergence rate for the gap function and the residuals.
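The anchoring mechanism is a one-line change to EG: both steps are pulled back towards the starting point $z_0$ with a vanishing weight $1/(k+1)$. The sketch below runs the constrained EAG scheme on the same toy problem used above (our own choices of operator, set and step size, not from the paper).

```python
import numpy as np

M = np.array([[0.0, 1.0], [-1.0, 0.0]])    # F(z) = Mz, L = 1
F = lambda z: M @ z
P_C = lambda z: z / max(1.0, np.linalg.norm(z))   # C = closed unit ball
gamma = 0.5                                 # 0 < gamma < 1/(sqrt(3) L)

z0 = np.array([1.0, 0.0])                   # the anchor
z = z0.copy()
for k in range(2000):
    anchor = (z0 - z) / (k + 1)             # vanishing pull towards z_0
    w = P_C(z - gamma * F(z) + anchor)
    z = P_C(z - gamma * F(w) + anchor)

res = np.linalg.norm(z - P_C(z - F(z)))     # natural residual
```

The same anchor term also appears in the ARG scheme below; only the extrapolation part changes.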
Accelerated Reflected Gradient (ARG) algorithm Similar to the nonaccelerated methods from the previous subsection, one can also reduce the number of necessary operator evaluations and projections to one each per iteration. This was done by investigating an accelerated version (Cai & Zheng, 2022) of the projected reflected gradient method (9) with convergence rate $\mathcal{O}(1/k)$. The method is given for $k \geq 1$ by

$$
\mathrm{ARG}: \left\lfloor \begin{array}{l} w_k = 2z_k - z_{k-1} + \frac{1}{k+1}(z_0 - z_k) - \frac{1}{k}(z_0 - z_{k-1}) \\ z_{k+1} = P_C\left[z_k - \gamma F(w_k) + \frac{1}{k+1}(z_0 - z_k)\right] \end{array} \right.
$$

with $0 < \gamma \leq 1/(12L)$.

It is worth mentioning that for constrained EAG (Cai et al., 2022a) and ARG (Cai & Zheng, 2022) there are no guarantees that the iterates converge to a solution, and that, in spite of the fast theoretical convergence rate, the anchor $z_0$, towards which the algorithm is pulled back in every iteration, actually slows down convergence, as we will see in the numerical experiments.

Further accelerated algorithms Further variants of anchoring based algorithms have been proposed by Tran-Dinh (2022) and, together with Luo, in (Tran-Dinh & Luo, 2021), which all exhibit the same convergence rate in terms of the operator norm as EAG for (4). Monotone inclusions are also considered in these works, with a more general operator than the normal cone, but this requires either taking a backward step or additionally asking for cocoercivity of $F$. In the same spirit, an $o(1/k)$ rate of convergence together with convergence of the iterates was shown for an accelerated version of the Krasnosel'skii-Mann algorithm (Boţ & Nguyen, 2022).

Explicit Fast OGDA (fOGDA) algorithm Another approach, different from the Halpern-type methods mentioned above, was investigated in (Boţ et al., 2022).
An appropriate (explicit) discretisation of a second-order dynamical system with vanishing damping term gives rise to an accelerated algorithm related to OGDA, called fast OGDA (fOGDA), which will constitute the starting point for our considerations in the following. The fast OGDA algorithm (Boţ et al., 2022) for solving the monotone equation (4) is given for $k \geq 1$ by

$$
\mathrm{fOGDA}: \left\lfloor \begin{array}{l} w_k = z_k + \frac{k}{k+\alpha}(z_k - z_{k-1}) - \gamma \frac{\alpha}{k+\alpha} F(w_{k-1}) \\ z_{k+1} = w_k - \gamma \frac{2k+\alpha}{k+\alpha}\left(F(w_k) - F(w_{k-1})\right) \end{array} \right.
$$

and converges to a solution of (4) for $0 < \gamma < 1/(4L)$ and $\alpha > 2$. It was shown that fOGDA exhibits convergence rates of

$$
\operatorname{Gap}(z_k) = o\left(\frac{1}{k}\right) \quad \text{and} \quad \|F(z_k)\| = o\left(\frac{1}{k}\right)
$$

as $k \to +\infty$.

# 3. Main Results

In this section, we will first motivate the changes necessary to extend fOGDA to solve the variational inequality (1) and introduce our newly proposed method, which we call fOGDA-VI. This is followed by formally stating the convergence results of fOGDA-VI – convergence of the iterates to a solution as well as convergence in terms of the restricted gap and the residuals like $o(1/k)$ for the last iterate.

# 3.1. Extending fOGDA to the Constrained Case

It can be seen empirically that incorporating one (or more) projections into the unconstrained fOGDA method in a naive way, similar to FRB or Popov, is not sufficient to obtain a solution method for (1). Instead, the idea is to introduce for every $k \geq 1$ an appropriate element $\zeta_k$ of the normal cone $N_C(z_k)$. This is done by replacing $F(w_{k-1})$ by the sum $F(w_{k-1}) + \zeta_k$.
One might think that the suitable choice would therefore be to take $\zeta_k \in N_C(w_{k-1})$, but this is not the case, which can be motivated as follows. In the unconstrained case, i.e., when solving the monotone equation (4), we want to find a sequence $(z_k)_{k \geq 1}$ yielding $\|F(z_k)\| \to 0$. In the constrained case, however, i.e., when tackling the monotone inclusion (3), we aim to establish sequences $(z_k)_{k \geq 1}, (\zeta_k)_{k \geq 1}$ with $\zeta_k \in N_C(z_k)$ such that

$$
\left\|F(z_k) + \zeta_k\right\| \to 0.
$$

# Algorithm 1 fOGDA-VI

Input: momentum parameter $\alpha > 2$; starting values $z_0, w_0 \in \mathbb{R}^d, z_1 \in C, \zeta_1 \in N_C(z_1)$; step size $0 < \gamma < 1/(4L)$; number of iterations $K > 1$.

for $k = 1$ to $K$ do

Compute

$$
w_k = z_k + \frac{k}{k+\alpha}(z_k - z_{k-1}) - \gamma \frac{\alpha}{k+\alpha}\left(F(w_{k-1}) + \zeta_k\right)
$$

$$
z_{k+1} = P_C\left[w_k - \gamma\left(1 + \frac{k}{k+\alpha}\right)\left(F(w_k) - F(w_{k-1}) - \zeta_k\right)\right]
$$

$$
\zeta_{k+1} = \frac{k+\alpha}{\gamma(2k+\alpha)}\left(w_k - z_{k+1}\right) - \left(F(w_k) - F(w_{k-1}) - \zeta_k\right)
$$

end for

Then instead of fOGDA we obtain an algorithm which is given for every $k \geq 1$ by

$$
w_k = z_k + \frac{k}{k+\alpha}(z_k - z_{k-1}) - \gamma \frac{\alpha}{k+\alpha}\left(F(w_{k-1}) + \zeta_k\right),
$$

$$
z_{k+1} = w_k - \gamma \frac{2k+\alpha}{k+\alpha}\left(F(w_k) - F(w_{k-1}) + \zeta_{k+1} - \zeta_k\right). \tag{10}
$$

At first glance this method seems to be implicit, as both $z_{k+1}$ and $\zeta_{k+1}$ appear on the same line. However, the second line in (10) can be used to formulate a projection step (see Appendix B.1 for details).
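Algorithm 1 can be transcribed almost literally into code. The following numpy sketch runs it on an illustrative toy instance of (1) of our own choosing ($F(z) = Mz$ with a $2 \times 2$ rotation matrix, $C$ the closed unit ball, $\alpha = 10$, $\gamma = 0.2$); the initialisation takes $z_1 \in C$ and $\zeta_1 = 0$.

```python
import numpy as np

M = np.array([[0.0, 1.0], [-1.0, 0.0]])   # F(z) = Mz, L = 1, monotone, not cocoercive
F = lambda z: M @ z
P_C = lambda z: z / max(1.0, np.linalg.norm(z))   # C = closed unit ball

alpha = 10.0          # momentum parameter alpha > 2 (our choice)
gamma = 0.2           # step size, 0 < gamma < 1/(4L)

z_prev = np.array([1.0, 0.0])   # z_0
w_prev = z_prev.copy()          # w_0
z = z_prev.copy()               # z_1, already in C
zeta = np.zeros(2)              # zeta_1 = 0 is admissible since z_1 in C

for k in range(1, 5001):
    c = 1.0 + k / (k + alpha)              # equals (2k + alpha)/(k + alpha)
    Fw_prev = F(w_prev)
    w = z + k / (k + alpha) * (z - z_prev) \
        - gamma * alpha / (k + alpha) * (Fw_prev + zeta)
    d = F(w) - Fw_prev - zeta
    z_next = P_C(w - gamma * c * d)
    zeta = (w - z_next) / (gamma * c) - d  # explicit normal cone element zeta_{k+1}
    z_prev, z, w_prev = z, z_next, w

res = np.linalg.norm(z - P_C(z - F(z)))    # natural residual Res(z_k)
```

Note that whenever the projection is inactive the update for $\zeta_{k+1}$ returns zero, so the scheme then reduces to the unconstrained fOGDA iteration, as expected.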
Even though from the perspective of (13) the appearance of the normal cone element is a natural consequence of the projection, finding its correct formulation is a highly non-trivial task. These considerations give rise to Algorithm 1, which we call fOGDA-VI.

The two main differences of our proposed method – the introduction of a projection at a specific spot as well as the explicit computation of a particular element in the normal cone – are both necessary. Changing (or even neglecting) either of them results in algorithms that fail to converge in general.

Remark 3.1. Initialisation of fOGDA-VI is easy, as for general $z_1 \in C$ it is sufficient to take $\zeta_1 = 0$. With arbitrary $\hat{z} \in \mathbb{R}^d$, one can also take $z_1 := P_C(\hat{z})$ and $\zeta_1 := \hat{z} - z_1 \in N_C(z_1)$.

# 3.2. Convergence Statements

The first main result concerns the convergence of the sequence of iterates to an element of $\Omega$.

Theorem 3.2. Let $(z_k)_{k\geq 0}$ be the sequence generated by Algorithm 1. Then $(z_k)_{k\geq 0}$ converges to a solution of (1).

The central idea of the proof is to define an appropriate family of energy functions $(\mathcal{G}_{\lambda,k})_{k\geq 0}$, where $\lambda \geq 0$ is a parameter that depends on $\alpha$, which dissipate over the course of the algorithm, in order to obtain convergence or summability of various helpful quantities. Even though we are not able to enforce an actual nonincreasing property on the family of discrete energies $(\mathcal{G}_{\lambda,k})_{k\geq 0}$, we can at least show that for every $k \geq 0$

$$
\mathcal{G}_{\lambda,k+1} \leq (1 + d_{\lambda,k})\,\mathcal{G}_{\lambda,k} - b_{\lambda,k}, \tag{11}
$$

with some sequences $(b_{\lambda,k})_{k\geq 0}$ and $(d_{\lambda,k})_{k\geq 0}$.
The aim is to control these two sequences in such a way that we can still derive beneficial asymptotic results for $(\mathcal{G}_{\lambda,k})_{k\geq 0}$. As the additional terms are not necessarily nonnegative, a novel Lyapunov analysis is needed. For instance, we show that there exist $0 \leq \underline{\lambda}(\alpha) < \overline{\lambda}(\alpha) \leq (3\alpha - 2)/4$ such that every $\underline{\lambda}(\alpha) < \lambda < \overline{\lambda}(\alpha)$ provides an energy function $(\mathcal{G}_{\lambda,k})_{k\geq 0}$ that is bounded from below, and nonnegative sequences $(b_{\lambda,k})_{k\geq 0}$ and $(d_{\lambda,k})_{k\geq 0}$ with $\sum_{k\geq 0} d_{\lambda,k} < +\infty$, such that inequality (11) holds for $k$ large enough. This allows us to conclude that $\lim_{k\to+\infty} \mathcal{G}_{\lambda,k} \in \mathbb{R}$ exists; see Lemma A.2 for more details.

From this we can then verify that the first condition of Opial's lemma, see Lemma A.3, is fulfilled, while its second condition follows from the maximal monotonicity of $F + N_C$, see Proposition A.4.

The asymptotic convergence of the iterates is complemented by statements about convergence rates in terms of the restricted gap as well as the natural residual for the last iterate.

Theorem 3.3. Let $z^* \in \Omega$ be a solution of (1) and let $(z_k)_{k \geq 0}$ be the sequence generated by Algorithm 1. Then, as $k \to +\infty$, we have

$$
\operatorname{Gap}(z_k) = o\left(\frac{1}{k}\right) \quad \text{and} \quad \operatorname{Res}(z_k) = o\left(\frac{1}{k}\right).
$$

Remark 3.4. The tangent residual exhibits the same last iterate convergence rate as the natural residual, i.e.,

$$
r(z_k) = o\left(\frac{1}{k}\right).
$$

In fact, we use the observation $\operatorname{Res}(z) \leq \|F(z) + \zeta\|$, see (14), in the proof of Theorem 3.3 to obtain the convergence rate for the natural residual.
As the restricted gap and the natural residual are the quantities mostly used in the literature (and are arguably the more intuitive ones) to quantify the convergence behaviour of numerical methods for variational inequalities, we opted to present Theorem 3.3 in the above manner.

# 4. Numerical Experiments

In this section we provide two numerical experiments to complement our theoretical results. In the first one we treat a two-player zero sum game, which amounts to solving a bilinear saddle point problem constrained by standard simplices. The second one consists of the application of fOGDA-VI to the training of GANs.

# 4.1. Two-player Zero Sum Game

![](images/5a09476315713f09c4bffa3324976b01bf7b4e431a0a4921545d970cf12be271.jpg)
Figure 1. Comparison of different momentum parameters $\alpha > 2$ in Algorithm 1 in terms of the natural residual.

We aim to solve a two-player zero sum game with mixed strategies, which means that we need to solve the following bilinear saddle point problem,

$$
\min_{x \in \Delta^m} \max_{y \in \Delta^n} \Phi(x, y) := x^T A y, \tag{12}
$$

where $A \in \mathbb{R}^{m \times n}$ is a given pay-off matrix and $\Delta^d = \{v \in \mathbb{R}_+^d \mid \sum_{i=1}^d v_i = 1\}$ denotes the $d$-dimensional standard simplex. Recall that a solution of (12) is given by a saddle point $(x^*, y^*) \in \Delta^m \times \Delta^n$ satisfying

$$
\Phi(x^*, y) \leq \Phi(x^*, y^*) \leq \Phi(x, y^*) \quad \forall (x, y) \in \Delta^m \times \Delta^n.
$$

This leads to a monotone inclusion problem (3) with $C = \Delta^m \times \Delta^n$ and

$$
F: \mathbb{R}^m \times \mathbb{R}^n \to \mathbb{R}^m \times \mathbb{R}^n, \quad \left( \begin{array}{c} x \\ y \end{array} \right) \mapsto \left( \begin{array}{c} \nabla_x \Phi(x, y) \\ -\nabla_y \Phi(x, y) \end{array} \right),
$$

which gives

$$
F(x, y) = \left( \begin{array}{c} A y \\ -A^T x \end{array} \right) = \left( \begin{array}{cc} 0 & A \\ -A^T & 0 \end{array} \right) \left( \begin{array}{c} x \\ y \end{array} \right).
$$

![](images/430ce8f6f8f761b91aab0c9721d0f97904ee714f636a7d0ac68d4e1d8e4feefd.jpg)
Figure 2. Comparison of different methods in terms of the natural residual.

Notice that $F$ is Lipschitz continuous but not cocoercive, so the regular Projected Gradient Descent Ascent algorithm (PGDA), which in this case is nothing else than the FB algorithm, indeed cannot be applied.

For our experiments we choose $m = n = 50$ and $A$ to have entries drawn from the uniform distribution on the half-open interval $[0, 1)$. As the parameter $\alpha > 2$ can be chosen arbitrarily, we compare different values in terms of the natural residual in Figure 1 to gain more insight. Note that the theoretical considerations do not impose any upper bound on $\alpha$. As it turns out, after an initial period in which all values of $\alpha$ perform similarly, larger choices of $\alpha$ seem to give better results with faster convergence (even though the convergence rate of $o(1/k)$ is the same for all choices).

Table 1. Comparison of fOGDA-VI with LA-GDA in terms of Fréchet Inception Distance (FID; lower is better) and Inception Score (IS; higher is better). We report the best obtained scores, averaged over 5 runs with 500,000 iterations each.
For all considered methods we evaluated the last (non averaged) iterates, the uniform average, the exponential moving average (EMA) and the EMA on the "slow weights" for the method incorporating "lookahead". Best scores for each metric are in boldface.

| Method | FID non avg. | FID uniform avg. | FID EMA | FID EMA-slow | IS non avg. | IS uniform avg. | IS EMA | IS EMA-slow |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| fOGDA-VI | 18.49 ± 1.09 | 17.38 ± 1.69 | 18.51 ± 1.13 | – | 7.82 ± .07 | 8.7 ± .15 | 8.1 ± .15 | – |
| LA-GDA | 16.7 ± .67 | 16.02 ± .84 | 16.84 ± .71 | **15.31 ± 1.27** | 7.88 ± .08 | **8.76 ± .19** | 8.29 ± .07 | 8.59 ± .1 |

When going to "extremely big" choices of $\alpha$ we could observe not only a further boost in convergence speed, but also increased oscillatory behaviour after a certain point. Whether fOGDA-VI for $\alpha \to +\infty$ amounts to a convergent method is not obvious; for the unconstrained fOGDA one can see from (Boţ et al., 2022) that this limit would lead to the unaccelerated OGDA method.

Concluding, we show in Figure 2 a comparison of the different methods that we have encountered in Sections 2.2.1 and 2.2.2, where we report results in terms of the natural residual. We see that fOGDA-VI clearly outperforms all other methods while requiring only one evaluation of $F$ and one projection in each iteration.

# 4.2. GAN Training

Generative Adversarial Networks (GANs) (Goodfellow et al., 2014) form a powerful class of generative models that can produce, for example, unseen realistic images. Originally the problem was posed as a zero sum game between two adversarial players, represented by two neural networks called generator and discriminator, which try to minimise and maximise the same loss function, respectively. The minimax structure of the underlying optimisation problem generally leads to cycling behaviour during the training process, making GANs notoriously hard to optimise (Mescheder et al., 2017; 2018). As it has been shown empirically that principled methods for solving variational inequalities (Gidel et al., 2019) and monotone inclusions (Böhm et al., 2022) can be beneficial in the training process, we apply our proposed algorithm fOGDA-VI to train ResNet architectures on the CIFAR-10 dataset.

For our experiments we use ResNet (He et al., 2016) architectures, see Appendix C.1, with the hinge version of the adversarial non-saturating loss (Miyato et al., 2018), trained on the CIFAR-10 (Krizhevsky, 2009) data set, which consists of 60,000 $(32 \times 32 \times 3)$-images in 10 classes, with 6,000 images per class.
The metrics used to evaluate the generated images are the Inception Score (Salimans et al., 2016) (IS; higher is better) and the Fréchet Inception Distance (Heusel et al., 2017) (FID; lower is better), both computed on 50,000 samples in their original implementations.

Furthermore, in our experiments, instead of plain stochastic gradients we use the Adam optimiser (Kingma & Ba, 2014) with parameters $\beta_1 = 0$ and $\beta_2 = 0.9$, which were used in recent experiments (Chavdarova et al., 2021b) outperforming the class-dependent BigGAN (Brock et al., 2019) model on CIFAR-10. Additionally, we keep the batch size and the ratio of discriminator and generator updates the same as in (Chavdarova et al., 2021b).

Since for the GAN experiments we work with mini batch updates instead of full gradients, we perform significantly more steps in order to incorporate the entire gradient information. Because of this the iteration counter $k$ in Algorithm 1 might grow too large too soon, so we conducted experiments in which $k$ is incremented only every $n$-th step, for different choices of $n$. Furthermore, we also did a hyperparameter search regarding the learning rate and the momentum parameter $\alpha > 2$ in Algorithm 1.

Table 2. Overall best obtained scores in terms of Fréchet Inception Distance (FID; lower is better) and Inception Score (IS; higher is better) for the last (non averaged) iterates. Best scores for each metric are in boldface.

| Method | FID | IS |
| --- | --- | --- |
| fOGDA-VI | 15.69 | 8.91 |
| LA-GDA | **14.09** | **9.06** |

When performing the hyperparameter search for the momentum parameter $\alpha$, we found that, just as in the theoretically justified setting of Section 4.1, bigger values seemed to perform better. Regarding the frequency of iterator updates we likewise observed better behaviour for bigger values of $n$ in general. The parameters we used for the fOGDA-VI experiments were $\alpha = 100$, $n = 1000$ and a learning rate of $\gamma = 0.0001$. We compared the results obtained by fOGDA-VI with the best method from (Chavdarova et al., 2021b), a variant of Gradient Descent Ascent incorporating averaging during training which they call "lookahead", resulting in a method we denote by LA-GDA, and for whose convergence properties no theoretical evidence is available. For the LA-GDA experiments we kept all hyperparameters as reported by Chavdarova et al. (2021b). For both methods we conducted 5 runs with 500,000 iterations each.

![](images/10566333c46b4618e717fc68e4a432715762ca937a786597d2338c37998480cb.jpg)
(a) FID.

![](images/2e4d528593c33331882449310fd1a931cb97e31f2f0c1be7dead9209729d3585.jpg)
(b) IS.
Figure 3. The median and the individual runs are illustrated with thicker solid lines and transparent lines, respectively. We report (a) FID and (b) IS for the last (non averaged) iterates.

To further reduce the cycling characteristics of GAN training, two techniques commonly used in practice are the (uniform) averaging and the exponential moving averaging (EMA) of the network weights. The beneficial effects of uniform and exponential averaging can also be observed in our experiments (see Table 1); however, uniform averaging seems to have the stronger impact on the scores.

In Table 1 we report a comparison of fOGDA-VI with LA-GDA in terms of the Fréchet Inception Distance (FID; lower is better) (Heusel et al., 2017) and the Inception Score (IS; higher is better) (Salimans et al., 2016).
We report the best obtained scores, averaged over 5 runs with 500,000 iterations each. For both methods we evaluated the last (non averaged) iterates, the uniform average, the exponential moving average (EMA) and, for LA-GDA, the EMA on the "slow weights". One can see that while both considered methods give comparable results and show similar behaviour, the best score for both FID and IS is obtained by LA-GDA. An interesting observation is that the FID scores obtained by LA-GDA seem to be significantly worse than those reported in (Chavdarova et al., 2021b), while the IS values are higher than the ones reported there.

The scores reported in Table 2, where we list the overall best obtained values for both metrics, support the observations from Table 1. In general, the results for fOGDA-VI and LA-GDA are on a similar level, with the latter giving the altogether best scores.

Figure 3 shows all five individual runs and the respective median for both methods in terms of (a) FID and (b) IS. It can be observed that LA-GDA achieves better results than fOGDA-VI during the first 200,000 iterations, while from then on both methods achieve similar scores. As it appears, the medians of fOGDA-VI stay more consistently on the level of the optimal scores, while LA-GDA worsens again over time.

# 5. Conclusion

In this work we proposed a novel algorithm, called fOGDA-VI, to solve monotone variational inequalities, which recovers the explicit fOGDA method from (Boţ et al., 2022) in the unconstrained case. We showed that fOGDA-VI exhibits a better rate of convergence than other accelerated methods, namely a rate of $o(1/k)$ in terms of the restricted gap function and the tangent and natural residuals, while still maintaining convergence of the iterates to a solution of the variational inequality under investigation.
To validate our method in practice we treated a constrained bilinear minimax problem, a theoretically justified task on which fOGDA-VI showed superior behaviour. Moreover, the application of fOGDA-VI to the training of GANs gives promising results even in practical settings that do not warrant the required assumptions.

# Acknowledgements

The authors would like to thank the anonymous reviewers and the program chairs for their valuable suggestions and comments that have improved the quality of the paper.

Michael Sedlmayer would like to acknowledge support from the Austrian Research Promotion Agency (FFG), project "Smart operation of wind turbines under icing conditions (SOWINDIC)". Dang-Khoa Nguyen would like to acknowledge support from the Austrian Science Fund (FWF), project P 34922. Radu Ioan Boţ would like to acknowledge partial support from the Austrian Science Fund (FWF), projects W 1260 and P 34922.

# References

Antipin, A. S. On a method for convex programs using a symmetrical modification of the Lagrange function. *Ekonomika i Matematicheskie Metody*, 12(6):1164-1173, 1976.
Banert, S. and Boţ, R. I. A forward-backward-forward differential equation and its asymptotic properties. Journal of Convex Analysis, 25(2):371-388, 2018.
Bauschke, H. H. and Combettes, P. L. Convex analysis and monotone operator theory in Hilbert spaces. Springer, New York, 2011.
Bauschke, H. H. and Combettes, P. L. Convex analysis and monotone operator theory in Hilbert spaces, Second Edition. Springer, Cham, 2017.
Böhm, A., Sedlmayer, M., Csetnek, E. R., and Boţ, R. I. Two steps at a time – taking GAN training in stride with Tseng's method. SIAM Journal on Mathematics of Data Science, 4(2):750-771, 2022.
Boţ, R. I. and Nguyen, D.-K. Fast Krasnosel'skii-Mann algorithm with a convergence rate of the fixed point iteration of $o(1/k)$. arXiv preprint arXiv:2206.09462, 2022.
Boţ, R. I., Csetnek, E. R., and Nguyen, D.-K. Fast OGDA in continuous and discrete time.
arXiv preprint arXiv:2203.10947, 2022. +Brock, A., Donahue, J., and Simonyan, K. Large scale GAN training for high fidelity natural image synthesis. In International Conference on Learning Representations, 2019. URL https://openreview.net/forum?id=B1xsqj09Fm. +Cai, Y. and Zheng, W. Accelerated single-call methods for constrained min-max optimization. In OPT 2022: Optimization for Machine Learning (NeurIPS 2022 Workshop), 2022. +Cai, Y., Oikonomou, A., and Zheng, W. Accelerated algorithms for monotone inclusions and constrained nonconvex-nonconcave min-max optimization. In OPT 2022: Optimization for Machine Learning (NeurIPS 2022 Workshop), 2022a. + +Cai, Y., Oikonomou, A., and Zheng, W. Tight last-iterate convergence of the extragradient and the optimistic gradient descent-ascent algorithm for constrained monotone variational inequalities. arXiv preprint arXiv:2204.09228, 2022b. +Chavdarova, T., Jordan, M. I., and Zampetakis, M. Last-iterate convergence of saddle point optimizers via high-resolution differential equations. In The 13th Annual Workshop on Optimization for Machine Learning (OPT2021), 2021a. +Chavdarova, T., Pagliardini, M., Stich, S. U., Fleuret, F., and Jaggi, M. Taming GANs with lookahead-minmax. In International Conference on Learning Representations, 2021b. URL https://openreview.net/forum?id=ZW0yXJyNmoG. +Daskalakis, C. and Panageas, I. The limit points of (optimistic) gradient descent in min-max optimization. In Advances in Neural Information Processing Systems, pp. 9236-9246, 2018. +Daskalakis, C., Ilyas, A., Syrgkanis, V., and Zeng, H. Training GANs with optimism. In International Conference on Learning Representations, 2018. URL https://openreview.net/forum?id=SJJySbbAZ. +Facchinei, F. and Pang, J.-S. Finite-dimensional variational inequalities and complementarity problems. Springer, 2003. +Gidel, G., Berard, H., Vignoud, G., Vincent, P., and Lacoste-Julien, S. A variational inequality perspective on Generative Adversarial Networks. 
In International Conference on Learning Representations, 2019. URL https://openreview.net/forum?id=r1laEnA5Ym.
Golowich, N., Pattathil, S., and Daskalakis, C. Tight last-iterate convergence rates for no-regret learning in multiplayer games. Advances in Neural Information Processing Systems, 33:20766-20778, 2020a.
Golowich, N., Pattathil, S., Daskalakis, C., and Ozdaglar, A. Last iterate is slower than averaged iterate in smooth convex-concave saddle point problems. In Conference on Learning Theory, pp. 1758-1784. PMLR, 2020b.
Goodfellow, I. NIPS 2016 tutorial: Generative adversarial networks. arXiv:1701.00160, 2016.
Goodfellow, I., Pouget-Abadie, J., Mirza, M., Xu, B., Warde-Farley, D., Ozair, S., Courville, A., and Bengio, Y. Generative adversarial nets. In Advances in Neural Information Processing Systems, volume 27, pp. 2672-2680, 2014.

Gorbunov, E., Loizou, N., and Gidel, G. Extragradient method: $\mathcal{O}(1/k)$ last-iterate convergence for monotone variational inequalities and connections with cocoercivity. In International Conference on Artificial Intelligence and Statistics, pp. 366-402. PMLR, 2022.
Halpern, B. Fixed points of nonexpanding maps. Bulletin of the American Mathematical Society, 73(6):957-961, 1967.
He, K., Zhang, X., Ren, S., and Sun, J. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770-778, 2016.
Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., and Hochreiter, S. GANs trained by a two time-scale update rule converge to a local Nash equilibrium. In Advances in Neural Information Processing Systems, volume 30, pp. 6626-6637, 2017.
Hsieh, Y.-G., Iutzeler, F., Malick, J., and Mertikopoulos, P. On the convergence of single-call stochastic extra-gradient methods. Advances in Neural Information Processing Systems, 32, 2019.
Kingma, D. P. and Ba, J. Adam: A method for stochastic optimization. arXiv:1412.6980, 2014.
Korpelevich, G. M.
The extragradient method for finding saddle points and other problems. *Ekonomika i Matematicheskie Metody*, 12:747-756, 1976. +Krizhevsky, A. Learning multiple layers of features from tiny images. Master's thesis, University of Toronto, Canada, 2009. +Madry, A., Makelov, A., Schmidt, L., Tsipras, D., and Vladu, A. Towards deep learning models resistant to adversarial attacks. In International Conference on Learning Representations, 2018. URL https://openreview.net/forum?id=rJzIBfZAb. +Malitsky, Y. Projected reflected gradient methods for monotone variational inequalities. SIAM Journal on Optimization, 25(1):502-520, 2015. +Malitsky, Y. and Tam, M. K. A forward-backward splitting method for monotone inclusions without cocoercivity. SIAM Journal on Optimization, 30(2):1451-1472, 2020. +Mertikopoulos, P., Papadimitriou, C., and Piliouras, G. Cycles in adversarial regularized learning. In Proceedings of the Twenty-Ninth Annual ACM-SIAM Symposium on Discrete Algorithms, pp. 2703-2717. SIAM, 2018. +Mescheder, L., Nowozin, S., and Geiger, A. The numerics of GANs. In Advances in Neural Information Processing Systems, volume 30, 2017. URL https://proceedings.neurips.cc/paper/2017/file/4588e674d3f0faf985047d4c3f13ed0d-Paper.pdf. +Mescheder, L., Geiger, A., and Nowozin, S. Which training methods for GANs do actually converge? In International Conference on Machine Learning, 2018. +Miyato, T., Kataoka, T., Koyama, M., and Yoshida, Y. Spectral normalization for generative adversarial networks. In International Conference on Learning Representations, 2018. URL https://openreview.net/forum?id=B1QRgziT-. +Mokhtari, A., Ozdaglar, A. E., and Pattathil, S. Convergence rate of $\mathcal{O}(1 / k)$ for optimistic gradient and extra-gradient methods in smooth convex-concave saddle point problems. SIAM Journal on Optimization, 30(4):3230-3251, 2020. +Morgenstern, O. and Von Neumann, J. Theory of games and economic behavior. Princeton University Press, 1953. +Nemirovski, A. 
Prox-method with rate of convergence $\mathcal{O}(1 / t)$ for variational inequalities with Lipschitz continuous monotone operators and smooth convex-concave saddle point problems. SIAM Journal on Optimization, 15(1):229-251, 2004. +Nesterov, Y. Dual extrapolation and its applications to solving variational inequalities and related problems. Mathematical Programming, 109(2-3):319-344, 2007. +Omidshafiei, S., Pazis, J., Amato, C., How, J. P., and Vian, J. Deep decentralized multi-task multi-agent reinforcement learning under partial observability. In International Conference on Machine Learning, pp. 2681-2690. PMLR, 2017. +Opial, Z. Weak convergence of the sequence of successive approximations for nonexpansive mappings. Bulletin of the American Mathematical Society, 73(4):591-597, 1967. +Ouyang, Y. and Xu, Y. Lower complexity bounds of first-order methods for convex-concave bilinear saddle-point problems. Mathematical Programming, 185(1):1-35, 2021. +Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., Desmaison, A., Kopf, A., Yang, E., DeVito, Z., Raison, M., Tejani, A., Chilamkurthy, S., Steiner, B., Fang, L., Bai, J., and Chintala, S. PyTorch: An imperative style, high-performance deep learning library. In Advances in Neural Information Processing Systems 32, pp. 8024-8035, 2019. URL http://papers.neurips.cc/paper/9015-pytorch-an-imperative-style-high-performance-deep-learning-library.pdf. +Popov, L. D. A modification of the Arrow-Hurwicz method for search of saddle points. Mathematical notes of the Academy of Sciences of the USSR, 28(5):845-848, 1980. +Salimans, T., Goodfellow, I., Zaremba, W., Cheung, V., Radford, A., and Chen, X. Improved techniques for training GANs. In Advances in Neural Information Processing Systems, pp. 2234-2242, 2016. +Tran-Dinh, Q. The connection between Nesterov's accelerated methods and Halpern fixed-point iterations. arXiv preprint arXiv:2203.04869, 2022. 
+Tran-Dinh, Q. and Luo, Y. Halpern-type accelerated and splitting algorithms for monotone inclusions. arXiv preprint arXiv:2110.08150, 2021. +Tseng, P. A modified forward-backward splitting method for maximal monotone mappings. SIAM Journal on Control and Optimization, 38(2):431-446, 2000. +Yoon, T. and Ryu, E. K. Accelerated algorithms for smooth convex-concave minimax problems with $\mathcal{O}(1 / k^2)$ rate on squared gradient norm. In International Conference on Machine Learning, pp. 12098-12109. PMLR, 2021. + +# A. Auxiliary Results + +In the following we present auxiliary results that will be necessary for the convergence analysis in Appendix B. + +We start this section with a result which characterises the convergence of sequences. + +Lemma A.1 (see Lemma 32 in (Boţ et al., 2022)). Let $a \geq 1$ and $(q_k)_{k \geq 1}$ be a bounded sequence in $\mathbb{R}^d$ such that + +$$ +\lim _ {k \rightarrow + \infty} \left(q _ {k + 1} + \frac {k}{a} (q _ {k + 1} - q _ {k})\right) = p \in \mathbb {R} ^ {d}. +$$ + +Then $\lim_{k\to +\infty}q_k = p$. + +The following result is a particular instance of Lemma 5.31 in (Bauschke & Combettes, 2017). + +Lemma A.2. Let $(a_{k})_{k\geq 1}$, $(b_{k})_{k\geq 1}$ and $(d_{k})_{k\geq 1}$ be sequences of real numbers. Assume that $(a_{k})_{k\geq 1}$ is bounded from below, and that $(b_{k})_{k\geq 1}$ and $(d_{k})_{k\geq 1}$ are nonnegative sequences such that $\sum_{k\geq 1}d_k < +\infty$. If + +$$ +a _ {k + 1} \leq (1 + d _ {k}) a _ {k} - b _ {k} \quad \forall k \geq 1, +$$ + +then the following statements are true: + +(i) the sequence $(b_{k})_{k\geq 1}$ is summable, i.e., $\sum_{k\geq 1}b_k < +\infty$; +(ii) the sequence $(a_{k})_{k\geq 1}$ is convergent. + +To show convergence of the sequence of iterates we will use the following result, which is a finite dimensional version of the so-called Opial Lemma (Opial, 1967). + +Lemma A.3. 
Let $S \subseteq \mathbb{R}^d$ be a nonempty set and $(z_k)_{k \geq 1} \subseteq \mathbb{R}^d$ a sequence such that the following two conditions hold: + +(i) for every $z^{*}\in S$, $\lim_{k\to +\infty}\| z_k - z^*\|$ exists; +(ii) every cluster point of $(z_{k})_{k\geq 1}$ belongs to $S$. + +Then $(z_{k})_{k\geq 1}$ converges to an element of $S$. + +The following result about maximal monotone operators will be crucial in the convergence analysis to verify item (ii) from Lemma A.3. + +Proposition A.4 (see Proposition 20.33 in (Bauschke & Combettes, 2011)). Let $A: \mathbb{R}^d \to 2^{\mathbb{R}^d}$ be maximal monotone, with $\operatorname{gra} A$ denoting the graph of $A$. Then $\operatorname{gra} A$ is closed, i.e., for every sequence $(z_k, v_k)_{k \geq 1}$ in $\operatorname{gra} A$ and every $(z, v) \in \mathbb{R}^d \times \mathbb{R}^d$, if $z_k \to z$ and $v_k \to v$, then $(z, v) \in \operatorname{gra} A$. + +Lemma A.5. Let $a, b, c \in \mathbb{R}$ be such that $a \neq 0$ and $b^2 - ac \leq 0$. The following statements are true: + +(i) if $a > 0$, then + +$$ +a \left\| x \right\| ^ {2} + 2 b \left\langle x, y \right\rangle + c \left\| y \right\| ^ {2} \geq 0 \quad \forall x, y \in \mathbb {R} ^ {d}; +$$ + +(ii) if $a < 0$, then + +$$ +a \| x \| ^ {2} + 2 b \langle x, y \rangle + c \| y \| ^ {2} \leq 0 \quad \forall x, y \in \mathbb {R} ^ {d}. +$$ + +# B. Convergence Analysis + +In the following section we establish the convergence analysis needed to prove the main theoretical results of this work, Theorem 3.2 and Theorem 3.3. + +# B.1. Notation and Preliminary Considerations + +We start by proving that indeed + +$$ +\operatorname{Res}(z) \leq r (z) \quad \forall z \in \mathbb {R} ^ {d}. +$$ + +To this end it is sufficient to consider only the case when $z \in C$, as otherwise the inequality trivially holds. 
Let $z \in C$ and $\zeta \in N_C(z)$. Then, by the following equivalence (see Proposition 6.45 in (Bauschke & Combettes, 2011)) + +$$ +p = P _ {C} (v) \quad \Leftrightarrow \quad \exists \zeta \in N _ {C} (p) \text { such that } v - p = \zeta , \tag {13} +$$ + +we obtain that $z = P_{C}[z + \zeta]$. Since the projection onto a nonempty closed convex set is nonexpansive, we then deduce that + +$$ +\operatorname{Res}(z) = \| z - P _ {C} [ z - F (z) ] \| = \| P _ {C} [ z + \zeta ] - P _ {C} [ z - F (z) ] \| \leq \| F (z) + \zeta \|. \tag {14} +$$ + +Since $\zeta \in N_C(z)$ was arbitrary, we conclude that $\operatorname{Res}(z) \leq r(z)$. + +The characterisation (13) can also be used to deduce a projection step from the second line of (10). For all $k \geq 1$ we have + +$$ +z _ {k + 1} = w _ {k} - \gamma \left(1 + \frac {k}{k + \alpha}\right) (F (w _ {k}) - F (w _ {k - 1}) + \zeta_ {k + 1} - \zeta_ {k}) +$$ + +with $\zeta_k \in N_C(z_k)$, which, using (13), is equivalent to + +$$ +z _ {k + 1} = P _ {C} \left[ w _ {k} - \gamma \left(1 + \frac {k}{k + \alpha}\right) \left(F (w _ {k}) - F (w _ {k - 1}) - \zeta_ {k}\right) \right], +$$ + +using that for $\lambda >0$ + +$$ +\zeta \in N _ {C} (z) \quad \Leftrightarrow \quad \lambda \zeta \in N _ {C} (z). +$$ + +In the following for every $k \geq 1$ we will use the notation + +$$ +v _ {k} := F \left(w _ {k - 1}\right) + \zeta_ {k}, \tag {15} +$$ + +which we plug into Algorithm 1 to obtain + +$$ +w _ {k} = z _ {k} + \frac {k}{k + \alpha} \left(z _ {k} - z _ {k - 1}\right) - \gamma \frac {\alpha}{k + \alpha} v _ {k}, \tag {16a} +$$ + +$$ +z _ {k + 1} = w _ {k} - \gamma \left(1 + \frac {k}{k + \alpha}\right) \left(v _ {k + 1} - v _ {k}\right). \tag {16b} +$$ + +Since $0 < \gamma < 1/(4L)$, there exists $0 < \varepsilon < 1$ such that + +$$ +\gamma = \frac {1 - \varepsilon}{4 L}. 
\tag {17} +$$ + +Hence, the definition of $v_{k}$ together with the Lipschitz continuity of $F$ gives us + +$$ +\begin{array}{l} \left\| \zeta_ {k + 1} + F \left(z _ {k + 1}\right) - v _ {k + 1} \right\| = \left\| F \left(z _ {k + 1}\right) - F \left(w _ {k}\right) \right\| \leq L \| z _ {k + 1} - w _ {k} \| \tag {18} \\ \leq 2 \gamma L \left\| v _ {k + 1} - v _ {k} \right\| \leq \frac {1 - \varepsilon}{2} \left\| v _ {k + 1} - v _ {k} \right\| \leq \frac {1}{2} \left\| v _ {k + 1} - v _ {k} \right\| \leq \left\| v _ {k + 1} - v _ {k} \right\|. \\ \end{array} +$$ + +By summing up (16a) and (16b) we find for every $k \geq 1$ + +$$ +(k + \alpha) (z _ {k + 1} - z _ {k}) - k (z _ {k} - z _ {k - 1}) = - \alpha \gamma v _ {k + 1} - 2 \gamma k (v _ {k + 1} - v _ {k}). \tag {19} +$$ + +Let $0 \leq \lambda \leq \alpha - 1$ and $z^{*} \in \Omega$. Following (Boţ et al., 2022) we denote for every $k \geq 1$ + +$$ +u _ {\lambda , k} := 2 \lambda \left(z _ {k} - z ^ {*}\right) + 2 k \left(z _ {k} - z _ {k - 1}\right) + \frac {3 \alpha - 2}{\alpha - 1} \gamma k v _ {k}, \tag {20} +$$ + +$$ +\begin{array}{l} \mathcal {E} _ {\lambda , k} := \frac {1}{2} \| u _ {\lambda , k} \| ^ {2} + 2 \lambda (\alpha - 1 - \lambda) \| z _ {k} - z ^ {*} \| ^ {2} + \frac {2 (\alpha - 2)}{\alpha - 1} \lambda \gamma k \left\langle z _ {k} - z ^ {*}, v _ {k} \right\rangle \tag {21} \\ + \frac {\alpha - 2}{\alpha - 1} \gamma^ {2} k \left(\frac {3 \alpha - 2}{2 (\alpha - 1)} k + \alpha\right) \| v _ {k} \| ^ {2}, \\ \end{array} +$$ + +$$ +\begin{array}{l} \mathcal {G} _ {\lambda , k} := \mathcal {E} _ {\lambda , k} - \frac {2 (\alpha - 2)}{\alpha - 1} \gamma k ^ {2} \left\langle z _ {k} - z _ {k - 1}, F \left(z _ {k}\right) - F \left(w _ {k - 1}\right) \right\rangle \tag {22} \\ + \frac {\alpha - 2}{\alpha - 1} \gamma^ {2} k \sqrt {k} \left((1 - \varepsilon) \sqrt {k} + \alpha\right) \| v _ {k} - v _ {k - 1} \| ^ {2}. \\ \end{array} +$$ + +# B.2. 
Properties of the Energy Functions + +We collect the properties of the energy functions $(\mathcal{E}_{\lambda ,k})_{k\geq 1}$ and $(\mathcal{G}_{\lambda ,k})_{k\geq 1}$ in the following results. Please note that the statement of the first lemma can be deduced from eq. (81) in (Boţ et al., 2022), taking into account an appropriate formal correspondence between certain quantities. For better comprehensibility, and in order to be able to refer to its equations later on, we nevertheless provide its proof. + +Lemma B.1. Let $z^{*}\in \Omega$ and $(z_{k})_{k\geq 0}$ be the sequence generated by Algorithm 1. For $0\leq \lambda \leq \alpha -1$, the following identity holds for every $k\geq 1$ + +$$ +\begin{array}{l} \mathcal {E} _ {\lambda , k + 1} - \mathcal {E} _ {\lambda , k} \\ = - 4 (\alpha - 2) \lambda \gamma \left\langle z _ {k + 1} - z ^ {*}, v _ {k + 1} \right\rangle \\ + 2 (\lambda + 1 - \alpha) (2 k + \alpha + 1) \| z _ {k + 1} - z _ {k} \| ^ {2} \\ + \frac {2}{\alpha - 1} \left(4 (\alpha - 1) (\lambda + 1 - \alpha) - \alpha (\alpha - 2)\right) \gamma k \left\langle z _ {k + 1} - z _ {k}, v _ {k + 1} \right\rangle \tag {23} \\ + \frac {2}{\alpha - 1} \left(2 \alpha (\alpha - 1) (\lambda + 1 - \alpha) + \alpha - 2 (\alpha - 1) ^ {2}\right) \gamma \left\langle z _ {k + 1} - z _ {k}, v _ {k + 1} \right\rangle \\ - \frac {2 (\alpha - 2)}{\alpha - 1} \gamma k (k + \alpha) \left\langle z _ {k + 1} - z _ {k}, v _ {k + 1} - v _ {k} \right\rangle - \frac {\alpha - 2}{\alpha - 1} \gamma^ {2} k (2 k + \alpha) \left\| v _ {k + 1} - v _ {k} \right\| ^ {2} \\ - \frac {\alpha - 2}{\alpha - 1} \gamma^ {2} \left(2 (3 \alpha - 2) k + 2 \alpha^ {2} + \alpha - 2\right) \| v _ {k + 1} \| ^ {2}. \\ \end{array} +$$ + +Proof. Recall that by the definition in (20), we have for every $k \geq 1$ + +$$ +u _ {\lambda , k} = 2 \lambda \left(z _ {k} - z ^ {*}\right) + 2 k \left(z _ {k} - z _ {k - 1}\right) + \frac {3 \alpha - 2}{\alpha - 1} \gamma k v _ {k}. 
+$$ + +Similarly, + +$$ +u _ {\lambda , k + 1} = 2 \lambda \left(z _ {k + 1} - z ^ {*}\right) + 2 (k + 1) \left(z _ {k + 1} - z _ {k}\right) + \frac {3 \alpha - 2}{\alpha - 1} \gamma (k + 1) v _ {k + 1}. \tag {24} +$$ + +Thus, after subtraction we deduce from (19) that + +$$ +\begin{array}{l} u _ {\lambda , k + 1} - u _ {\lambda , k} \\ = 2 (\lambda + 1 - \alpha) (z _ {k + 1} - z _ {k}) + 2 (k + \alpha) (z _ {k + 1} - z _ {k}) - 2 k (z _ {k} - z _ {k - 1}) \\ + \frac {3 \alpha - 2}{\alpha - 1} \gamma v _ {k + 1} + \frac {3 \alpha - 2}{\alpha - 1} \gamma k \left(v _ {k + 1} - v _ {k}\right) \tag {25} \\ = 2 (\lambda + 1 - \alpha) (z _ {k + 1} - z _ {k}) + \frac {\alpha - 2 (\alpha - 1) ^ {2}}{\alpha - 1} \gamma v _ {k + 1} + \frac {2 - \alpha}{\alpha - 1} \gamma k (v _ {k + 1} - v _ {k}). \\ \end{array} +$$ + +For $k\geq 1$ we know that + +$$ +\frac {1}{2} \left(\left\| u _ {\lambda , k + 1} \right\| ^ {2} - \left\| u _ {\lambda , k} \right\| ^ {2}\right) = \left\langle u _ {\lambda , k + 1}, u _ {\lambda , k + 1} - u _ {\lambda , k} \right\rangle - \frac {1}{2} \left\| u _ {\lambda , k + 1} - u _ {\lambda , k} \right\| ^ {2}, \tag {26} +$$ + +and that for every $k\geq 0$ + +$$ +\begin{array}{l} 2 \lambda (\alpha - 1 - \lambda) \left(\left\| z _ {k + 1} - z ^ {*} \right\| ^ {2} - \left\| z _ {k} - z ^ {*} \right\| ^ {2}\right) \tag {27} \\ = 4 \lambda (\alpha - 1 - \lambda) \left\langle z _ {k + 1} - z ^ {*}, z _ {k + 1} - z _ {k} \right\rangle - 2 \lambda (\alpha - 1 - \lambda) \| z _ {k + 1} - z _ {k} \| ^ {2}. 
\\ \end{array} +$$ + +We use the relations (24) and (25) to derive for every $k \geq 1$ + +$$ +\begin{array}{l} \left\langle u _ {\lambda , k + 1}, u _ {\lambda , k + 1} - u _ {\lambda , k} \right\rangle \\ = 4 \lambda (\lambda + 1 - \alpha) \left\langle z _ {k + 1} - z ^ {*}, z _ {k + 1} - z _ {k} \right\rangle \\ + \frac {2}{\alpha - 1} \left(\alpha - 2 (\alpha - 1) ^ {2}\right) \lambda \gamma \langle z _ {k + 1} - z ^ {*}, v _ {k + 1} \rangle \\ - \frac {2 (\alpha - 2)}{\alpha - 1} \lambda \gamma k \left\langle z _ {k + 1} - z ^ {*}, v _ {k + 1} - v _ {k} \right\rangle + 4 (\lambda + 1 - \alpha) (k + 1) \| z _ {k + 1} - z _ {k} \| ^ {2} \\ + \frac {2}{\alpha - 1} \left((3 \alpha - 2) (\lambda + 1 - \alpha) + \alpha - 2 (\alpha - 1) ^ {2}\right) \gamma (k + 1) \langle z _ {k + 1} - z _ {k}, v _ {k + 1} \rangle \tag {28} \\ - \frac {2 (\alpha - 2)}{\alpha - 1} \gamma (k + 1) k \langle z _ {k + 1} - z _ {k}, v _ {k + 1} - v _ {k} \rangle \\ + \frac {1}{(\alpha - 1) ^ {2}} \left(\alpha - 2 (\alpha - 1) ^ {2}\right) (3 \alpha - 2) \gamma^ {2} (k + 1) \| v _ {k + 1} \| ^ {2} \\ - \frac {1}{(\alpha - 1) ^ {2}} (\alpha - 2) (3 \alpha - 2) \gamma^ {2} (k + 1) k \left\langle v _ {k + 1}, v _ {k + 1} - v _ {k} \right\rangle , \\ \end{array} +$$ + +and + +$$ +\begin{array}{l} - \frac {1}{2} \| u _ {\lambda , k + 1} - u _ {\lambda , k} \| ^ {2} = - 2 (\lambda + 1 - \alpha) ^ {2} \| z _ {k + 1} - z _ {k} \| ^ {2} \\ - \frac {2}{\alpha - 1} \left(\alpha - 2 (\alpha - 1) ^ {2}\right) (\lambda + 1 - \alpha) \gamma \langle z _ {k + 1} - z _ {k}, v _ {k + 1} \rangle \\ - \frac {1}{2 (\alpha - 1) ^ {2}} \left(\alpha - 2 (\alpha - 1) ^ {2}\right) ^ {2} \gamma^ {2} \| v _ {k + 1} \| ^ {2} - \frac {(\alpha - 2) ^ {2}}{2 (\alpha - 1) ^ {2}} \gamma^ {2} k ^ {2} \| v _ {k + 1} - v _ {k} \| ^ {2} \tag {29} \\ + \frac {2 (\alpha - 2)}{\alpha - 1} (\lambda + 1 - \alpha) \gamma k \left\langle z _ {k + 1} - z _ {k}, v _ {k + 1} - v _ {k} \right\rangle \\ + \frac {\alpha - 2}{(\alpha - 1) ^ 
{2}} \left(\alpha - 2 (\alpha - 1) ^ {2}\right) \gamma^ {2} k \left\langle v _ {k + 1}, v _ {k + 1} - v _ {k} \right\rangle . \\ \end{array} +$$ + +After some algebra, we see that + +$$ +\begin{array}{l} \left(\left(3 \alpha - 2\right) (\lambda + 1 - \alpha) + \alpha - 2 (\alpha - 1) ^ {2}\right) (k + 1) - (\lambda + 1 - \alpha) \left(\alpha - 2 (\alpha - 1) ^ {2}\right) \\ = \left(\left(3 \alpha - 2\right) (\lambda + 1 - \alpha) + \alpha - 2 (\alpha - 1) ^ {2}\right) k + 2 \alpha (\alpha - 1) (\lambda + 1 - \alpha) \\ + \alpha - 2 (\alpha - 1) ^ {2} \\ = \left((3 \alpha - 2) (\lambda + 1 - \alpha) + \alpha - 2 (\alpha - 1) ^ {2} + (\alpha - 2) \lambda\right) k - (\alpha - 2) \lambda k \tag {30} \\ + 2 \alpha (\alpha - 1) (\lambda + 1 - \alpha) + \alpha - 2 (\alpha - 1) ^ {2} \\ = \left(4 (\alpha - 1) (\lambda + 1 - \alpha) - \alpha (\alpha - 2)\right) k - (\alpha - 2) \lambda k \\ + 2 \alpha (\alpha - 1) (\lambda + 1 - \alpha) + \alpha - 2 (\alpha - 1) ^ {2}. \\ \end{array} +$$ + +By plugging (28) and (29) into (26), and by taking into consideration the relation (30), we get for every $k \geq 1$ + +$$ +\begin{array}{l} \frac {1}{2} \left(\| u _ {\lambda , k + 1} \| ^ {2} - \| u _ {\lambda , k} \| ^ {2}\right) = 4 \lambda (\lambda + 1 - \alpha) \langle z _ {k + 1} - z ^ {*}, z _ {k + 1} - z _ {k} \rangle \\ + \frac {2}{\alpha - 1} \left(\alpha - 2 (\alpha - 1) ^ {2}\right) \lambda \gamma \langle z _ {k + 1} - z ^ {*}, v _ {k + 1} \rangle \\ - \frac {2 (\alpha - 2)}{\alpha - 1} \lambda \gamma k \left\langle z _ {k + 1} - z ^ {*}, v _ {k + 1} - v _ {k} \right\rangle \\ + 2 \left(\lambda + 1 - \alpha\right) \left(2 k + \alpha + 1 - \lambda\right) \left\| z _ {k + 1} - z _ {k} \right\| ^ {2} \\ + \frac {2}{\alpha - 1} \left(4 (\alpha - 1) (\lambda + 1 - \alpha) - \alpha (\alpha - 2) - (\alpha - 2) \lambda\right) \gamma k \langle z _ {k + 1} - z _ {k}, v _ {k + 1} \rangle \tag {31} \\ + \frac {2}{\alpha - 1} \left(2 \alpha (\alpha - 1) (\lambda + 1 - \alpha) + \alpha - 
2 (\alpha - 1) ^ {2}\right) \gamma \langle z _ {k + 1} - z _ {k}, v _ {k + 1} \rangle \\ - \frac {2 (\alpha - 2)}{\alpha - 1} \gamma k (k + \alpha - \lambda) \left\langle z _ {k + 1} - z _ {k}, v _ {k + 1} - v _ {k} \right\rangle - \frac {(\alpha - 2) ^ {2}}{2 (\alpha - 1) ^ {2}} \gamma^ {2} k ^ {2} \left\| v _ {k + 1} - v _ {k} \right\| ^ {2} \\ + \frac {1}{2 (\alpha - 1) ^ {2}} \left(\alpha - 2 (\alpha - 1) ^ {2}\right) \gamma^ {2} \left(2 (3 \alpha - 2) k + 2 \alpha^ {2} + \alpha - 2\right) \| v _ {k + 1} \| ^ {2} \\ - \frac {\alpha - 2}{(\alpha - 1) ^ {2}} \gamma^ {2} k \big ((3 \alpha - 2) k + 2 \alpha (\alpha - 1) \big) \langle v _ {k + 1}, v _ {k + 1} - v _ {k} \rangle . \\ \end{array} +$$ + +Furthermore, one can show that for every $k \geq 1$ we get + +$$ +\begin{array}{l} (k + 1) \left\langle z _ {k + 1} - z ^ {*}, v _ {k + 1} \right\rangle - k \left\langle z _ {k} - z ^ {*}, v _ {k} \right\rangle \\ = \left\langle z _ {k + 1} - z ^ {*}, v _ {k + 1} \right\rangle + k \left(\left\langle z _ {k + 1} - z ^ {*}, v _ {k + 1} \right\rangle - \left\langle z _ {k} - z ^ {*}, v _ {k} \right\rangle\right) \tag {32} \\ = \left\langle z _ {k + 1} - z ^ {*}, v _ {k + 1} \right\rangle + k \left\langle z _ {k + 1} - z ^ {*}, v _ {k + 1} - v _ {k} \right\rangle \\ - k \left\langle z _ {k + 1} - z _ {k}, v _ {k + 1} - v _ {k} \right\rangle + k \left\langle z _ {k + 1} - z _ {k}, v _ {k + 1} \right\rangle \\ \end{array} +$$ + +and + +$$ +\begin{array}{l} (k + 1) \left((3 \alpha - 2) (k + 1) + 2 \alpha (\alpha - 1)\right) \| v _ {k + 1} \| ^ {2} - k \left((3 \alpha - 2) k + 2 \alpha (\alpha - 1)\right) \| v _ {k} \| ^ {2} \\ = (2 (3 \alpha - 2) k + 2 \alpha^ {2} + \alpha - 2) \| v _ {k + 1} \| ^ {2} \\ + k \left((3 \alpha - 2) k + 2 \alpha (\alpha - 1)\right) \left(\left\| v _ {k + 1} \right\| ^ {2} - \left\| v _ {k} \right\| ^ {2}\right) \tag {33} \\ = (2 (3 \alpha - 2) k + 2 \alpha^ {2} + \alpha - 2) \| v _ {k + 1} \| ^ {2} \\ + 2 k \left((3 \alpha - 2) k + 2 \alpha 
(\alpha - 1)\right) \langle v _ {k + 1}, v _ {k + 1} - v _ {k} \rangle \\ - k \left(\left(3 \alpha - 2\right) k + 2 \alpha (\alpha - 1)\right) \| v _ {k + 1} - v _ {k} \| ^ {2}. \\ \end{array} +$$ + +In addition, direct computations show that + +$$ +\alpha - 2 (\alpha - 1) ^ {2} + \alpha - 2 = - 2 (\alpha - 1) (\alpha - 2) +$$ + +and + +$$ +- (\alpha - 2) ^ {2} - (\alpha - 2) (3 \alpha - 2) = - 4 (\alpha - 2) (\alpha - 1). +$$ + +Hence, multiplying (32) by $\frac{2\lambda\gamma(\alpha - 2)}{(\alpha - 1)} \geq 0$ and (33) by $\frac{\gamma^2(\alpha - 2)}{2(\alpha - 1)^2} > 0$ , followed by summing up the resulting identities with (27) and (31), yields (23) for every $k \geq 1$ . + +Lemma B.2. Let $z^{*} \in \Omega$ and $(z_{k})_{k \geq 0}$ be the sequence generated by Algorithm 1. For $0 \leq \lambda \leq \alpha - 1$ , the following statements are true: + +(i) for every $k \geq k_0 := \max \left\{2, \left\lceil \frac{1}{\alpha - 2} \right\rceil \right\}$ the following holds: + +$$ +\begin{array}{l} \mathcal {G} _ {\lambda , k + 1} - \mathcal {G} _ {\lambda , k} \leq \frac {(\alpha - 1) (\alpha - 2)}{\varepsilon (k + 1) ^ {2}} \lambda^ {2} \| z _ {k + 1} - z ^ {*} \| ^ {2} \\ - 4 (\alpha - 2) \lambda \gamma \left\langle z _ {k + 1} - z ^ {*}, \zeta_ {k + 1} + F \left(z _ {k + 1}\right) \right\rangle \\ + 4 \left(\eta_ {0} k + \eta_ {1}\right) \gamma \left\langle z _ {k + 1} - z _ {k}, v _ {k + 1} \right\rangle + \left(\eta_ {2} k + \kappa_ {0} \sqrt {k}\right) \left\| z _ {k + 1} - z _ {k} \right\| ^ {2} \\ + 4 \left(\eta_ {3} k + \kappa_ {1} \sqrt {k}\right) \gamma^ {2} \left\| v _ {k + 1} \right\| ^ {2} - \frac {\alpha - 2}{\alpha - 1} \mu_ {k} \gamma^ {2} \left\| v _ {k + 1} - v _ {k} \right\| ^ {2}, \\ \end{array} +$$ + +where + +$$ +\mu_ {k} := (k + 1) \left(\varepsilon (k + 1) + \alpha^ {2} \sqrt {k + 1} + (\alpha - 4)\right) - (\alpha - 2), +$$ + +$$ +\eta_ {0} := \frac {1}{2 (\alpha - 1)} \left(4 (\alpha - 1) (\lambda + 1 - \alpha) - \alpha (\alpha - 2)\right) 
< 0, +$$ + +$$ +\eta_ {1} := \frac {1}{2 (\alpha - 1)} \left(2 \alpha (\alpha - 1) (\lambda + 1 - \alpha) + \alpha - 2 (\alpha - 1) ^ {2}\right) < 0, +$$ + +$$ +\eta_ {2} := 4 (\lambda + 1 - \alpha) \leq 0, \tag {34} +$$ + +$$ +\eta_ {3} := - \frac {1}{2 (\alpha - 1)} (\alpha - 2) (3 \alpha - 2) < 0, +$$ + +$$ +\kappa_ {0} := \frac {1}{\alpha - 1} (\alpha - 2) \sqrt {\alpha - 2} > 0, +$$ + +$$ +\kappa_ {1} := \frac {1}{4 (\alpha - 1)} (\alpha - 2) \alpha > 0; +$$ + +(ii) for every $k \geq 1$ one has the following lower bound for the quantity $\mathcal{G}_{\lambda,k}$: + +$$ +\begin{array}{l} \mathcal {G} _ {\lambda , k} \geq \frac {\alpha - 2}{4 (3 \alpha - 2)} \left\| 4 \lambda \left(z _ {k} - z ^ {*}\right) + 2 k \left(z _ {k} - z _ {k - 1}\right) + \frac {2 (3 \alpha - 2)}{\alpha - 1} \gamma k v _ {k} \right\| ^ {2} \tag {35} \\ + \frac {(\alpha - 2) ^ {2}}{4 (3 \alpha - 2) (\alpha - 1)} k ^ {2} \| z _ {k} - z _ {k - 1} \| ^ {2} + 2 (\alpha - 1) \lambda \left(1 - \frac {4 \lambda}{3 \alpha - 2}\right) \| z _ {k} - z ^ {*} \| ^ {2}. \\ \end{array} +$$ + +Proof. (i) By the definition of $\mathcal{G}_{\lambda, k}$ in (22), we have for every $k \geq 2$ + +$$ +\begin{array}{l} \mathcal {G} _ {\lambda , k + 1} - \mathcal {G} _ {\lambda , k} \\ = \mathcal {E} _ {\lambda , k + 1} - \mathcal {E} _ {\lambda , k} - \frac {2 (\alpha - 2)}{\alpha - 1} \gamma \left[ (k + 1) ^ {2} \langle z _ {k + 1} - z _ {k}, F (z _ {k + 1}) - F (w _ {k}) \rangle \right. \\ \left. - k ^ {2} \left\langle z _ {k} - z _ {k - 1}, F \left(z _ {k}\right) - F \left(w _ {k - 1}\right) \right\rangle \right] \tag {36} \\ + \frac {(\alpha - 2) \alpha}{\alpha - 1} \gamma^ {2} \left[ (k + 1) \sqrt {k + 1} \| v _ {k + 1} - v _ {k} \| ^ {2} - k \sqrt {k} \| v _ {k} - v _ {k - 1} \| ^ {2} \right] \\ + \frac {(\alpha - 2) (1 - \varepsilon)}{\alpha - 1} \gamma^ {2} \left[ (k + 1) ^ {2} \| v _ {k + 1} - v _ {k} \| ^ {2} - k ^ {2} \| v _ {k} - v _ {k - 1} \| ^ {2} \right]. 
\\ \end{array} +$$ + +By using the definitions of $\eta_0, \eta_1, \eta_2$ and $\eta_3$ in (34), for every $k \geq 1$ we deduce from (23) that + +$$ +\begin{array}{l} \mathcal {E} _ {\lambda , k + 1} - \mathcal {E} _ {\lambda , k} \\ = - 4 (\alpha - 2) \lambda \gamma \left\langle z _ {k + 1} - z ^ {*}, v _ {k + 1} \right\rangle + \left(\eta_ {2} k + 2 (\lambda + 1 - \alpha) (\alpha + 1)\right) \| z _ {k + 1} - z _ {k} \| ^ {2} \\ + 4 \left(\eta_ {0} k + \eta_ {1}\right) \gamma \left\langle z _ {k + 1} - z _ {k}, v _ {k + 1} \right\rangle - \frac {2 (\alpha - 2)}{\alpha - 1} \gamma k (k + \alpha) \left\langle z _ {k + 1} - z _ {k}, v _ {k + 1} - v _ {k} \right\rangle \\ - \frac {\alpha - 2}{\alpha - 1} k (2 k + \alpha) \gamma^ {2} \| v _ {k + 1} - v _ {k} \| ^ {2} + \left(4 \eta_ {3} k - \frac {\alpha - 2}{\alpha - 1} (2 \alpha^ {2} + \alpha - 2)\right) \gamma^ {2} \| v _ {k + 1} \| ^ {2} \tag {37} \\ \leq - 4 (\alpha - 2) \lambda \gamma \left\langle z _ {k + 1} - z ^ {*}, v _ {k + 1} \right\rangle - \frac {2 (\alpha - 2)}{\alpha - 1} \gamma k (k + \alpha) \left\langle z _ {k + 1} - z _ {k}, v _ {k + 1} - v _ {k} \right\rangle \\ + \left(4 \left(\eta_ {0} k + \eta_ {1}\right) \gamma \left\langle z _ {k + 1} - z _ {k}, v _ {k + 1} \right\rangle + \eta_ {2} k \| z _ {k + 1} - z _ {k} \| ^ {2} + 4 \eta_ {3} k \gamma^ {2} \| v _ {k + 1} \| ^ {2}\right) \\ - \frac {\alpha - 2}{\alpha - 1} k (2 k + \alpha) \gamma^ {2} \| v _ {k + 1} - v _ {k} \| ^ {2}, \\ \end{array} +$$ + +where the inequality comes from the fact that $0 \leq \lambda \leq \alpha - 1$ and $\alpha > 2$. 
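The bookkeeping in these estimates rests on identities such as (19), which follows by eliminating $w_k$ from the reformulated updates (16a) and (16b). As a sanity check, the following sketch (an illustrative helper of ours, not part of the paper) verifies (19) numerically for random vectors:

```python
import random

def identity_19_residual(alpha=3.0, gamma=0.1, k=5, d=4, seed=0):
    """Draw random z_{k-1}, z_k, v_k, v_{k+1}, form w_k and z_{k+1} via the
    updates (16a)-(16b), and return the largest componentwise mismatch in
    identity (19):
      (k+a)(z_{k+1}-z_k) - k(z_k-z_{k-1}) = -a*g*v_{k+1} - 2*g*k*(v_{k+1}-v_k).
    """
    rng = random.Random(seed)
    vec = lambda: [rng.uniform(-1.0, 1.0) for _ in range(d)]
    z_prev, z_k, v_k, v_next = vec(), vec(), vec(), vec()
    # (16a): w_k = z_k + k/(k+a)*(z_k - z_{k-1}) - g*a/(k+a)*v_k
    w_k = [z_k[i] + k / (k + alpha) * (z_k[i] - z_prev[i])
           - gamma * alpha / (k + alpha) * v_k[i] for i in range(d)]
    # (16b): z_{k+1} = w_k - g*(1 + k/(k+a))*(v_{k+1} - v_k)
    z_next = [w_k[i] - gamma * (1 + k / (k + alpha)) * (v_next[i] - v_k[i])
              for i in range(d)]
    lhs = [(k + alpha) * (z_next[i] - z_k[i]) - k * (z_k[i] - z_prev[i])
           for i in range(d)]
    rhs = [-alpha * gamma * v_next[i] - 2 * gamma * k * (v_next[i] - v_k[i])
           for i in range(d)]
    return max(abs(a - b) for a, b in zip(lhs, rhs))
```

The residual vanishes up to floating-point error for any choice of $\alpha$, $\gamma$, $k$ and dimension, reflecting that (19) is an exact algebraic consequence of (16a)-(16b).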
Plugging (37) into (36) yields for every $k \geq 2$ + +$$ +\begin{array}{l} \mathcal {G} _ {\lambda , k + 1} - \mathcal {G} _ {\lambda , k} \\ \leq - 4 (\alpha - 2) \lambda \gamma \left\langle z _ {k + 1} - z ^ {*}, v _ {k + 1} \right\rangle - \frac {\alpha - 2}{\alpha - 1} (2 k ^ {2} + \alpha k) \gamma^ {2} \| v _ {k + 1} - v _ {k} \| ^ {2} \\ - \frac {2 (\alpha - 2)}{\alpha - 1} \gamma \left[ (k + 1) ^ {2} \langle z _ {k + 1} - z _ {k}, F (z _ {k + 1}) - F (w _ {k}) \rangle \right. \\ \left. - k ^ {2} \left\langle z _ {k} - z _ {k - 1}, F \left(z _ {k}\right) - F \left(w _ {k - 1}\right) \right\rangle \right] \\ + \frac {(\alpha - 2) \alpha}{\alpha - 1} \gamma^ {2} \left[ (k + 1) \sqrt {k + 1} \| v _ {k + 1} - v _ {k} \| ^ {2} - k \sqrt {k} \| v _ {k} - v _ {k - 1} \| ^ {2} \right] \tag {38} \\ + \frac {(\alpha - 2) (1 - \varepsilon)}{\alpha - 1} \gamma^ {2} \left[ (k + 1) ^ {2} \| v _ {k + 1} - v _ {k} \| ^ {2} - k ^ {2} \| v _ {k} - v _ {k - 1} \| ^ {2} \right] \\ + \left(4 \left(\eta_ {0} k + \eta_ {1}\right) \gamma \left\langle z _ {k + 1} - z _ {k}, v _ {k + 1} \right\rangle + \eta_ {2} k \left\| z _ {k + 1} - z _ {k} \right\| ^ {2} + 4 \eta_ {3} k \gamma^ {2} \left\| v _ {k + 1} \right\| ^ {2}\right) \\ - \frac {2 (\alpha - 2)}{\alpha - 1} \gamma k (k + \alpha) \left\langle z _ {k + 1} - z _ {k}, v _ {k + 1} - v _ {k} \right\rangle . \\ \end{array} +$$ + +Our next aim is to derive upper estimates for the first two terms on the right-hand side of (38), which will eventually simplify the subsequent three terms. 
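The upper estimates below repeatedly combine the Cauchy-Schwarz inequality with the weighted Young inequality $2\langle x, y\rangle \leq \varepsilon\|x\|^2 + \varepsilon^{-1}\|y\|^2$, as in the last step of (39). A small randomized check of this elementary fact (illustrative only; the function name is ours):

```python
import random

def young_slack(eps, d=6, trials=200, seed=1):
    """Return the smallest observed slack of the weighted Young inequality
    2<x,y> <= eps*||x||^2 + (1/eps)*||y||^2 over random vector pairs.
    The slack equals ||sqrt(eps)*x - y/sqrt(eps)||^2, hence is nonnegative."""
    rng = random.Random(seed)
    worst = float("inf")
    for _ in range(trials):
        x = [rng.uniform(-1.0, 1.0) for _ in range(d)]
        y = [rng.uniform(-1.0, 1.0) for _ in range(d)]
        inner = sum(a * b for a, b in zip(x, y))
        slack = (eps * sum(a * a for a in x)
                 + sum(b * b for b in y) / eps
                 - 2.0 * inner)
        worst = min(worst, slack)
    return worst
```

Choosing the weight $\varepsilon$ per term is what produces the $\varepsilon(k+1)^2$ and $\lambda^2/(\varepsilon(k+1)^2)$ factors appearing in (39).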
From the Cauchy-Schwarz inequality and (18) we have for every $k \geq 1$ + +$$ +\begin{array}{l} - 4 (\alpha - 2) \lambda \gamma \left\langle z _ {k + 1} - z ^ {*}, v _ {k + 1} \right\rangle = - 4 (\alpha - 2) \lambda \gamma \left\langle z _ {k + 1} - z ^ {*}, \zeta_ {k + 1} + F (w _ {k}) \right\rangle \\ = - 4 (\alpha - 2) \lambda \gamma \left\langle z _ {k + 1} - z ^ {*}, \zeta_ {k + 1} + F \left(z _ {k + 1}\right) \right\rangle \\ + 4 (\alpha - 2) \lambda \gamma \left\langle z _ {k + 1} - z ^ {*}, F \left(z _ {k + 1}\right) - F \left(w _ {k}\right) \right\rangle \\ \leq - 4 (\alpha - 2) \lambda \gamma \left\langle z _ {k + 1} - z ^ {*}, \zeta_ {k + 1} + F \left(z _ {k + 1}\right) \right\rangle \\ + 4 (\alpha - 2) \lambda \gamma \| z _ {k + 1} - z ^ {*} \| \| F (z _ {k + 1}) - F (w _ {k}) \| \\ \leq - 4 (\alpha - 2) \lambda \gamma \left\langle z _ {k + 1} - z ^ {*}, \zeta_ {k + 1} + F \left(z _ {k + 1}\right) \right\rangle \tag {39} \\ + 2 (\alpha - 2) \lambda \gamma \| z _ {k + 1} - z ^ {*} \| \| v _ {k + 1} - v _ {k} \| \\ \leq - 4 (\alpha - 2) \lambda \gamma \left\langle z _ {k + 1} - z ^ {*}, \zeta_ {k + 1} + F (z _ {k + 1}) \right\rangle + \frac {(\alpha - 1) (\alpha - 2)}{\varepsilon (k + 1) ^ {2}} \lambda^ {2} \| z _ {k + 1} - z ^ {*} \| ^ {2} \\ + \frac {\alpha - 2}{\alpha - 1} \varepsilon \gamma^ {2} (k + 1) ^ {2} \| v _ {k + 1} - v _ {k} \| ^ {2}. 
\\ \end{array} +$$ + +For $\zeta_k \in N_C(z_k)$ and $\zeta_{k+1} \in N_C(z_{k+1})$ , the monotonicity of $N_C$ and $F$ , together with the relation (19) and the fact that for every $k \geq 1$ + +$$ +\zeta_ {k} + F (z _ {k}) - v _ {k} = F (z _ {k}) - F (w _ {k - 1}), +$$ + +yield for every $k\geq 1$ + +$$ +\begin{array}{l} - \frac {2 (\alpha - 2)}{\alpha - 1} \gamma k (k + \alpha) \langle z _ {k + 1} - z _ {k}, v _ {k + 1} - v _ {k} \rangle \\ \leq \frac {2 (\alpha - 2)}{\alpha - 1} \gamma k (k + \alpha) \left\langle z _ {k + 1} - z _ {k}, \left(\zeta_ {k + 1} + F (z _ {k + 1}) - v _ {k + 1}\right) - \left(\zeta_ {k} + F (z _ {k}) - v _ {k}\right) \right\rangle \\ = \frac {2 (\alpha - 2)}{\alpha - 1} \gamma k (k + \alpha) \left\langle z _ {k + 1} - z _ {k}, \left(F (z _ {k + 1}) - F (w _ {k})\right) - \left(F (z _ {k}) - F (w _ {k - 1})\right) \right\rangle \\ = \frac {2 (\alpha - 2)}{\alpha - 1} \gamma k (k + \alpha) \left\langle z _ {k + 1} - z _ {k}, F (z _ {k + 1}) - F (w _ {k}) \right\rangle \\ - \frac {2 (\alpha - 2)}{\alpha - 1} \gamma k (k + \alpha) \left\langle z _ {k + 1} - z _ {k}, F \left(z _ {k}\right) - F \left(w _ {k - 1}\right) \right\rangle \tag {40} \\ = \frac {2 (\alpha - 2)}{\alpha - 1} \gamma (k + 1) ^ {2} \langle z _ {k + 1} - z _ {k}, F (z _ {k + 1}) - F (w _ {k}) \rangle \\ - \frac {2 (\alpha - 2)}{\alpha - 1} \gamma k ^ {2} \left\langle z _ {k} - z _ {k - 1}, F (z _ {k}) - F (w _ {k - 1}) \right\rangle \\ + \frac {2 (\alpha - 2)}{\alpha - 1} \gamma ((\alpha - 2) k - 1) \langle z _ {k + 1} - z _ {k}, F (z _ {k + 1}) - F (w _ {k}) \rangle \\ + \frac {2 (\alpha - 2)}{\alpha - 1} \alpha \gamma^ {2} k \left\langle v _ {k + 1}, F (z _ {k}) - F (w _ {k - 1}) \right\rangle \\ + \frac {4 (\alpha - 2)}{\alpha - 1} \gamma^ {2} k ^ {2} \left\langle v _ {k + 1} - v _ {k}, F (z _ {k}) - F (w _ {k - 1}) \right\rangle . 
\\ \end{array} +$$ + +By Young's inequality together with (18), for every $k \geq \left\lceil \frac{1}{\alpha - 2} \right\rceil$ we obtain + +$$ +\begin{array}{l} \frac {2 (\alpha - 2)}{\alpha - 1} \gamma ((\alpha - 2) k - 1) \langle z _ {k + 1} - z _ {k}, F (z _ {k + 1}) - F (w _ {k}) \rangle \\ \leq \frac {\alpha - 2}{\alpha - 1} \left(\sqrt {(\alpha - 2) k - 1} \| z _ {k + 1} - z _ {k} \| ^ {2} \right. \\ \left. + \gamma^ {2} \left(\left(\alpha - 2\right) k - 1\right) \sqrt {\left(\alpha - 2\right) k - 1} \| F \left(z _ {k + 1}\right) - F \left(w _ {k}\right) \| ^ {2} \right) \\ \leq \frac {\alpha - 2}{\alpha - 1} \sqrt {(\alpha - 2) k} \| z _ {k + 1} - z _ {k} \| ^ {2} \tag {41} \\ + (\alpha - 2) \sqrt {\alpha - 1} \gamma^ {2} (k + 1) \sqrt {k + 1} \| F (z _ {k + 1}) - F (w _ {k}) \| ^ {2} \\ \leq \frac {\alpha - 2}{\alpha - 1} \sqrt {(\alpha - 2) k} \left\| z _ {k + 1} - z _ {k} \right\| ^ {2} \\ + 4 \left(\alpha - 2\right) \sqrt {\alpha - 1} \gamma^ {4} L ^ {2} \left(k + 1\right) \sqrt {k + 1} \left\| v _ {k + 1} - v _ {k} \right\| ^ {2} \\ \leq \frac {\alpha - 2}{\alpha - 1} \sqrt {(\alpha - 2) k} \left\| z _ {k + 1} - z _ {k} \right\| ^ {2} + (\alpha - 2) \alpha \gamma^ {2} (k + 1) \sqrt {k + 1} \left\| v _ {k + 1} - v _ {k} \right\| ^ {2}, \\ \end{array} +$$ + +where in the second estimate we use the fact that $(\alpha - 2)k - 1 \leq (\alpha - 1)(k + 1)$, while in the last one we combine $\sqrt{\alpha - 1} \leq \alpha$ and $\gamma L < 1/4 < 1$. 
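Telescoping identities such as (32) and (33) are elementary but tedious to verify by hand; the next sketch (again a hypothetical check of ours, not from the paper) confirms (33) numerically:

```python
import random

def identity_33_residual(alpha=3.0, k=4, d=5, seed=2):
    """Check identity (33): with c(j) = (3a-2)j + 2a(a-1),
      (k+1)c(k+1)||v_{k+1}||^2 - k c(k)||v_k||^2
        = (2(3a-2)k + 2a^2 + a - 2)||v_{k+1}||^2
          + 2k c(k) <v_{k+1}, v_{k+1}-v_k> - k c(k)||v_{k+1}-v_k||^2.
    Returns the absolute mismatch between both sides for random vectors."""
    rng = random.Random(seed)
    v_k = [rng.uniform(-1.0, 1.0) for _ in range(d)]
    v_next = [rng.uniform(-1.0, 1.0) for _ in range(d)]
    c = lambda j: (3 * alpha - 2) * j + 2 * alpha * (alpha - 1)
    sq = lambda v: sum(a * a for a in v)
    diff = [a - b for a, b in zip(v_next, v_k)]
    inner = sum(a * b for a, b in zip(v_next, diff))
    lhs = (k + 1) * c(k + 1) * sq(v_next) - k * c(k) * sq(v_k)
    rhs = ((2 * (3 * alpha - 2) * k + 2 * alpha**2 + alpha - 2) * sq(v_next)
           + 2 * k * c(k) * inner - k * c(k) * sq(diff))
    return abs(lhs - rhs)
```

The mismatch is zero up to rounding, since (33) only rewrites $\|v_k\|^2$ in terms of $\|v_{k+1}\|^2$, $\langle v_{k+1}, v_{k+1}-v_k\rangle$ and $\|v_{k+1}-v_k\|^2$.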
+ +In addition, for every $k \geq 2$ we derive + +$$ +\begin{array}{l} \frac {2 (\alpha - 2)}{\alpha - 1} \alpha \gamma^ {2} k \left\langle v _ {k + 1}, F (z _ {k}) - F (w _ {k - 1}) \right\rangle \\ \leq \frac {\alpha - 2}{\alpha - 1} \alpha \gamma^ {2} \sqrt {k} \| v _ {k + 1} \| ^ {2} + \frac {\alpha - 2}{\alpha - 1} \alpha \gamma^ {2} k \sqrt {k} \| F (z _ {k}) - F (w _ {k - 1}) \| ^ {2} \\ \leq \frac {\alpha - 2}{\alpha - 1} \alpha \gamma^ {2} \sqrt {k} \| v _ {k + 1} \| ^ {2} + \frac {\alpha - 2}{\alpha - 1} \alpha \gamma^ {2} k \sqrt {k} \| v _ {k} - v _ {k - 1} \| ^ {2} \\ = \frac {\alpha - 2}{\alpha - 1} \alpha \gamma^ {2} \sqrt {k} \| v _ {k + 1} \| ^ {2} + \frac {\alpha - 2}{\alpha - 1} \alpha \gamma^ {2} (k + 1) \sqrt {k + 1} \| v _ {k + 1} - v _ {k} \| ^ {2} \\ - \frac {\alpha - 2}{\alpha - 1} \alpha \gamma^ {2} \left[ (k + 1) \sqrt {k + 1} \| v _ {k + 1} - v _ {k} \| ^ {2} - k \sqrt {k} \| v _ {k} - v _ {k - 1} \| ^ {2} \right], \\ \end{array} +$$ + +and, by using the Cauchy-Schwarz inequality and (18), + +$$ +\begin{array}{l} \frac {4 (\alpha - 2)}{\alpha - 1} \gamma^ {2} k ^ {2} \langle v _ {k + 1} - v _ {k}, F (z _ {k}) - F (w _ {k - 1}) \rangle \\ \leq \frac {2 (\alpha - 2)}{\alpha - 1} (1 - \varepsilon) \gamma^ {2} k ^ {2} \| v _ {k + 1} - v _ {k} \| \| v _ {k} - v _ {k - 1} \| \\ \leq \frac {\alpha - 2}{\alpha - 1} (1 - \varepsilon) \gamma^ {2} k ^ {2} \left(\left\| v _ {k + 1} - v _ {k} \right\| ^ {2} + \left\| v _ {k} - v _ {k - 1} \right\| ^ {2}\right) \\ = \frac {4 (\alpha - 2)}{\alpha - 1} \gamma^ {3} L k ^ {2} \left(\left\| v _ {k + 1} - v _ {k} \right\| ^ {2} + \left\| v _ {k} - v _ {k - 1} \right\| ^ {2}\right) \tag {42} \\ \leq - \frac {4 (\alpha - 2)}{\alpha - 1} \gamma^ {3} L \left((k + 1) ^ {2} \| v _ {k + 1} - v _ {k} \| ^ {2} - k ^ {2} \| v _ {k} - v _ {k - 1} \| ^ {2}\right) \\ + \frac {8 (\alpha - 2)}{\alpha - 1} \gamma^ {3} L (k + 1) ^ {2} \| v _ {k + 1} - v _ {k} \| ^ {2} \\ = - \frac {4 (\alpha - 2)}{\alpha - 1} \gamma^ {3} L 
\left((k + 1) ^ {2} \| v _ {k + 1} - v _ {k} \| ^ {2} - k ^ {2} \| v _ {k} - v _ {k - 1} \| ^ {2}\right) \\ + \frac {2 (\alpha - 2)}{\alpha - 1} (1 - \varepsilon) \gamma^ {2} (k + 1) ^ {2} \| v _ {k + 1} - v _ {k} \| ^ {2}, \\ \end{array} +$$ + +where we want to recall that the first equality comes from (17). By plugging (41) and (42) into (40), then combining the result with (39), we get after rearranging the terms for every $k \geq k_0$ + +$$ +\begin{array}{l} - 4 (\alpha - 2) \lambda \gamma \left\langle z _ {k + 1} - z ^ {*}, v _ {k + 1} \right\rangle - \frac {2 (\alpha - 2)}{\alpha - 1} \gamma k (k + \alpha) \left\langle z _ {k + 1} - z _ {k}, v _ {k + 1} - v _ {k} \right\rangle \\ \leq \frac {2 (\alpha - 2)}{\alpha - 1} \gamma \left[ (k + 1) ^ {2} \langle z _ {k + 1} - z _ {k}, F (z _ {k + 1}) - F (w _ {k}) \rangle \right. \\ \left. - k ^ {2} \left\langle z _ {k} - z _ {k - 1}, F \left(z _ {k}\right) - F \left(w _ {k - 1}\right) \right\rangle \right] \\ - \frac {\alpha - 2}{\alpha - 1} \alpha \gamma^ {2} \left[ (k + 1) \sqrt {k + 1} \| v _ {k + 1} - v _ {k} \| ^ {2} - k \sqrt {k} \| v _ {k} - v _ {k - 1} \| ^ {2} \right] \\ - \frac {4 (\alpha - 2)}{\alpha - 1} \gamma^ {3} L \left[ (k + 1) ^ {2} \| v _ {k + 1} - v _ {k} \| ^ {2} - k ^ {2} \| v _ {k} - v _ {k - 1} \| ^ {2} \right] \tag {43} \\ - 4 (\alpha - 2) \lambda \gamma \left\langle z _ {k + 1} - z ^ {*}, \zeta_ {k + 1} + F \left(z _ {k + 1}\right) \right\rangle \\ + \frac {\alpha - 2}{\alpha - 1} \left(\left(2 k ^ {2} + \alpha k\right) - \mu_ {k}\right) \gamma^ {2} \| v _ {k + 1} - v _ {k} \| ^ {2} \\ + \frac {1}{\alpha - 1} (\alpha - 2) \sqrt {(\alpha - 2) k} \| z _ {k + 1} - z _ {k} \| ^ {2} + \frac {\alpha - 2}{\alpha - 1} \alpha \gamma^ {2} \sqrt {k} \| v _ {k + 1} \| ^ {2} \\ + \frac {1}{\varepsilon (k + 1) ^ {2}} (\alpha - 1) (\alpha - 2) \lambda^ {2} \| z _ {k + 1} - z ^ {*} \| ^ {2}, \\ \end{array} +$$ + +where we set + +$$ +\begin{array}{l} \mu_ {k} := (2 k ^ {2} + \alpha k) - \varepsilon (k + 1) 
^ {2} - (\alpha - 1) \alpha (k + 1) \sqrt {k + 1} - \alpha (k + 1) \sqrt {k + 1} \\ - 2 (1 - \varepsilon) (k + 1) ^ {2} \\ = \varepsilon (k + 1) ^ {2} + (\alpha - 4) k - 2 - \alpha^ {2} (k + 1) \sqrt {k + 1} \\ = (k + 1) \left(\varepsilon (k + 1) - \alpha^ {2} \sqrt {k + 1} + \alpha - 4\right) - (\alpha - 2). \\ \end{array} +$$ + +Finally, summing up (38) and (43), we obtain the desired estimate. + +(ii) Observe that + +$$ +\begin{array}{l} \frac {2 (\alpha - 2)}{\alpha - 1} \lambda \gamma k \left\langle z _ {k} - z ^ {*}, v _ {k} \right\rangle + \frac {(\alpha - 2) (3 \alpha - 2)}{2 (\alpha - 1) ^ {2}} \gamma^ {2} k ^ {2} \left\| v _ {k} \right\| ^ {2} \\ = \frac {\alpha - 2}{3 \alpha - 2} \left(\frac {2 (3 \alpha - 2)}{\alpha - 1} \lambda \gamma k \langle z _ {k} - z ^ {*}, v _ {k} \rangle + \frac {(3 \alpha - 2) ^ {2}}{2 (\alpha - 1) ^ {2}} \gamma^ {2} k ^ {2} \| v _ {k} \| ^ {2}\right) \\ = \frac {1}{3 \alpha - 2} (\alpha - 2) \left(\frac {1}{2} \left\| 2 \lambda (z _ {k} - z ^ {*}) + \frac {3 \alpha - 2}{\alpha - 1} \gamma k v _ {k} \right\| ^ {2} - 2 \lambda^ {2} \| z _ {k} - z ^ {*} \| ^ {2}\right). 
\\ \end{array} +$$ + +By the definition of $u_{\lambda ,k}$ in (20) and by using the identity + +$$ +\left\| x \right\| ^ {2} + \left\| y \right\| ^ {2} = \frac {1}{2} \left(\left\| x + y \right\| ^ {2} + \left\| x - y \right\| ^ {2}\right) \quad \forall x, y \in \mathbb {R} ^ {d}, +$$ + +we deduce that for every $k\geq 1$ + +$$ +\begin{array}{l} \mathcal {E} _ {\lambda , k} = \frac {1}{2} \left\| u _ {\lambda , k} \right\| ^ {2} + 2 \lambda (\alpha - 1 - \lambda) \left\| z _ {k} - z ^ {*} \right\| ^ {2} + \frac {2 (\alpha - 2)}{\alpha - 1} \lambda \gamma k \left\langle z _ {k} - z ^ {*}, v _ {k} \right\rangle \\ + \frac {\alpha - 2}{\alpha - 1} \gamma^ {2} k \left(\frac {1}{2 (\alpha - 1)} (3 \alpha - 2) k + \alpha\right) \| v _ {k} \| ^ {2} \\ = \frac {1}{2} \left\| 2 \lambda (z _ {k} - z ^ {*}) + 2 k (z _ {k} - z _ {k - 1}) + \frac {3 \alpha - 2}{\alpha - 1} \gamma k v _ {k} \right\| ^ {2} \\ + 2 (\alpha - 1) \lambda \left(1 - \frac {4 \lambda}{3 \alpha - 2}\right) \| z _ {k} - z ^ {*} \| ^ {2} + \frac {\alpha - 2}{\alpha - 1} \alpha \gamma^ {2} k \| v _ {k} \| ^ {2} \\ + \frac {\alpha - 2}{2 (3 \alpha - 2)} \left\| 2 \lambda (z _ {k} - z ^ {*}) + \frac {3 \alpha - 2}{\alpha - 1} \gamma k v _ {k} \right\| ^ {2} \\ = \frac {\alpha}{3 \alpha - 2} \left\| 2 \lambda (z _ {k} - z ^ {*}) + 2 k (z _ {k} - z _ {k - 1}) + \frac {3 \alpha - 2}{\alpha - 1} \gamma k v _ {k} \right\| ^ {2} \\ + 2 (\alpha - 1) \lambda \left(1 - \frac {4 \lambda}{3 \alpha - 2}\right) \| z _ {k} - z ^ {*} \| ^ {2} + \frac {\alpha - 2}{\alpha - 1} \alpha \gamma^ {2} k \| v _ {k} \| ^ {2} \\ + \frac {\alpha - 2}{4 (3 \alpha - 2)} \left\| 4 \lambda \left(z _ {k} - z ^ {*}\right) + 2 k \left(z _ {k} - z _ {k - 1}\right) + \frac {2 (3 \alpha - 2)}{\alpha - 1} \gamma k v _ {k} \right\| ^ {2} \\ + \frac {\alpha - 2}{3 \alpha - 2} k ^ {2} \left\| z _ {k} - z _ {k - 1} \right\| ^ {2}. 
\\ \end{array} +$$ + +Consequently, + +$$ +\begin{array}{l} \mathcal {G} _ {\lambda , k} = \mathcal {E} _ {\lambda , k} - \frac {2 (\alpha - 2)}{\alpha - 1} \gamma k ^ {2} \left\langle z _ {k} - z _ {k - 1}, F (z _ {k}) - F (w _ {k - 1}) \right\rangle \\ + \frac {\alpha - 2}{\alpha - 1} \gamma^ {2} k \sqrt {k} ((1 - \varepsilon) \sqrt {k} + \alpha) \| v _ {k} - v _ {k - 1} \| ^ {2} \\ \geq \frac {\alpha - 2}{4 (3 \alpha - 2)} \left\| 4 \lambda \left(z _ {k} - z ^ {*}\right) + 2 k \left(z _ {k} - z _ {k - 1}\right) + \frac {2 (3 \alpha - 2)}{\alpha - 1} \gamma k v _ {k} \right\| ^ {2} \\ + \frac {\alpha - 2}{3 \alpha - 2} k ^ {2} \left\| z _ {k} - z _ {k - 1} \right\| ^ {2} + 2 (\alpha - 1) \lambda \left(1 - \frac {4 \lambda}{3 \alpha - 2}\right) \left\| z _ {k} - z ^ {*} \right\| ^ {2} \\ - \frac {2 (\alpha - 2)}{\alpha - 1} \gamma k ^ {2} \left\langle z _ {k} - z _ {k - 1}, F (z _ {k}) - F (w _ {k - 1}) \right\rangle \\ + \frac {\alpha - 2}{\alpha - 1} \gamma^ {2} k \sqrt {k} \left(4 \gamma L \sqrt {k} + \alpha\right) \| v _ {k} - v _ {k - 1} \| ^ {2}. \\ \end{array} +$$ + +Now we use relation (18) and apply Lemma A.5 with $(a, b, c) \coloneqq \left(1 / 2, - 2 \gamma , 2 \gamma / L\right)$ to verify that for every $k\geq 1$ + +$$ +\begin{array}{l} \frac {1}{2} \left\| z _ {k} - z _ {k - 1} \right\| ^ {2} - 4 \gamma \left\langle z _ {k} - z _ {k - 1}, F (z _ {k}) - F (w _ {k - 1}) \right\rangle + 8 \gamma^ {3} L \left\| v _ {k} - v _ {k - 1} \right\| ^ {2} \\ \geq \frac {1}{2} \left\| z _ {k} - z _ {k - 1} \right\| ^ {2} - 4 \gamma \left\langle z _ {k} - z _ {k - 1}, F (z _ {k}) - F (w _ {k - 1}) \right\rangle + \frac {2 \gamma}{L} \left\| F (z _ {k}) - F (w _ {k - 1}) \right\| ^ {2} \\ \geq 0. 
\\ \end{array} +$$ + +Combining the last two estimates, for every $k \geq 1$ one can easily conclude that + +$$ +\begin{array}{l} \mathcal {G} _ {\lambda , k} \geq \frac {\alpha - 2}{4 (3 \alpha - 2)} \left\| 4 \lambda (z _ {k} - z ^ {*}) + 2 k (z _ {k} - z _ {k - 1}) + \frac {2 (3 \alpha - 2)}{\alpha - 1} \gamma k v _ {k} \right\| ^ {2} \\ + (\alpha - 2) \left(\frac {1}{3 \alpha - 2} - \frac {1}{4 (\alpha - 1)}\right) k ^ {2} \| z _ {k} - z _ {k - 1} \| ^ {2} \\ + 2 (\alpha - 1) \lambda \left(1 - \frac {4 \lambda}{3 \alpha - 2}\right) \| z _ {k} - z ^ {*} \| ^ {2} \\ = \frac {\alpha - 2}{4 (3 \alpha - 2)} \left\| 4 \lambda (z _ {k} - z ^ {*}) + 2 k (z _ {k} - z _ {k - 1}) + \frac {2 (3 \alpha - 2)}{\alpha - 1} \gamma k v _ {k} \right\| ^ {2} \\ + \frac {(\alpha - 2) ^ {2}}{4 (3 \alpha - 2) (\alpha - 1)} k ^ {2} \| z _ {k} - z _ {k - 1} \| ^ {2} + 2 (\alpha - 1) \lambda \left(1 - \frac {4 \lambda}{3 \alpha - 2}\right) \| z _ {k} - z ^ {*} \| ^ {2}, \\ \end{array} +$$ + +which is the desired inequality. + +The following lemma will be helpful in the main proof. + +Lemma B.3. 
The following statements are true: + +(i) there exist two parameters + +$$ +0 \leq \underline {{\lambda}} (\alpha) < \bar {\lambda} (\alpha) \leq \frac {3 \alpha - 2}{4} \tag {44} +$$ + +such that for every $\lambda$ satisfying $\underline{\lambda} (\alpha) < \lambda < \overline{\lambda} (\alpha)$ one can find an integer $k_{\lambda}\geq 1$ with the property that the following inequality holds for every $k\geq k_{\lambda}$ + +$$ +\begin{array}{l} R _ {k} := \sqrt {\frac {5 \alpha - 2}{2 (3 \alpha - 2)}} \left(\eta_ {2} k + \kappa_ {0} \sqrt {k}\right) \| z _ {k + 1} - z _ {k} \| ^ {2} + 4 \gamma \left(\eta_ {0} k + \eta_ {1}\right) \langle z _ {k + 1} - z _ {k}, v _ {k + 1} \rangle \tag {45} \\ + 4 \sqrt {\frac {5 \alpha - 2}{2 (3 \alpha - 2)}} \gamma^ {2} \left(\eta_ {3} k + \kappa_ {1} \sqrt {k}\right) \| v _ {k + 1} \| ^ {2} \leq 0; \\ \end{array} +$$ + +(ii) there exists a positive integer $k_{\varepsilon}$ such that for every $k \geq k_{\varepsilon}$ we have + +$$ +\mu_ {k} \geq \frac {\varepsilon}{2} (k + 1) ^ {2}. \tag {46} +$$ + +Proof. (i) For the quadratic expression in $R_{k}$ we calculate + +$$ +\begin{array}{l} \frac {\Delta_ {k} ^ {\prime}}{4 \gamma^ {2}} := (\eta_ {0} k + \eta_ {1}) ^ {2} - \frac {5 \alpha - 2}{2 (3 \alpha - 2)} k (\eta_ {2} \sqrt {k} + \kappa_ {0}) (\eta_ {3} \sqrt {k} + \kappa_ {1}) \\ = \left(\eta_ {0} ^ {2} - \frac {5 \alpha - 2}{2 (3 \alpha - 2)} \eta_ {2} \eta_ {3}\right) k ^ {2} - \frac {5 \alpha - 2}{2 (3 \alpha - 2)} \left(\eta_ {2} \kappa_ {1} + \kappa_ {0} \eta_ {3}\right) k \sqrt {k} \\ + \left(2 \eta_ {0} \eta_ {1} - \frac {5 \alpha - 2}{2 (3 \alpha - 2)} \kappa_ {0} \kappa_ {1}\right) k + \eta_ {1} ^ {2}. 
\\ \end{array} +$$ + +Since $\left(\eta_0^2 -\frac{5\alpha - 2}{2(3\alpha - 2)}\eta_2\eta_3\right)k^2$ is the dominant term in the above polynomial, it suffices to guarantee that + +$$ +\eta_ {0} ^ {2} - \frac {5 \alpha - 2}{2 (3 \alpha - 2)} \eta_ {2} \eta_ {3} < 0 \tag {47} +$$ + +holds in order to ensure the existence of some integer $k_{\lambda} \geq 1$ such that $\Delta_k' \leq 0$ for every $k \geq k_{\lambda}$ and to obtain from here, due to Lemma A.5 (ii), that $R_k \leq 0$ for every $k \geq k_{\lambda}$ . + +It remains to show that there exists a choice of $\lambda$ for which (47) is true. We set $\xi \coloneqq \lambda + 1 - \alpha \leq 0$ and get + +$$ +\begin{array}{l} \eta_ {0} = \frac {1}{2 (\alpha - 1)} \left(4 (\alpha - 1) (\lambda + 1 - \alpha) - \alpha (\alpha - 2)\right) \\ = \frac {1}{2 (\alpha - 1)} (4 (\alpha - 1) \xi - \alpha (\alpha - 2)), \\ \eta_ {2} \eta_ {3} = - \frac {2}{\alpha - 1} (\alpha - 2) (3 \alpha - 2) (\lambda + 1 - \alpha) = - \frac {2}{\alpha - 1} (\alpha - 2) (3 \alpha - 2) \xi . \\ \end{array} +$$ + +This means that we have to guarantee that there exists a choice for $\xi$ satisfying + +$$ +\begin{array}{l} \eta_ {0} ^ {2} - \frac {5 \alpha - 2}{2 (3 \alpha - 2)} \eta_ {2} \eta_ {3} \\ = \frac {1}{4 (\alpha - 1) ^ {2}} \left(\left(4 (\alpha - 1) \xi - \alpha (\alpha - 2)\right) ^ {2} + 4 (5 \alpha - 2) (\alpha - 1) (\alpha - 2) \xi\right) \\ = \frac {1}{4 (\alpha - 1) ^ {2}} \left(1 6 (\alpha - 1) ^ {2} \xi^ {2} + 4 (\alpha - 1) (\alpha - 2) (3 \alpha - 2) \xi + \alpha^ {2} (\alpha - 2) ^ {2}\right) < 0, \\ \end{array} +$$ + +which is nothing else than + +$$ +1 6 (\alpha - 1) ^ {2} \xi^ {2} + 4 (\alpha - 1) (\alpha - 2) (3 \alpha - 2) \xi + \alpha^ {2} (\alpha - 2) ^ {2} < 0. \tag {48} +$$ + +A direct computation shows that + +$$ +\Delta_ {\xi} := 1 6 (\alpha - 1) ^ {2} (\alpha - 2) ^ {2} ((3 \alpha - 2) ^ {2} - 4 \alpha^ {2}) = 1 6 (\alpha - 1) ^ {2} (\alpha - 2) ^ {3} (5 \alpha - 2) > 0. 
+$$ + +Hence, in order to get (48), we have to choose $\xi$ between the two roots of the quadratic function arising in this formula, in other words + +$$ +\begin{array}{l} \xi_ {1} (\alpha) := \frac {1}{3 2 (\alpha - 1) ^ {2}} \left(- 4 (\alpha - 1) (\alpha - 2) (3 \alpha - 2) - \sqrt {\Delta_ {\xi}}\right) \\ = - \frac {1}{8 (\alpha - 1)} (\alpha - 2) (3 \alpha - 2 + \sqrt {(\alpha - 2) (5 \alpha - 2)}) \\ < \xi = \lambda + 1 - \alpha < \xi_ {2} (\alpha) := \frac {1}{3 2 (\alpha - 1) ^ {2}} \left(- 4 (\alpha - 1) (\alpha - 2) (3 \alpha - 2) + \sqrt {\Delta_ {\xi}}\right) \\ = - \frac {1}{8 (\alpha - 1)} (\alpha - 2) \left(3 \alpha - 2 - \sqrt {(\alpha - 2) (5 \alpha - 2)}\right). \\ \end{array} +$$ + +Obviously $\xi_1(\alpha) < 0$ and from Vieta's formula $\xi_1(\alpha) \cdot \xi_2(\alpha) = \frac{\alpha^2 (\alpha - 2)^2}{16 (\alpha - 1)^2}$ , it follows that we must have $\xi_2(\alpha) < 0$ as well. + +Therefore, going back to $\lambda$ , in order to be sure that $\eta_0^2 - \frac{5\alpha - 2}{2(3\alpha - 2)}\eta_2\eta_3 < 0$ this must be chosen such that + +$$ +\alpha - 1 + \xi_ {1} (\alpha) < \lambda < \alpha - 1 + \xi_ {2} (\alpha). +$$ + +Next we will show that + +$$ +0 < \alpha - 1 - \frac {1}{8 (\alpha - 1)} (\alpha - 2) (3 \alpha - 2) < \frac {3 \alpha}{4} - \frac {1}{2}. \tag {49} +$$ + +Indeed, the inequality on the left-hand side follows immediately, since + +$$ +\begin{array}{l} \alpha - 1 - \frac {1}{8 (\alpha - 1)} (\alpha - 2) (3 \alpha - 2) = \frac {1}{8 (\alpha - 1)} (5 \alpha^ {2} - 8 \alpha + 4) \\ = \frac {1}{8 (\alpha - 1)} \left(\alpha^ {2} + 4 (\alpha - 1) ^ {2}\right) > 0. \\ \end{array} +$$ + +Using this relation, one can notice that the inequality on the right hand side of (49) can be equivalently written as + +$$ +5 \alpha^ {2} - 8 \alpha + 4 < 2 (\alpha - 1) (3 \alpha - 2) \Leftrightarrow 0 < \alpha^ {2} - 2 \alpha = \alpha (\alpha - 2), +$$ + +which is true as $\alpha > 2$ . 
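The polynomial manipulations behind (48), (49) and $\Delta_{\xi}$ are routine but error-prone; they can be confirmed symbolically (a sanity check, assuming sympy is available; not part of the proof):

```python
import sympy as sp

a = sp.symbols("alpha")

# Factorisation behind Delta_xi: (3a - 2)^2 - 4a^2 = (a - 2)(5a - 2).
assert sp.expand((3*a - 2)**2 - 4*a**2 - (a - 2)*(5*a - 2)) == 0

# Left-hand side of (49):
# 8(a - 1)^2 - (a - 2)(3a - 2) = 5a^2 - 8a + 4 = a^2 + 4(a - 1)^2.
assert sp.expand(8*(a - 1)**2 - (a - 2)*(3*a - 2) - (5*a**2 - 8*a + 4)) == 0
assert sp.expand(a**2 + 4*(a - 1)**2 - (5*a**2 - 8*a + 4)) == 0

# Right-hand side of (49): 5a^2 - 8a + 4 < 2(a - 1)(3a - 2)  <=>  0 < a(a - 2).
assert sp.expand(2*(a - 1)*(3*a - 2) - (5*a**2 - 8*a + 4) - a*(a - 2)) == 0
```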
+ +From (49) we immediately deduce that + +$$ +0 < \alpha - 1 + \xi_ {2} (\alpha) \quad \text {and} \quad \alpha - 1 + \xi_ {1} (\alpha) < \frac {3 \alpha}{4} - \frac {1}{2}. +$$ + +This allows us to choose $\underline{\lambda} < \overline{\lambda}$ , where + +$$ +\begin{array}{l} \underline {{\lambda}} (\alpha) := \alpha - 1 + \xi_ {1} (\alpha) \\ = \frac {1}{8 (\alpha - 1)} \alpha^ {2} + \frac {1}{2} (\alpha - 1) - \frac {1}{8 (\alpha - 1)} (\alpha - 2) \sqrt {(\alpha - 2) (5 \alpha - 2)} \\ \end{array} +$$ + +$$ +\begin{array}{l} \bar {\lambda} (\alpha) := \min \left\{\frac {3 \alpha}{4} - \frac {1}{2}, \alpha - 1 + \xi_ {2} (\alpha) \right\} \\ = \min \left\{\frac {3 \alpha}{4} - \frac {1}{2}, \frac {1}{8 (\alpha - 1)} \alpha^ {2} + \frac {1}{2} (\alpha - 1) + \frac {1}{8 (\alpha - 1)} (\alpha - 2) \sqrt {(\alpha - 2) (5 \alpha - 2)} \right\}, \\ \end{array} +$$ + +since + +$$ +\frac {1}{8 (\alpha - 1)} \alpha^ {2} + \frac {1}{2} (\alpha - 1) - \frac {1}{8 (\alpha - 1)} (\alpha - 2) \sqrt {(\alpha - 2) (5 \alpha - 2)} > 0. +$$ + +Indeed, as $(\alpha - 1)\sqrt{\alpha - 1} > (\alpha - 2)\sqrt{\alpha - 2}$ and $4\sqrt{\alpha - 1} > \sqrt{5\alpha - 2}$ we can easily deduce that + +$$ +\alpha^ {2} + 4 (\alpha - 1) ^ {2} > 4 (\alpha - 1) ^ {2} > (\alpha - 2) \sqrt {(\alpha - 2) (5 \alpha - 2)} +$$ + +and the claim follows. + +In conclusion, choosing $\lambda$ to satisfy $\underline{\lambda} (\alpha) < \lambda < \overline{\lambda} (\alpha)$ , we have + +$$ +\eta_ {0} ^ {2} - \frac {5 \alpha - 2}{2 (3 \alpha - 2)} \eta_ {2} \eta_ {3} < 0 +$$ + +and therefore there exists some integer $k_{\lambda} \geq 1$ such that $R_{k} \leq 0$ for every $k \geq k_{\lambda}$ . + +(ii) For every $k\geq 1$ we have + +$$ +\mu_ {k} - \frac {\varepsilon}{2} (k + 1) ^ {2} = \frac {\varepsilon}{2} (k + 1) ^ {2} - \alpha^ {2} (k + 1) \sqrt {k + 1} + (\alpha - 4) (k + 1) - (\alpha - 2), +$$ + +and the conclusion follows since the quadratic term $\frac{\varepsilon}{2} (k + 1)^{2}$ dominates $\alpha^{2} (k + 1) \sqrt{k + 1}$ and the remaining lower-order terms for all sufficiently large $k$ . 
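Although not needed for the proof, the window $(\underline{\lambda}(\alpha), \overline{\lambda}(\alpha))$ can be sanity-checked numerically; `lambda_window` below is a hypothetical helper that simply evaluates the closed-form expressions derived above:

```python
import math

def lambda_window(alpha):
    # Hypothetical helper: evaluates underline_lambda(alpha) and bar_lambda(alpha)
    # via xi_1(alpha), xi_2(alpha) as in the proof of Lemma B.3(i).
    s = math.sqrt((alpha - 2) * (5 * alpha - 2))
    xi1 = -(alpha - 2) * (3 * alpha - 2 + s) / (8 * (alpha - 1))
    xi2 = -(alpha - 2) * (3 * alpha - 2 - s) / (8 * (alpha - 1))
    return alpha - 1 + xi1, min(3 * alpha / 4 - 0.5, alpha - 1 + xi2)

for alpha in (2.5, 3.0, 5.0, 10.0):
    lo, hi = lambda_window(alpha)
    # The window is nonempty and sits inside [0, (3 alpha - 2)/4], as in (44).
    assert 0 <= lo < hi <= (3 * alpha - 2) / 4 + 1e-12
    # Any lambda in the window makes the xi-quadratic of (48) negative.
    xi = (lo + hi) / 2 + 1 - alpha
    q = (16 * (alpha - 1)**2 * xi**2
         + 4 * (alpha - 1) * (alpha - 2) * (3 * alpha - 2) * xi
         + alpha**2 * (alpha - 2)**2)
    assert q < 0
```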
+ +The following proposition plays a key role in proving the convergence rates in Proposition B.5, which will be used to prove Theorem 3.2. + +Proposition B.4. Let $z^{*} \in \Omega$ and $(z_{k})_{k \geq 0}$ , $(w_{k})_{k \geq 0}$ , $(\zeta_{k})_{k \geq 0}$ be the sequences generated by Algorithm 1 and let $(v_{k})_{k \geq 0}$ be the sequence defined by (15). Then the following statements are true: + +(i) the following hold: + +$$ +\sum_ {k \geq 1} \left\langle z _ {k} - z ^ {*}, F \left(z _ {k}\right) + \zeta_ {k} \right\rangle < + \infty , \tag {50a} +$$ + +$$ +\sum_ {k \geq 1} k ^ {2} \| v _ {k + 1} - v _ {k} \| ^ {2} < + \infty , \tag {50b} +$$ + +$$ +\sum_ {k \geq 1} k \| z _ {k + 1} - z _ {k} \| ^ {2} < + \infty , \tag {50c} +$$ + +$$ +\sum_ {k \geq 1} k \| F \left(w _ {k}\right) + \zeta_ {k + 1} \| ^ {2} < + \infty ; \tag {50d} +$$ + +(ii) the sequence $(z_k)_{k\geq 0}$ is bounded and the following hold as $k\to +\infty$ : + +$$ +\left\| z _ {k} - z _ {k - 1} \right\| = \mathcal {O} \left(\frac {1}{k}\right), \quad \left\| \zeta_ {k} + F \left(w _ {k - 1}\right) \right\| = \mathcal {O} \left(\frac {1}{k}\right), \quad \left\| \zeta_ {k} + F \left(z _ {k}\right) \right\| = \mathcal {O} \left(\frac {1}{k}\right), +$$ + +$$ +\langle z _ {k} - z ^ {*}, \zeta_ {k} + F (z _ {k}) \rangle = \mathcal {O} \left(\frac {1}{k}\right), \quad \langle z _ {k} - z ^ {*}, F (z _ {k}) \rangle = \mathcal {O} \left(\frac {1}{k}\right); +$$ + +(iii) there exist $0 \leq \underline{\lambda}(\alpha) < \overline{\lambda}(\alpha) \leq (3\alpha - 2)/4$ such that for every $\underline{\lambda}(\alpha) < \lambda < \overline{\lambda}(\alpha)$ the sequences $(\mathcal{E}_{\lambda,k})_{k \geq 1}$ and $(\mathcal{G}_{\lambda,k})_{k \geq 2}$ converge. + +Proof. According to Lemma B.3 there exist $\underline{\lambda} (\alpha) < \overline{\lambda} (\alpha)$ such that (44) holds. 
We choose $\underline{\lambda} (\alpha) < \lambda < \overline{\lambda} (\alpha)$ and get, according to the same result, an integer $k_{\lambda}\geq 1$ such that for every $k\geq k_{\lambda}$ the inequality (45) holds. In addition, according to Lemma B.3(ii), we get a positive integer $k_{\varepsilon}$ such that (46) holds for every $k\geq k_{\varepsilon}$ . + +This means that for every $k \geq k_1 \coloneqq \max \{k_0, k_\lambda, k_\varepsilon\}$ , where $k_0$ is the positive integer provided by Lemma B.2(i), we have + +$$ +\begin{array}{l} \mathcal {G} _ {\lambda , k + 1} - \mathcal {G} _ {\lambda , k} \\ \leq \frac {(\alpha - 1) (\alpha - 2) \lambda^ {2}}{\varepsilon (k + 1) ^ {2}} \| z _ {k + 1} - z ^ {*} \| ^ {2} - 4 (\alpha - 2) \lambda \gamma \langle z _ {k + 1} - z ^ {*}, \zeta_ {k + 1} + F (z _ {k + 1}) \rangle \\ - \frac {\alpha - 2}{2 (\alpha - 1)} \varepsilon \gamma^ {2} (k + 1) ^ {2} \| v _ {k + 1} - v _ {k} \| ^ {2} + \left[ \left(1 - \sqrt {\frac {5 \alpha - 2}{2 (3 \alpha - 2)}}\right) \eta_ {2} k + \kappa_ {0} \sqrt {k} \right] \| z _ {k + 1} - z _ {k} \| ^ {2} \\ + \left[ \left(1 - \sqrt {\frac {5 \alpha - 2}{2 (3 \alpha - 2)}}\right) \eta_ {3} k + \kappa_ {1} \sqrt {k} \right] 4 \gamma^ {2} \| v _ {k + 1} \| ^ {2}. 
\\ \end{array} +$$ + +Since $\eta_2, \eta_3 < 0$ and $\kappa_0, \kappa_1 \geq 0$ , we can find some $k_2 \geq k_1$ large enough such that for every $k \geq k_2$ we get + +$$ +\begin{array}{l} \mathcal {G} _ {\lambda , k + 1} \leq \mathcal {G} _ {\lambda , k} + \frac {(\alpha - 1) (\alpha - 2) \lambda^ {2}}{\varepsilon (k + 1) ^ {2}} \left\| z _ {k + 1} - z ^ {*} \right\| ^ {2} - 4 (\alpha - 2) \lambda \gamma \left\langle z _ {k + 1} - z ^ {*}, \zeta_ {k + 1} + F (z _ {k + 1}) \right\rangle \\ - \frac {\alpha - 2}{2 (\alpha - 1)} \varepsilon \gamma^ {2} (k + 1) ^ {2} \| v _ {k + 1} - v _ {k} \| ^ {2} + \frac {1}{2} \left(1 - \sqrt {\frac {5 \alpha - 2}{2 (3 \alpha - 2)}}\right) \eta_ {2} k \| z _ {k + 1} - z _ {k} \| ^ {2} \tag {51} \\ + \left(1 - \sqrt {\frac {5 \alpha - 2}{2 (3 \alpha - 2)}}\right) 2 \gamma^ {2} \eta_ {3} k \| v _ {k + 1} \| ^ {2}. \\ \end{array} +$$ + +In view of (35), we get that $\mathcal{G}_{\lambda,k} \geq 0$ for every $k \geq 1$ thus the sequence $(\mathcal{G}_{\lambda,k})_{k \geq 2}$ is bounded from below. 
Moreover, by setting + +$$ +C _ {0} := \frac {1}{2 \varepsilon} (\alpha - 2) \lambda \left(1 - \frac {4 \lambda}{3 \alpha - 2}\right) ^ {- 1} > 0, +$$ + +we assert that + +$$ +\begin{array}{l} \frac {(\alpha - 1) (\alpha - 2) \lambda^ {2}}{\varepsilon (k + 1) ^ {2}} \| z _ {k + 1} - z ^ {*} \| ^ {2} = \frac {C _ {0}}{(k + 1) ^ {2}} \cdot 2 (\alpha - 1) \lambda \left(1 - \frac {4 \lambda}{3 \alpha - 2}\right) \| z _ {k + 1} - z ^ {*} \| ^ {2} \\ \leq \frac {C _ {0}}{(k + 1) ^ {2}} \mathcal {G} _ {\lambda , k + 1}. \\ \end{array} +$$ + +Under these premises, we deduce from (51) that for every $k \geq k_2$ + +$$ +\begin{array}{l} \left(1 - \frac {C _ {0}}{(k + 1) ^ {2}}\right) \mathcal {G} _ {\lambda , k + 1} \leq \mathcal {G} _ {\lambda , k} - 4 (\alpha - 2) \lambda \gamma \langle z _ {k + 1} - z ^ {*}, \zeta_ {k + 1} + F (z _ {k + 1}) \rangle \\ - \frac {\alpha - 2}{2 (\alpha - 1)} \varepsilon \gamma^ {2} (k + 1) ^ {2} \| v _ {k + 1} - v _ {k} \| ^ {2} \tag {52} \\ + \frac {1}{2} \left(1 - \sqrt {\frac {5 \alpha - 2}{2 (3 \alpha - 2)}}\right) \eta_ {2} k \| z _ {k + 1} - z _ {k} \| ^ {2} + \left(1 - \sqrt {\frac {5 \alpha - 2}{2 (3 \alpha - 2)}}\right) 2 \gamma^ {2} \eta_ {3} k \| v _ {k + 1} \| ^ {2}. \\ \end{array} +$$ + +Taking $k_{3} \coloneqq \max \left\{k_{2}, \left\lceil \sqrt{C_{0}} \right\rceil \right\}$ , which guarantees $(k + 1)^{2} > C_{0}$ , we conclude for every $k \geq k_{3}$ that + +$$ +\left(1 - \frac {C _ {0}}{(k + 1) ^ {2}}\right) ^ {- 1} = \frac {(k + 1) ^ {2}}{(k + 1) ^ {2} - C _ {0}} = 1 + \frac {C _ {0}}{(k + 1) ^ {2} - C _ {0}} > 1. 
+$$ + +Hence, for every $k \geq k_3$ , the inequality (52) leads to + +$$ +\begin{array}{l} \mathcal {G} _ {\lambda , k + 1} \leq \left(1 + \frac {C _ {0}}{(k + 1) ^ {2} - C _ {0}}\right) \mathcal {G} _ {\lambda , k} - 4 (\alpha - 2) \lambda \gamma \langle z _ {k + 1} - z ^ {*}, \zeta_ {k + 1} + F (z _ {k + 1}) \rangle \\ - \frac {\alpha - 2}{2 (\alpha - 1)} \varepsilon \gamma^ {2} (k + 1) ^ {2} \| v _ {k + 1} - v _ {k} \| ^ {2} \\ + \frac {1}{2} \left(1 - \sqrt {\frac {5 \alpha - 2}{2 (3 \alpha - 2)}}\right) \eta_ {2} k \| z _ {k + 1} - z _ {k} \| ^ {2} + \left(1 - \sqrt {\frac {5 \alpha - 2}{2 (3 \alpha - 2)}}\right) 2 \gamma^ {2} \eta_ {3} k \| v _ {k + 1} \| ^ {2}, \\ \end{array} +$$ + +which is nothing else than the inequality (11) with + +$$ +\begin{array}{l} b _ {\lambda , k} := 4 (\alpha - 2) \lambda \gamma \left\langle z _ {k + 1} - z ^ {*}, \zeta_ {k + 1} + F (z _ {k + 1}) \right\rangle + \frac {\alpha - 2}{2 (\alpha - 1)} \varepsilon \gamma^ {2} (k + 1) ^ {2} \| v _ {k + 1} - v _ {k} \| ^ {2} \\ - \frac {1}{2} \left(1 - \sqrt {\frac {5 \alpha - 2}{2 (3 \alpha - 2)}}\right) \eta_ {2} k \| z _ {k + 1} - z _ {k} \| ^ {2} - \left(1 - \sqrt {\frac {5 \alpha - 2}{2 (3 \alpha - 2)}}\right) 2 \gamma^ {2} \eta_ {3} k \| v _ {k + 1} \| ^ {2} \\ \geq 0, \\ \end{array} +$$ + +$$ +d _ {\lambda , k} := \frac {C _ {0}}{(k + 1) ^ {2} - C _ {0}} > 0. +$$ + +Using Lemma A.2 we obtain (50) as well as convergence of the sequence $(\mathcal{G}_{\lambda ,k})_{k\geq 1}$ . 
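The recursion just obtained has the form $\mathcal{G}_{\lambda,k+1} \leq (1 + d_{\lambda,k})\mathcal{G}_{\lambda,k} - b_{\lambda,k}$ with $b_{\lambda,k} \geq 0$ and $\sum_k d_{\lambda,k} < +\infty$; the toy simulation below (entirely synthetic sequences, not the actual iterates, and assuming Lemma A.2 is the standard quasi-Fejér-type result) illustrates why such a recursion forces convergence:

```python
# Toy recursion G_{k+1} = (1 + d_k) G_k - b_k with summable d_k and b_k >= 0:
# G_k stays nonnegative, sum(b_k) stays finite, and G_k converges.
G = 5.0
total_b = 0.0
snapshot = None
for k in range(1, 200001):
    d = 1.0 / (k + 1) ** 2          # summable perturbations, like C0/((k+1)^2 - C0)
    b = min(0.5 * G, 1.0 / k ** 2)  # nonnegative decrements, never overshooting
    G = (1 + d) * G - b
    total_b += b
    if k == 199000:
        snapshot = G

assert G >= 0 and total_b < 2.0
assert abs(G - snapshot) < 1e-6     # the tail is essentially constant
```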
+ +Since $(\mathcal{G}_{\lambda ,k})_{k\geq 1}$ converges, it is also bounded from above, which, according to (35), implies that the following estimate holds for every $k\geq k_{3}$ + +$$ +\begin{array}{l} \frac {\alpha - 2}{4 (3 \alpha - 2)} \left\| 4 \lambda (z _ {k} - z ^ {*}) + 2 k (z _ {k} - z _ {k - 1}) + \frac {2 (3 \alpha - 2)}{\alpha - 1} \gamma k v _ {k} \right\| ^ {2} \\ + \frac {(\alpha - 2) ^ {2}}{4 (3 \alpha - 2) (\alpha - 1)} k ^ {2} \| z _ {k} - z _ {k - 1} \| ^ {2} + 2 (\alpha - 1) \lambda \left(1 - \frac {4 \lambda}{3 \alpha - 2}\right) \| z _ {k} - z ^ {*} \| ^ {2} \\ \leq \mathcal {G} _ {\lambda , k} \leq \sup _ {k \geq 1} \mathcal {G} _ {\lambda , k} < + \infty . \\ \end{array} +$$ + +From here we obtain the boundedness of the sequences + +$$ +\left(4 \lambda \left(z _ {k} - z ^ {*}\right) + 2 k \left(z _ {k} - z _ {k - 1}\right) + \frac {2 (3 \alpha - 2)}{\alpha - 1} \gamma k v _ {k}\right) _ {k \geq 1}, +$$ + +$$ +\left(k \left(z _ {k} - z _ {k - 1}\right)\right) _ {k \geq 1} \quad \text {and} \quad \left(z _ {k}\right) _ {k \geq 0}. +$$ + +In particular, for every $k \geq k_3$ we have + +$$ +\begin{array}{l} \left\| 4 \lambda (z _ {k} - z ^ {*}) + 2 k (z _ {k} - z _ {k - 1}) + \frac {2 (3 \alpha - 2)}{\alpha - 1} \gamma k v _ {k} \right\| \leq C _ {1} := 2 \sqrt {\frac {3 \alpha - 2}{\alpha - 2} \sup _ {k \geq 1} \mathcal {G} _ {\lambda , k}} < + \infty , \\ k \| z _ {k} - z _ {k - 1} \| \leq C _ {2} := \frac {2}{\alpha - 2} \sqrt {(3 \alpha - 2) (\alpha - 1) \sup _ {k \geq 1} \mathcal {G} _ {\lambda , k}} < + \infty , \\ \left\| z _ {k} - z ^ {*} \right\| \leq C _ {3} := \sqrt {\frac {1}{2 (\alpha - 1) \lambda} \left(1 - \frac {4 \lambda}{3 \alpha - 2}\right) ^ {- 1} \sup _ {k \geq 1} \mathcal {G} _ {\lambda , k}} < + \infty . 
\tag {53} \\ \end{array} +$$ + +Using the triangle inequality, we deduce from here that for every $k \geq k_3$ + +$$ +\begin{array}{l} \| v _ {k} \| \leq \frac {\alpha - 1}{2 (3 \alpha - 2) \gamma k} \left\| 4 \lambda (z _ {k} - z ^ {*}) + 2 k (z _ {k} - z _ {k - 1}) + \frac {2 (3 \alpha - 2)}{\alpha - 1} \gamma k v _ {k} \right\| \\ + \frac {\alpha - 1}{(3 \alpha - 2) \gamma} \left\| z _ {k} - z _ {k - 1} \right\| + \frac {2 (\alpha - 1) \lambda}{(3 \alpha - 2) \gamma k} \left\| z _ {k} - z ^ {*} \right\| \leq \frac {C _ {4}}{k}, \\ \end{array} +$$ + +where + +$$ +C _ {4} := \frac {\alpha - 1}{2 (3 \alpha - 2) \gamma} \left(C _ {1} + 2 C _ {2} + 4 \bar {\lambda} (\alpha) C _ {3}\right) > 0. +$$ + +The statement (50b) yields + +$$ +\lim _ {k \rightarrow + \infty} k \| v _ {k + 1} - v _ {k} \| = 0 \quad \Rightarrow \quad C _ {5} := \sup _ {k \geq 1} \left\{k \| v _ {k + 1} - v _ {k} \| \right\} < + \infty , \tag {54} +$$ + +which, together with (18) implies that for every $k \geq k_3$ + +$$ +\begin{array}{l} \left\| \zeta_ {k + 1} + F \left(z _ {k + 1}\right) \right\| \leq \left\| \zeta_ {k + 1} + F \left(z _ {k + 1}\right) - v _ {k + 1} \right\| + \left\| v _ {k + 1} \right\| \\ \leq \left\| v _ {k + 1} - v _ {k} \right\| + \left\| v _ {k + 1} \right\| \leq \frac {C _ {6}}{k}, \tag {55} \\ \end{array} +$$ + +where + +$$ +C _ {6} := C _ {4} + C _ {5} > 0. +$$ + +The remaining assertion follows from the fact that $\zeta_k \in N_C(z_k)$ where $z_k \in C$ by definition, the Cauchy-Schwarz inequality and the boundedness of $(z_k)_{k \geq 0}$ , namely, for every $k \geq k_3$ we deduce + +$$ +0 \leq \langle z _ {k} - z ^ {*}, F (z _ {k}) \rangle \leq \langle z _ {k} - z ^ {*}, \zeta_ {k} + F (z _ {k}) \rangle \leq \| z _ {k} - z ^ {*} \| \| \zeta_ {k} + F (z _ {k}) \| \leq \frac {C _ {3} C _ {6}}{k - 1}. 
+$$ + +To complete the proof, we are going to show that in fact + +$$ +\lim _ {k \to + \infty} \mathcal {E} _ {\lambda , k} = \lim _ {k \to + \infty} \mathcal {G} _ {\lambda , k} \in \mathbb {R}. +$$ + +Indeed, we already have seen that + +$$ +\lim _ {k \to + \infty} (k + 1) \| v _ {k + 1} - v _ {k} \| = \lim _ {k \to + \infty} \| v _ {k + 1} \| = 0, +$$ + +which, by the Cauchy-Schwarz inequality and (18) yields + +$$ +\begin{array}{l} 0 \leq \lim _ {k \to + \infty} k ^ {2} | \langle z _ {k} - z _ {k - 1}, F (z _ {k}) - F (w _ {k - 1}) \rangle | \leq C _ {2} \lim _ {k \to + \infty} k \| F (z _ {k}) - F (w _ {k - 1}) \| \\ \leq C _ {2} \lim _ {k \rightarrow + \infty} k \left\| v _ {k} - v _ {k - 1} \right\| = 0. \\ \end{array} +$$ + +From here we obtain the desired statement. + +# B.3. Proofs of the Main Results + +Proof of Theorem 3.2. Let $\underline{\lambda} (\alpha) < \overline{\lambda} (\alpha)$ be the parameters provided by Lemma B.3 such that (44) holds and with the property that for every $\underline{\lambda} (\alpha) < \lambda < \overline{\lambda} (\alpha)$ there exists an integer $k_{\lambda}\geq 1$ such that for every $k\geq k_{\lambda}$ the inequality (45) holds. + +For every $k\geq 1$ we set + +$$ +p _ {k} := \frac {1}{2} (\alpha - 1) \| z _ {k} - z ^ {*} \| ^ {2} + k \left\langle z _ {k} - z ^ {*}, z _ {k} - z _ {k - 1} + 2 \gamma v _ {k} \right\rangle , \tag {56} +$$ + +$$ +q _ {k} := \frac {1}{2} \left\| z _ {k} - z ^ {*} \right\| ^ {2} + 2 \gamma \sum_ {i = 1} ^ {k} \left\langle z _ {i} - z ^ {*}, v _ {i} \right\rangle . 
\tag {57} +$$ + +Then one can see that for every $k \geq 2$ we have + +$$ +q _ {k} - q _ {k - 1} = \left\langle z _ {k} - z ^ {*}, z _ {k} - z _ {k - 1} \right\rangle - \frac {1}{2} \left\| z _ {k} - z _ {k - 1} \right\| ^ {2} + 2 \gamma \left\langle z _ {k} - z ^ {*}, v _ {k} \right\rangle , +$$ + +and thus + +$$ +\left(\alpha - 1\right) q _ {k} + k \left(q _ {k} - q _ {k - 1}\right) = p _ {k} + 2 \left(\alpha - 1\right) \gamma \sum_ {i = 1} ^ {k} \langle z _ {i} - z ^ {*}, v _ {i} \rangle - \frac {k}{2} \left\| z _ {k} - z _ {k - 1} \right\| ^ {2}. +$$ + +From (21) and (20), direct computation shows that for every $k \geq 1$ + +$$ +\begin{array}{l} \mathcal {E} _ {\lambda , k} = \frac {1}{2} \left\| 2 \lambda (z _ {k} - z ^ {*}) + 2 k (z _ {k} - z _ {k - 1}) + \frac {3 \alpha - 2}{\alpha - 1} \gamma k v _ {k} \right\| ^ {2} + 2 \lambda (\alpha - 1 - \lambda) \| z _ {k} - z ^ {*} \| ^ {2} \\ + \frac {2 (\alpha - 2)}{\alpha - 1} \lambda \gamma k \left\langle z _ {k} - z ^ {*}, v _ {k} \right\rangle + \frac {\alpha - 2}{\alpha - 1} \gamma^ {2} k \left(\frac {1}{2 (\alpha - 1)} (3 \alpha - 2) k + \alpha\right) \| v _ {k} \| ^ {2} \tag {58} \\ = 2 \lambda (\alpha - 1) \| z _ {k} - z ^ {*} \| ^ {2} + 4 \lambda k \left\langle z _ {k} - z ^ {*}, z _ {k} - z _ {k - 1} + 2 \gamma v _ {k} \right\rangle + \frac {\alpha - 2}{\alpha - 1} \alpha \gamma^ {2} k \| v _ {k} \| ^ {2} \\ + \frac {k ^ {2}}{2} \left(\left\| 2 (z _ {k} - z _ {k - 1}) + \frac {3 \alpha - 2}{\alpha - 1} \gamma v _ {k} \right\| ^ {2} + \frac {(\alpha - 2) (3 \alpha - 2)}{(\alpha - 1) ^ {2}} \gamma^ {2} \| v _ {k} \| ^ {2}\right). 
\\ \end{array} +$$ + +Therefore, for every $\underline{\lambda} (\alpha) < \lambda_1 < \lambda_2 < \overline{\lambda} (\alpha)$ we can conclude + +$$ +\begin{array}{l} \mathcal {E} _ {\lambda_ {2}, k} - \mathcal {E} _ {\lambda_ {1}, k} = 4 (\lambda_ {2} - \lambda_ {1}) \left(\frac {1}{2} (\alpha - 1) \| z _ {k} - z ^ {*} \| ^ {2} + k \langle z _ {k} - z ^ {*}, z _ {k} - z _ {k - 1} + 2 \gamma v _ {k} \rangle\right) \\ = 4 \left(\lambda_ {2} - \lambda_ {1}\right) p _ {k}. \\ \end{array} +$$ + +Hence, according to Proposition B.4(iii), the limit $\lim_{k\to +\infty}\left(\mathcal{E}_{\lambda_2,k} - \mathcal{E}_{\lambda_1,k}\right)\in \mathbb{R}$ exists, which implies further that the limit + +$$ +\lim _ {k \rightarrow + \infty} p _ {k} \in \mathbb {R} \text { exists}. \tag {59} +$$ + +Further, we observe that for every $k \geq 2$ + +$$ +\begin{array}{l} \sum_ {i = 2} ^ {k} \left| \left\langle z _ {i} - z ^ {*}, F \left(w _ {i - 1}\right) - F \left(z _ {i}\right) \right\rangle \right| \leq \sum_ {i = 2} ^ {k} \| z _ {i} - z ^ {*} \| \| F \left(w _ {i - 1}\right) - F \left(z _ {i}\right) \| \quad (60 \mathrm {a}) \\ \leq \frac {1}{2} \sum_ {i = 2} ^ {k} \frac {1}{i ^ {2}} \| z _ {i} - z ^ {*} \| ^ {2} + \frac {1}{2} \sum_ {i = 2} ^ {k} i ^ {2} \| F (w _ {i - 1}) - F (z _ {i}) \| ^ {2} \\ \leq \frac {1}{2} \sum_ {i = 2} ^ {+ \infty} \frac {1}{i ^ {2}} \| z _ {i} - z ^ {*} \| ^ {2} + \frac {1}{2} \sum_ {i = 2} ^ {+ \infty} i ^ {2} \| F (w _ {i - 1}) - F (z _ {i}) \| ^ {2} < + \infty , \quad (60 \mathrm {b}) \\ \end{array} +$$ + +where (60a) comes from the Cauchy-Schwarz inequality, the first sum in (60b) is finite due to (53), while the second series is convergent because of (18) and (50b). This means the series $\sum_{k\geq 2}\langle z_k - z^*,F(w_{k - 1}) - F(z_k)\rangle$ is absolutely convergent, thus convergent. 
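The identity relating $p_k$ and $q_k$ stated after (57) is purely algebraic, so it can be checked numerically with random vectors (synthetic data standing in for the iterates; a sanity check, not part of the proof):

```python
import numpy as np

rng = np.random.default_rng(0)
d, K, alpha, gamma = 4, 7, 3.5, 0.1
zs = rng.normal(size=(K + 1, d))   # synthetic stand-ins for z_0, ..., z_K
vs = rng.normal(size=(K + 1, d))   # synthetic stand-ins for v_1, ..., v_K (vs[0] unused)
z_star = rng.normal(size=d)

def q(k):
    # q_k = (1/2)||z_k - z*||^2 + 2 gamma sum_{i=1}^k <z_i - z*, v_i>, as in (57)
    s = sum(float(np.dot(zs[i] - z_star, vs[i])) for i in range(1, k + 1))
    return 0.5 * float(np.dot(zs[k] - z_star, zs[k] - z_star)) + 2 * gamma * s

k = K
# p_k = (alpha - 1)/2 ||z_k - z*||^2 + k <z_k - z*, z_k - z_{k-1} + 2 gamma v_k>, as in (56)
p_k = (0.5 * (alpha - 1) * float(np.dot(zs[k] - z_star, zs[k] - z_star))
       + k * float(np.dot(zs[k] - z_star, zs[k] - zs[k - 1] + 2 * gamma * vs[k])))
S = sum(float(np.dot(zs[i] - z_star, vs[i])) for i in range(1, k + 1))

lhs = (alpha - 1) * q(k) + k * (q(k) - q(k - 1))
rhs = (p_k + 2 * (alpha - 1) * gamma * S
       - 0.5 * k * float(np.dot(zs[k] - zs[k - 1], zs[k] - zs[k - 1])))
assert abs(lhs - rhs) < 1e-8
```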

By taking into consideration (50a), it follows from here that the limit

$$
\begin{array}{l} \lim _ {k \rightarrow + \infty} \sum_ {i = 1} ^ {k} \left\langle z _ {i} - z ^ {*}, v _ {i} \right\rangle \\ = \lim _ {k \rightarrow + \infty} \sum_ {i = 1} ^ {k} \left\langle z _ {i} - z ^ {*}, \zeta_ {i} + F (z _ {i}) \right\rangle + \lim _ {k \rightarrow + \infty} \sum_ {i = 1} ^ {k} \left\langle z _ {i} - z ^ {*}, F (w _ {i - 1}) - F (z _ {i}) \right\rangle \in \mathbb {R} \\ \end{array}
$$

exists. In addition, thanks to (50c), we have $\lim_{k\to +\infty}k\left\| z_{k + 1} - z_k\right\|^2 = 0$; consequently, the limit

$$
\lim _ {k \rightarrow + \infty} \left( (\alpha - 1) q _ {k} + k (q _ {k} - q _ {k - 1}) \right) \in \mathbb {R} \text { exists}.
$$

According to Proposition B.4, we have that $(q_k)_{k\geq 1}$ is bounded due to the boundedness of $(z_k)_{k\geq 0}$ and the fact that $\lim_{k\to +\infty}\sum_{i = 1}^{k}\langle z_i - z^*,v_i\rangle \in \mathbb{R}$ exists. Therefore, we can apply Lemma A.1 to guarantee the existence of the limit $\lim_{k\to +\infty}q_k\in \mathbb{R}$. By the definition of $q_{k}$ in (57) and the fact that the sequence $\left(\sum_{i = 1}^{k - 1}\langle z_i - z^*,v_i\rangle\right)_{k\geq 1}$ converges, we conclude that $\lim_{k\to +\infty}\| z_k - z^*\| \in \mathbb{R}$ exists. Hypothesis (i) in the Opial Lemma (see Lemma A.3) is thus fulfilled.

Let $w$ be a cluster point of $(z_k)_{k \geq 0}$, which means that there exists a subsequence $(z_{k_n})_{n \geq 0}$ such that

$$
z _ {k _ {n}} \rightarrow w \quad \text { as } n \rightarrow + \infty .
$$

It follows from Proposition B.4 that

$$
F \left(z _ {k _ {n}}\right) + \zeta_ {k _ {n}} \rightarrow 0 \quad \text { as } n \rightarrow + \infty .
$$

The maximal monotonicity of $F + N_{C}$ implies that $0 \in (N_{C} + F)(w)$, meaning that hypothesis (ii) of Lemma A.3 is also verified. The proof of the convergence of the iterates is therefore completed.
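
For reference, the Opial Lemma invoked here (the appendix's Lemma A.3) is, in its standard discrete form, the following statement; the precise wording of Lemma A.3 may differ slightly. Let $\Omega$ be a nonempty subset of a real Hilbert space $\mathcal{H}$ and $(z_k)_{k \geq 0}$ a sequence in $\mathcal{H}$. If (i) $\lim_{k \to +\infty} \| z_k - z^* \|$ exists for every $z^{*} \in \Omega$, and (ii) every (weak) sequential cluster point of $(z_k)_{k \geq 0}$ belongs to $\Omega$, then $(z_k)_{k \geq 0}$ converges (weakly) to an element of $\Omega$. The two steps of the argument above establish exactly hypotheses (i) and (ii).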

Before finally proving Theorem 3.3, we show convergence rates for various helpful quantities.

Proposition B.5. Let $z^{*} \in \Omega$ and $(z_{k})_{k \geq 0}$ be the sequence generated by Algorithm 1. Then, as $k \to +\infty$, the following hold:

$$
\begin{array}{l} \left\| z _ {k} - z _ {k - 1} \right\| = o \left(\frac {1}{k}\right), \quad \langle F (z _ {k}), z _ {k} - z ^ {*} \rangle = o \left(\frac {1}{k}\right), \quad \langle \zeta_ {k} + F (z _ {k}), z _ {k} - z ^ {*} \rangle = o \left(\frac {1}{k}\right), \\ \| \zeta_ {k} + F (z _ {k}) \| = o \left(\frac {1}{k}\right), \quad \| \zeta_ {k} + F (w _ {k - 1}) \| = o \left(\frac {1}{k}\right). \\ \end{array}
$$

Proof. Let $\underline{\lambda}(\alpha) < \overline{\lambda}(\alpha)$ be the parameters provided by Lemma B.3 such that (44) holds and with the property that for every $\underline{\lambda}(\alpha) < \lambda < \overline{\lambda}(\alpha)$ there exists an integer $k_{\lambda} \geq 1$ such that for every $k \geq k_{\lambda}$ the inequality (45) holds. We fix $\underline{\lambda}(\alpha) < \lambda < \overline{\lambda}(\alpha)$ and recall that, according to Proposition B.4(iii), the sequence $(\mathcal{E}_{\lambda,k})_{k \geq 1}$ converges.

We set for every $k\geq 1$

$$
h _ {k} := \frac {k ^ {2}}{2} \left(\left\| 2 (z _ {k} - z _ {k - 1}) + \frac {3 \alpha - 2}{\alpha - 1} \gamma v _ {k} \right\| ^ {2} + \frac {(\alpha - 2) (3 \alpha - 2)}{(\alpha - 1) ^ {2}} \gamma^ {2} \| v _ {k} \| ^ {2}\right),
$$

and notice that, in view of (58) and (56), we have

$$
\mathcal {E} _ {\lambda , k} = 4 \lambda p _ {k} + \frac {\alpha - 2}{\alpha - 1} \alpha \gamma^ {2} k \| v _ {k} \| ^ {2} + h _ {k}.
$$

Proposition B.4 asserts that

$$
\lim _ {k \to + \infty} k \| v _ {k} \| ^ {2} = 0,
$$

which, together with $\lim_{k\to +\infty}\mathcal{E}_{\lambda ,k}\in \mathbb{R}$ and $\lim_{k\to +\infty}p_k\in \mathbb{R}$ (see also (59)), yields the existence of

$$
\lim _ {k \to + \infty} h _ {k} \in \mathbb {R}.
$$

In addition, (50c) and (50d) in Proposition B.4 guarantee that

$$
\sum_ {k \geq 1} \frac {1}{k} h _ {k} \leq 4 \sum_ {k \geq 1} k \| z _ {k} - z _ {k - 1} \| ^ {2} + \frac {(3 \alpha - 2) (7 \alpha - 6)}{2 (\alpha - 1) ^ {2}} \gamma^ {2} \sum_ {k \geq 1} k \| v _ {k} \| ^ {2} < + \infty .
$$

Consequently, $\lim_{k\to +\infty}h_k = 0$, which yields

$$
\lim _ {k \to + \infty} k \left\| 2 (z _ {k} - z _ {k - 1}) + \frac {3 \alpha - 2}{\alpha - 1} \gamma v _ {k} \right\| = \lim _ {k \to + \infty} k \| v _ {k} \| = 0.
$$

This immediately implies $\lim_{k\to +\infty}k\| z_k - z_{k - 1}\| = 0$. The fact that

$$
\lim _ {k \rightarrow + \infty} k \| \zeta_ {k} + F (z _ {k}) \| = 0
$$

follows from (18), (54) and (55), since

$$
0 \leq \lim _ {k \to + \infty} k \| \zeta_ {k} + F (z _ {k}) \| \leq \lim _ {k \to + \infty} k \| v _ {k} - v _ {k - 1} \| + \lim _ {k \to + \infty} k \| v _ {k} \| = 0.
$$

Finally, using the Cauchy-Schwarz inequality and the fact that $(z_k)_{k\geq 0}$ is bounded, we obtain that $\lim_{k\to +\infty}k\langle z_k - z^*,F(z_k)\rangle = \lim_{k\to +\infty}k\langle z_k - z^*,\zeta_k + F(z_k)\rangle = 0$.

Now we are able to prove the convergence rates in terms of the restricted gap and the natural gap.

Proof of Theorem 3.3.
For every $k \geq 1$, using successively the monotonicity of $F$, the fact that $\zeta_k \in N_C(z_k)$, where $z_k \in C$ by its definition, and the Cauchy-Schwarz inequality, we deduce that for every $u \in C \cap \mathbb{B}(z^*; \delta(z_0))$

$$
\begin{array}{l} \langle F (u), z _ {k} - u \rangle \leq \langle F (z _ {k}), z _ {k} - u \rangle \leq \langle \zeta_ {k} + F (z _ {k}), z _ {k} - u \rangle \\ = \left\langle \zeta_ {k} + F (z _ {k}), z _ {k} - z ^ {*} \right\rangle + \left\langle \zeta_ {k} + F (z _ {k}), z ^ {*} - u \right\rangle \\ \leq \left\langle \zeta_ {k} + F (z _ {k}), z _ {k} - z ^ {*} \right\rangle + \| \zeta_ {k} + F (z _ {k}) \| \| z ^ {*} - u \| \\ \leq \left\langle \zeta_ {k} + F (z _ {k}), z _ {k} - z ^ {*} \right\rangle + \delta (z _ {0}) \| \zeta_ {k} + F (z _ {k}) \|. \\ \end{array}
$$

Therefore, it follows from Proposition B.5 that

$$
\begin{array}{l} \operatorname {Gap} \left(z _ {k}\right) = \max _ {u \in C \cap \mathbb {B} \left(z ^ {*}; \delta \left(z _ {0}\right)\right)} \left\langle F (u), z _ {k} - u \right\rangle \leq \left\langle \zeta_ {k} + F \left(z _ {k}\right), z _ {k} - z ^ {*} \right\rangle + \delta \left(z _ {0}\right) \| \zeta_ {k} + F \left(z _ {k}\right) \| \\ = o \left(\frac {1}{k}\right) \quad \text { as } k \rightarrow + \infty . \\ \end{array}
$$

Concluding, by (14) we obtain

$$
\operatorname {Res} \left(z _ {k}\right) \leq \| \zeta_ {k} + F \left(z _ {k}\right) \| = o \left(\frac {1}{k}\right) \quad \text { as } k \rightarrow + \infty ,
$$

and the proof is complete.

# C. Implementation Details

In this section we report the implementation details of our GAN experiments.

# C.1. Architecture

In Table 3 we describe the architectures that were used in the experiments on CIFAR-10. The models were selected to replicate the set-up of (Miyato et al., 2018; Chavdarova et al., 2021b).

Table 3. ResNet architecture used for the CIFAR-10 experiments.
| Generator (G) |
| --- |
| Input: $z \in \mathbb{R}^{128} \sim \mathcal{N}(0, I)$ |
| Linear 128 → 4,096 |
| G-ResBlock |
| G-ResBlock |
| G-ResBlock |
| Batch Normalisation |
| ReLU |
| conv. (kernel: 3×3, 256 → 3, stride: 1, pad: 1) |
| tanh(·) |

| Discriminator (D) |
| --- |
| Input: $x \in \mathbb{R}^{3 \times 32 \times 32}$ |
| D-ResBlock |
| D-ResBlock |
| D-ResBlock |
| D-ResBlock |
| ReLU |
| Avg. Pool (kernel: 8 × 8) |
| Linear 128 → 1 |
| Spectral Normalisation |
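
Read as a build recipe, the generator column of Table 3 can be sketched in PyTorch roughly as follows. This is our own illustrative reconstruction: the internals of `GResBlock` (2× nearest-neighbour upsampling, two 3×3 convolutions, and an additive skip) are an assumption based on the SN-GAN design of Miyato et al. (2018) that the table replicates, and may differ from the actual implementation.

```python
import torch
import torch.nn as nn


class GResBlock(nn.Module):
    """Assumed generator residual block: upsample x2, two 3x3 convs, skip."""

    def __init__(self, channels=256):
        super().__init__()
        self.bn1 = nn.BatchNorm2d(channels)
        self.bn2 = nn.BatchNorm2d(channels)
        self.conv1 = nn.Conv2d(channels, channels, 3, padding=1)
        self.conv2 = nn.Conv2d(channels, channels, 3, padding=1)
        self.upsample = nn.Upsample(scale_factor=2, mode="nearest")

    def forward(self, x):
        h = self.conv1(self.upsample(torch.relu(self.bn1(x))))
        h = self.conv2(torch.relu(self.bn2(h)))
        return h + self.upsample(x)  # additive skip connection


class Generator(nn.Module):
    """Rows of the generator column in Table 3, top to bottom."""

    def __init__(self):
        super().__init__()
        self.linear = nn.Linear(128, 4096)              # Linear 128 -> 4,096
        self.blocks = nn.Sequential(                    # 3 x G-ResBlock
            GResBlock(), GResBlock(), GResBlock())      # 4x4 -> 32x32
        self.bn = nn.BatchNorm2d(256)                   # Batch Normalisation
        self.conv = nn.Conv2d(256, 3, 3, 1, 1)          # conv 3x3, 256 -> 3

    def forward(self, z):
        h = self.linear(z).view(-1, 256, 4, 4)          # reshape to 256 x 4 x 4
        h = torch.relu(self.bn(self.blocks(h)))
        return torch.tanh(self.conv(h))                 # tanh(.)


z = torch.randn(2, 128)       # z ~ N(0, I) in R^128
img = Generator()(z)          # shape (2, 3, 32, 32)
```

Three upsampling blocks take the reshaped 256×4×4 tensor to 256×32×32, matching the 3×32×32 image produced by the final convolution and tanh.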
+ +# C.2. Hyperparameters + +In Table 4 we list the hyperparameters that were used for fOGDA-VI to obtain the results on CIFAR-10. The hyperparameters for LA-GDA were the same as in (Chavdarova et al., 2021b). + +Table 4. Hyperparameters used for the GAN experiments on CIFAR-10. + +
| fOGDA-VI | |
| --- | --- |
| Batch size | 128 |
| Iterations | 500,000 |
| Adam $\beta_1$ | 0.0 |
| Adam $\beta_2$ | 0.9 |
| Update ratio D/G | 5 |
| Learning rate for discriminator | $1 \times 10^{-4}$ |
| Learning rate for generator | $1 \times 10^{-4}$ |
| fOGDA $\alpha$ | 100 |
| fOGDA $n$ | 1000 |
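
Read as configuration, Table 4 maps onto the fOGDA-VI wrapper of Appendix C.3 roughly as below. The key names are our own, and reading the table's fOGDA $n$ as the wrapper's `increment_iterator_every` argument is an assumption based on that argument's default value of 1000.

```python
# Hypothetical key names; values transcribed from Table 4.
fogda_vi_config = {
    "batch_size": 128,
    "iterations": 500_000,
    "adam_beta1": 0.0,
    "adam_beta2": 0.9,
    "update_ratio_d_over_g": 5,
    "lr_discriminator": 1e-4,
    "lr_generator": 1e-4,
    "fogda_alpha": 100,
    "fogda_increment_iterator_every": 1000,  # Table 4's "fOGDA n" (assumed)
}
```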

# C.3. PyTorch Code

In the following, we report the code of the wrapper for the fOGDA-VI optimiser, written using the PyTorch (Paszke et al., 2019) framework. The wrapper takes an existing optimiser (e.g. Adam), recovers the update it would have applied, and replaces it with the fOGDA extrapolated update.

```python
import torch
from torch.optim import Optimizer


class fOGDA(Optimizer):
    def __init__(self, optimizer, alpha=100, increment_iterator_every=1000):
        print(
            f"Using fOGDA (alpha={alpha}; "
            f"increment iterator every {increment_iterator_every} step(s))."
        )
        self.optimizer = optimizer
        self.defaults = self.optimizer.defaults
        self.param_groups = self.optimizer.param_groups
        self.state = self.optimizer.state

        # fOGDA parameters
        self.alpha = alpha
        self.increment_iterator_every = increment_iterator_every
        self.iteration = 0
        self.k = 2
        self.params_copy = []
        self.old_params_copy = []
        self.updates = []
        self.old_updates = []
        self.old_difference_of_updates = []

    def step(self, closure=None):
        loss = None
        if closure is not None:
            loss = closure()

        no_old_params = len(self.old_params_copy) == 0
        no_old_updates = len(self.old_updates) == 0
        no_old_difference_of_updates = len(self.old_difference_of_updates) == 0

        # initialise (old) parameters
        if len(self.params_copy) > 0:
            raise RuntimeError("Something bad happened here...")
        for group in self.param_groups:
            for p in group["params"]:
                self.params_copy.append(p.data.clone())
                if no_old_params:
                    self.old_params_copy.append(p.data.clone())

        # reverse engineer the update from the wrapped optimizer's step
        self.optimizer.step()
        i = -1
        if len(self.updates) > 0:
            raise RuntimeError("Something bad happened here...")
        for group in self.param_groups:
            for p in group["params"]:
                i += 1
                self.updates.append(self.params_copy[i] - p.data)

        # initialise old updates and difference of updates
        if (not no_old_updates and no_old_difference_of_updates) or (
            not no_old_difference_of_updates and no_old_updates
        ):
            raise RuntimeError("Something bad happened here...")
        if no_old_updates and no_old_difference_of_updates:
            for p in self.updates:
                self.old_updates.append(p.clone())
                self.old_difference_of_updates.append(torch.zeros_like(p))

        # compute fOGDA coefficients
        theta_p = self.alpha / (self.alpha + self.k + 1)
        theta = self.alpha / (self.alpha + self.k)
        theta_m = self.alpha / (self.alpha + self.k - 1)

        # compute new weights with the fOGDA update
        i = -1
        for group in self.param_groups:
            for p in group["params"]:
                i += 1
                (
                    self.old_params_copy[i],
                    self.old_updates[i],
                    self.old_difference_of_updates[i],
                    p.data,
                ) = (
                    self.params_copy[i],
                    self.updates[i],
                    self.updates[i] - self.old_updates[i],
                    self.params_copy[i]
                    + (1 - theta_p)
                    * (self.params_copy[i] - self.old_params_copy[i])
                    - theta_p * self.updates[i]
                    - (2 - theta)
                    * (2 - theta_p)
                    * (self.updates[i] - self.old_updates[i])
                    + (2 - theta_m)
                    * (1 - theta_p)
                    * self.old_difference_of_updates[i],
                )

        self.iteration += 1
        if self.iteration % self.increment_iterator_every == 0:
            # update the iterator k
            self.k += 1

        # free parameters
        self.params_copy = []
        self.updates = []

        return loss
```
\ No newline at end of file diff --git a/afastoptimisticmethodformonotonevariationalinequalities/images.zip b/afastoptimisticmethodformonotonevariationalinequalities/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..3117fbd78767e757f3d380fd9cf5d35182a699f5 --- /dev/null +++ b/afastoptimisticmethodformonotonevariationalinequalities/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid 
sha256:1c331999f2b335e973e68fde7160e95540df05d12742f1b3cec77a0c7ccac759 +size 2303698 diff --git a/afastoptimisticmethodformonotonevariationalinequalities/layout.json b/afastoptimisticmethodformonotonevariationalinequalities/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..cab224ca5a9d2bb1c06527440a76a1c2e0657666 --- /dev/null +++ b/afastoptimisticmethodformonotonevariationalinequalities/layout.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:0432218b13da937feca36daad76463d57f72cbf22cd6af4b628845cd38c6075e +size 1234520 diff --git a/afastwellfoundedapproximationtotheempiricalneuraltangentkernel/c95ae00c-a193-4edc-8ff3-45244ebe4fef_content_list.json b/afastwellfoundedapproximationtotheempiricalneuraltangentkernel/c95ae00c-a193-4edc-8ff3-45244ebe4fef_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..92edf15475fe1dc5cc97823b585d95fbcc7174d2 --- /dev/null +++ b/afastwellfoundedapproximationtotheempiricalneuraltangentkernel/c95ae00c-a193-4edc-8ff3-45244ebe4fef_content_list.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:39680a476245fa87dee51cc10f860e4159b2e3e10ec37ed2f7e5b530e5333e42 +size 169818 diff --git a/afastwellfoundedapproximationtotheempiricalneuraltangentkernel/c95ae00c-a193-4edc-8ff3-45244ebe4fef_model.json b/afastwellfoundedapproximationtotheempiricalneuraltangentkernel/c95ae00c-a193-4edc-8ff3-45244ebe4fef_model.json new file mode 100644 index 0000000000000000000000000000000000000000..5a086f0263cda2bc34c31bf0f831b89aa66eb145 --- /dev/null +++ b/afastwellfoundedapproximationtotheempiricalneuraltangentkernel/c95ae00c-a193-4edc-8ff3-45244ebe4fef_model.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:c387c794d3ad290c33455163f363b58aac75dac70a35cdc07817d19b51425c92 +size 193229 diff --git a/afastwellfoundedapproximationtotheempiricalneuraltangentkernel/c95ae00c-a193-4edc-8ff3-45244ebe4fef_origin.pdf 
b/afastwellfoundedapproximationtotheempiricalneuraltangentkernel/c95ae00c-a193-4edc-8ff3-45244ebe4fef_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..0eda2d1fb81a4a68d71e2b5f2503f138c76902e0 --- /dev/null +++ b/afastwellfoundedapproximationtotheempiricalneuraltangentkernel/c95ae00c-a193-4edc-8ff3-45244ebe4fef_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:6fca3b8d7ad9776657b25af44ad794571d3ac3b8e5c6823a9e5b62edcba31283 +size 1333869 diff --git a/afastwellfoundedapproximationtotheempiricalneuraltangentkernel/full.md b/afastwellfoundedapproximationtotheempiricalneuraltangentkernel/full.md new file mode 100644 index 0000000000000000000000000000000000000000..80d7f1994cef0f679ff3f4a0a97fa3f6aba4ee27 --- /dev/null +++ b/afastwellfoundedapproximationtotheempiricalneuraltangentkernel/full.md @@ -0,0 +1,843 @@ +# A Fast, Well-Founded Approximation to the Empirical Neural Tangent Kernel + +Mohamad Amin Mohamadi1 Wonho Bae1 Danica J. Sutherland1,2 + +# Abstract + +Empirical neural tangent kernels (eNTKs) can provide a good understanding of a given network's representation: they are often far less expensive to compute and applicable more broadly than infinite-width NTKs. For networks with $O$ output units (e.g. an $O$ -class classifier), however, the eNTK on $N$ inputs is of size $NO \times NO$ , taking $\mathcal{O}\left((NO)^2\right)$ memory and up to $\mathcal{O}\left((NO)^3\right)$ computation to use. Most existing applications have therefore used one of a handful of approximations yielding $N \times N$ kernel matrices, saving orders of magnitude of computation, but with limited to no justification. We prove that one such approximation, which we call "sum of logits," converges to the true eNTK at initialization. Our experiments demonstrate the quality of this approximation for various uses across a range of settings. + +# 1. 
Introduction

The pursuit of a theoretical foundation for deep learning has led researchers to uncover interesting connections between neural networks (NNs) and kernel methods. It has long been known that randomly initialized NNs in the infinite width limit are Gaussian processes with the Neural Network Gaussian Process (NNGP) kernel, and training the last layer with gradient flow under squared loss corresponds to the posterior mean (Neal, 1996; Williams, 1996; Hazan & Jaakkola, 2015; Lee et al., 2017; Matthews et al., 2018; Novak et al., 2018; Yang, 2019). More recently, Jacot et al. (2018) built off a line of closely related prior work to show that the same is true with a different kernel, the Neural Tangent Kernel (NTK), if we train all the parameters of the network. Yang (2020); Yang & Littwin (2021) showed this holds not just for fully-connected NNs but universally across architectures, including ResNets and Transformers. Lee et al. (2019) also showed that the dynamics of training wide but finite-width NNs with gradient descent can be approximated by a linear model obtained from the first-order Taylor expansion of that network around its initialization. Furthermore, they experimentally showed that this approximation holds remarkably well even for networks that are not so wide.

1Computer Science Department, University of British Columbia, Vancouver, Canada 2Alberta Machine Intelligence Institute, Edmonton, Canada. Correspondence to: Mohamad Amin Mohamadi , Wonho Bae , Danica J. Sutherland .

Proceedings of the $40^{th}$ International Conference on Machine Learning, Honolulu, Hawaii, USA. PMLR 202, 2023. Copyright 2023 by the author(s).

In addition to theoretical insights, NTKs have had significant impact in diverse practical settings. Arora et al. (2019b) show very strong performance of NTK-based models on a variety of low-data classification and regression tasks.
The condition number of an NN's NTK has been shown to correlate directly with the trainability and generalization capabilities of the NN (Xiao et al., 2018; 2020); thus, Park et al. (2020); Chen et al. (2021) have used this to develop practical algorithms for neural architecture search. Wei et al. (2022); Bachmann et al. (2022) estimate the generalization ability of a specific network, randomly initialized or pretrained on a different dataset, with efficient cross-validation. Zhou et al. (2021) use NTK regression for efficient meta-learning, and Wang et al. (2021); Holzmuller et al. (2022); Mohamadi et al. (2022) use NTKs for active learning.

There has also been significant theoretical insight gained from empirical studies of networks' NTKs. For instance, Fort et al. (2020) use NTKs to study how the loss geometry of the NN evolves under gradient descent. Franceschi et al. (2021) employ NTKs to analyze the behaviour of Generative Adversarial Networks (GANs). Nguyen et al. (2020; 2021) use NTKs for dataset distillation. He et al. (2020); Adlam et al. (2020) use NTKs to predict and analyze the uncertainty of an NN's predictions. Tancik et al. (2020) use NTKs to analyze the behaviour of MLPs in learning high frequency functions, leading to new insights into our understanding of neural radiance fields. We thus believe NTKs will continue to be used in both theoretical and empirical deep learning.

Unfortunately, however, computing the NTK for practical networks is extremely challenging, and usually not computationally feasible.
The "empirical" NTK (eNTK; we discuss the difference from what others term "the NTK" shortly) is + +$$ +\Theta_ {\theta} \left(x _ {1}, x _ {2}\right) = \left[ J _ {\theta} \left(f _ {\theta} \left(x _ {1}\right)\right) \right] \left[ J _ {\theta} \left(f _ {\theta} \left(x _ {2}\right)\right) \right] ^ {\top}, \tag {1} +$$ + +where $J_{\theta}\left(f_{\theta}(x)\right)$ denotes the Jacobian of the function $f$ at a point $x$ with respect to the flattened vector of all its + +![](images/7a8ecbab669dbfdbbf70c7118c701080473ead13304313cede24559e3020501e.jpg) +Figure 1: Wall-clock time to evaluate the eNTK and pNTK for one pair of inputs, across datasets and ResNet depths. + +![](images/087c894627deaaf56dc54b052bcce38c297625d459b81a2d3ea1222fbae687d7.jpg) + +![](images/4ee7b9c43bef7bbd3ad921c8299330214c6abf5bb3d7b65fded0665bb14b937c.jpg) + +parameters, $\theta \in \mathbb{R}^P$ . If $D$ is the input dimension of $f$ and $O$ the number of outputs, we have $J_{\theta}\left(f_{\theta}(x)\right) \in \mathbb{R}^{O \times P}$ and $\Theta_{\theta}(x_1, x_2) \in \mathbb{R}^{O \times O}$ . Thus, computing the eNTK between $N_1$ and $N_2$ data points yields $N_1N_2$ matrices, each of shape $O \times O$ ; we usually arrange this as an $N_1O \times N_2O$ matrix. + +When computing an eNTK on tasks involving large datasets and with multiple output neurons, e.g. in a classification model with $O$ classes, the eNTK quickly becomes impractical regardless of how fast each entry is computed due to its $NO \times NO$ size. The full eNTK of a classification model even on the relatively small CIFAR-10 dataset (Krizhevsky, 2009), stored in double precision, takes over 1.8 terabytes in memory. For practical usage, we need something better. + +This work presents a simple trick for a strong approximation of the eNTK that removes the $O^2$ from the size of the kernel matrix, resulting in a factor of $O^2$ improvement in the memory and up to $O^3$ in computation. 
Since for typical classification datasets $O$ is at least 10 (e.g. CIFAR-10) and potentially 1,000 or more (e.g. ImageNet, Deng et al., 2009), this provides multiple orders of magnitude of savings over the full eNTK. We prove this approximation converges to the original eNTK at a rate of $\mathcal{O}(n^{-1/2})$ for a standard-initialization NN of depth $L$ and width $n$ in each layer, and that the predictions of kernel regression with the approximate kernel do the same. We also conduct diverse experimental investigations to support our theoretical results, across a range of architectures and settings. We hope this approximation further enables researchers to employ NTKs towards theoretical and empirical advances in wide networks.

Infinite NTKs. In the infinite-width limit of appropriately initialized NNs, $\Theta_{\theta}$ converges almost surely at initialization to a particular kernel, and remains constant through training. Algorithms are available to compute this expectation exactly, but they tend to be substantially more expensive than computing (1) directly for all but extremely wide networks. The convergence to this infinite-width regime is slow in practice, and moreover it eliminates some of the interest of the framework: neural architecture search, predicting generalization of a pre-trained representation, and meta-learning are all considerably less interesting when we only consider infinite-width networks that do essentially no feature learning. Thus we focus here only on the "empirical" eNTK as in (1).

# 2. Related Work

Among the numerous recent works that have used eNTKs either to gain insights about various phenomena in deep learning or to propose new algorithms, not many have publicized the computational costs and implementation details of computing eNTKs. Nevertheless, all are in agreement about the expense of such computations (Park et al., 2020; Holzmuller et al., 2022; Fort et al., 2020).

Several recent works have, mostly "quietly," employed various techniques to avoid dealing with the full eNTK matrix; however, to the best of our knowledge, none provide any rigorous justifications. Wei et al. (2022, Section 2.3) point out that if the final layer of an NN is randomly initialized, the expected eNTK can be written as $K_{0} \otimes I_{O}$ for some kernel $K_{0}$, where $I_{O}$ is the $O \times O$ identity matrix and $\otimes$ is the Kronecker product. Thus, they use the approximation in which they only compute the eNTK with respect to one of the logits of the NN. Although their approach to approximating the eNTK is similar to ours, they don't provide any rigorous bounds or empirical study of how closely the actual eNTK is approximated by its expectation in this regard. Wang et al. (2021) employ the same "single-logit" strategy, though they only mention the infinite-width limit as a motivation supporting their trick. Despite these claims, we will see in our experiments that the eNTK is generally not diagonal. We will, however, prove upper bounds on the distance of our approximation to the eNTK, and provide experimental support that this approximation captures the behaviour of the eNTK even when the NN's weights are not at initialization. Park et al. (2020) and Chen et al. (2021) also seem to use a form of "single-logit" approximation to the eNTK, without explicitly mentioning it. Lee et al. (2019), by contrast, do use the full eNTK, and hence never compute the kernel on more than 256 datapoints.

Novak et al. (2022) recently performed an in-depth analysis of the computational and memory complexity required for computing the eNTK, and proposed two new approaches (depending on the NN architecture) to reduce the time complexity, but not the memory burden, of computing the eNTK over explicitly implementing (1). Our approaches are complementary; in fact, we use their "structured derivatives" method to help compute our approximation.

# 3. Pseudo-NTK

We define the pseudo-NTK (pNTK), which we denote as $\hat{\Theta}_{\theta}(x_1,x_2)$, as

$$
\underbrace {\left[ \nabla_ {\theta} \frac {1}{\sqrt {O}} \sum_ {i = 1} ^ {O} f _ {\theta} ^ {(i)} \left(x _ {1}\right) \right]} _ {1 \times P} \underbrace {\left[ \nabla_ {\theta} \frac {1}{\sqrt {O}} \sum_ {i = 1} ^ {O} f _ {\theta} ^ {(i)} \left(x _ {2}\right) \right] ^ {\top}} _ {P \times 1}, \tag {2}
$$

where $f_{\theta}^{(i)}(x)$ denotes the $i$-th output of $f_{\theta}$ on the input $x$. While the eNTK is a matrix-valued kernel for each pair of inputs, the pNTK is a traditional scalar-valued kernel.

Some recent work (Arora et al., 2019a; Yang, 2020; Wei et al., 2022; Wang et al., 2021) has pointed out that in the infinite-width limit $\lim_{n\to \infty}\Theta (x_1,x_2)$, the NTK becomes a constant-diagonal matrix, where the class-class component becomes the identity. Thus, one can avoid computing the off-diagonal entries of the infinite-width NTK of each pair by using $\Theta_{\theta}(x_1,x_2) = \hat{\Theta}_{\theta}(x_1,x_2)\otimes I_O$, giving a drastic $\mathcal{O}(O^2)$ decrease in time and memory complexity.

Practitioners have accordingly used the same approach in computing the eNTK of a finite-width network, but with little to no further justification. We see in our experiments that for finite-width networks, the NTK is not diagonal. In fact, we show that for most practical networks, it is very far from being diagonal, casting doubt on the validity of arguments justifying the approximation with asymptotic diagonality. We justify this category of approximation with theoretical bounds on the difference of the true NTK from the approximation (2), which we also call "sum of logits."

Before our formal results and experimental evaluation, we give some intuition.
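
As a concrete, purely illustrative sketch of definitions (1) and (2), the following forms per-logit Jacobians of a toy two-layer ReLU network by hand and checks the identity that follows immediately from (2): each scalar pNTK entry equals the mean of the corresponding $O \times O$ eNTK block, $\hat{\Theta}_{\theta}(x_1, x_2) = \frac{1}{O}\sum_{i,j}\left[\Theta_{\theta}(x_1, x_2)\right]_{ij}$. The network and sizes here are arbitrary choices of ours, not the paper's experimental set-up.

```python
import numpy as np

rng = np.random.default_rng(0)
D, n, O = 5, 64, 10                    # input dim, width, number of logits

# He ("fan-in") initialisation of a toy 2-layer ReLU net: f(x) = W2 relu(W1 x)
W1 = rng.normal(0.0, np.sqrt(2.0 / D), size=(n, D))
W2 = rng.normal(0.0, np.sqrt(2.0 / n), size=(O, n))


def jacobian(x):
    """Per-logit Jacobian J_theta(f_theta(x)) of shape (O, P), by hand."""
    h = W1 @ x
    a = np.maximum(h, 0.0)             # hidden activations
    mask = (h > 0).astype(float)       # relu derivative
    rows = []
    for i in range(O):
        d_W1 = np.outer(W2[i] * mask, x).ravel()   # d f_i / d W1
        d_W2 = np.zeros((O, n))
        d_W2[i] = a                                # d f_i / d W2
        rows.append(np.concatenate([d_W1, d_W2.ravel()]))
    return np.stack(rows)


x1, x2 = rng.normal(size=D), rng.normal(size=D)
J1, J2 = jacobian(x1), jacobian(x2)

entk = J1 @ J2.T                       # eq. (1): one (O, O) block of the eNTK
g1 = J1.sum(axis=0) / np.sqrt(O)       # eq. (2): gradient of sum of logits
g2 = J2.sum(axis=0) / np.sqrt(O)
pntk = float(g1 @ g2)                  # a single scalar per input pair

print(abs(pntk - entk.sum() / O))      # identical up to float rounding
```

On $N$ inputs this yields an $N \times N$ matrix in place of the $NO \times NO$ eNTK, which is where the $\mathcal{O}(O^2)$ memory saving comes from.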
First, suppose $f_{\theta}^{(i)}(x) = \phi (x)\cdot v_i$, so that $v_{i}\in \mathbb{R}^{n_{L - 1}}$ is the $i$-th row of a linear read-out layer; then $\frac{1}{\sqrt{O}}\sum_{i = 1}^{O}f_{\theta}^{(i)}(x) = \phi (x)\cdot \left[\frac{1}{\sqrt{O}}\sum_{i = 1}^{O}v_i\right]$. If the $v_{i}\sim \mathcal{N}(0,\sigma^{2}I_{n_{L - 1}})$ are independent, $\frac{1}{\sqrt{O}}\sum_{i = 1}^{O}v_{i}$ has the same normal distribution as, say, $v_{1}$. Thus, at initialization, our sum-of-logits approximation agrees in distribution with the first-logit approximation. Our proof uses the sum-of-logits form, though, and we believe it may be more sensible for networks that are not at random initialization.

Calling this vector (whether the first logit or sum of logits) $v$, we can think of (2) as the NTK of a model with a single scalar output as a function of $\phi$, whose last layer has weights $v$. When we linearize a network with that kernel for an $O$-class classification problem, getting the formula (5) discussed in Section 4.3, we end up effectively using a one-vs-rest classifier scheme. Thus, we can think of the pseudo-NTK as approximating the process of training $O$ one-vs-rest classifiers, rather than a single $O$-way classifier.

# 4. Approximation Quality of Pseudo-NTK

We will now study various aspects of the approximation of (2) to (1), both in theory and empirically. Our experiments compare different characteristics of the pNTK and eNTK, both at initialization and throughout training. We evaluate four widely-used architectures: FCN (a fully-connected network of depth 3, as in Lee et al., 2019; 2020), ConvNet (a fully-convolutional network of depth 8, as in Arora et al., 2019a;b; Lee et al., 2020), ResNet18 (He et al., 2016), and WideResNet-16-k (Zagoruyko & Komodakis, 2016). We evaluate each architecture at different widths, as mentioned in the plot legends: we show exact widths for FCN, while for others we show a widening factor.
For consistency with most other recent papers studying NTKs and properties of NNs in general, we focus on data from CIFAR-10 (Krizhevsky, 2009). Each experiment is repeated using three seeds; means and corresponding error bars are also shown, except when they interfered with clear interpretation of the plots. All models are trained for 200 epochs, using stochastic gradient descent (SGD), on 32GB NVIDIA V100 GPUs. More details on models and optimization are provided in Appendix A. The measured statistics for each experiment are reported after 0, 50, 100, 150, and 200 epochs.

A Note On Parameterization. In order to be maximally applicable to practical implementations, both our experiments and our theoretical results are based on standard parameterization ("fan-in" variance). Although most related work uses the so-called NTK parameterization ("fan-out" variance), this is rarely used in practice while training NNs, mostly due to the poor generalization results achieved in comparison to training with standard parameterization (Park et al., 2019, Section I). We encountered similarly poor behaviour when training NNs with NTK parameterization, but note that our theorems could also be adapted to the fan-out case.

# 4.1. pNTK Converges to eNTK as Width Grows

The first crucial thing to verify is whether the pNTK kernel matrix approximates the true eNTK as a whole. We study this first in terms of Frobenius norm.

Theorem 4.1 (Informal). Let $f_{\theta} : \mathbb{R}^{D} \to \mathbb{R}^{O}$ be a fully-connected network with layers of width $n$ whose parameters are initialized as in He et al. (2015), with ReLU-type activations. Let $\hat{\Theta}_{\theta}(x_1, x_2)$ be the pNTK of $f_{\theta}$ as in (2) and $\Theta_{\theta}(x_1, x_2)$ the eNTK as in (1) for a fixed pair of inputs $x_1, x_2$.

![](images/a8d0a4e611fa0676bfa22078464720c622e7c35d0b91b9fc4ba3d246e0190d4c.jpg)
Figure 2: Comparing the magnitude of sum of on-diagonal and off-diagonal elements of $\Theta_{\theta}(\mathcal{D},\mathcal{D})$ at initialization and throughout training, based on $\mathcal{D}$ being 1000 random points from CIFAR-10. The reported numbers are the average of $1000\times 1000$ matrices, each having a shape of $10\times 10$. The same subset has then been used to train the NN using SGD. As the NN's width grows, the eNTK converges to being diagonal at initialization among all different architectures.

![](images/6d03e1504bd11a51248a2ca308545ea4accb340e818684ddb39c79d4eeac59cd.jpg)

![](images/cec7568138eafcc58b7a749ba124db7314fa38b39e3b33756ba761a628c4ea74.jpg)

![](images/ed0903ed125332aad596a2d7c92e5e184d94a0b7589cb2ddb53dcfb10a2df92e.jpg)

![](images/791940e5f53e7cb9e8283d2fce2b13617a43e54661bfe93dc767b2fbbf14cebc.jpg)
Figure 3: Evaluating the relative difference of Frobenius norm of $\Theta_{\theta}(\mathcal{D},\mathcal{D})$ and $\hat{\Theta}_{\theta}(\mathcal{D},\mathcal{D})\otimes I_O$ at initialization and throughout training, based on $\mathcal{D}$ being 1000 random points from CIFAR-10. Wider nets have more similar $\| \Theta_{\theta}\|_{F}$ and $\| \hat{\Theta}_{\theta}\otimes I_O\| _F$ at initialization.

![](images/6d20c2465b5083aa892ae25da179a19dd5646acfca29f1c58788f52599f2b116.jpg)

![](images/6fadbca655fcdef4981216712c58fd03ac9f63375ae6f550aafaa0472c4c8cda.jpg)

![](images/3490c362bddf4e3c16a79b98eeccf67b7704e779904a689d975a8dcb10247f28.jpg)

With high probability over the initialization,

$$
\frac {\left\| \hat {\Theta} _ {\theta} \left(x _ {1} , x _ {2}\right) \otimes I _ {O} - \Theta_ {\theta} \left(x _ {1} , x _ {2}\right) \right\| _ {F}}{\left\| \Theta_ {\theta} \left(x _ {1} , x _ {2}\right) \right\| _ {F}} \in \mathcal {O} \left(n ^ {- \frac {1}{2}}\right). \tag {3}
$$

Remark 4.2.
All of the results in the paper can be straightforwardly extended to networks with different widths, as long as consecutive layers' widths satisfy $n_{l+1} = \Theta(n_l)$ . Moreover, the results can be made architecturally universal with the techniques of Yang (2020); Yang & Littwin (2021).
+
+Theorem 4.1 provides the first upper bound on the convergence rate of the pNTK towards the eNTK. A formal statement of Theorem 4.1 and its proof are in Appendix B.
+
+Remark 4.3. Based on the provided proof, it is straightforward to see that the ratio of information between the off-diagonal and on-diagonal elements of the eNTK matrix converges to zero at a rate of $\mathcal{O}(n^{-\frac{1}{2}})$ with high probability over random initialization, as depicted in Figure 2.
+
+Figure 2 shows that as the width grows, the sum of off-diagonal elements of $\Theta_{\theta}(x_1,x_2)$ becomes small compared to the diagonal. Furthermore, Figure 3 provides experimental support that as width grows, $\hat{\Theta}_{\theta}\otimes I_O$ converges to $\Theta_{\theta}$ in terms of relative Frobenius norm.
+
+Theorem 4.1 only applies to epoch zero of these figures, as it assumes networks with weights at initialization. As can be seen in the figures, these results do not necessarily apply to NNs away from initialization (i.e., after a few epochs of training). This naturally gives rise to the question: can the pNTK be used to analyze and represent NNs whose parameters are far from initialization? We will now take various experimental approaches towards studying this question.
+
+# 4.2. Largest Eigenvalue Converges as Width Grows
+
+As discussed before, the conditioning of a network's eNTK has been shown to be closely related to properties of the network such as trainability and generalization risk (Xiao et al., 2018; 2020; Wei et al., 2022). Thus, we would like to know how well the pNTK's eigenspectrum approximates that of the eNTK.
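Both the Frobenius-norm convergence of Theorem 4.1 and this eigenspectrum question can be sanity-checked on a toy model. The NumPy sketch below is our own illustration, not the paper's code: it assumes a two-layer ReLU network with He fan-in initialization, for which the layer-wise Jacobians have a simple closed form, and compares the eNTK block for one input pair against the sum-of-logits pNTK. (The toy model's exact rate need not match the $n^{-1/2}$ of the deep-network theorems; the point is that both gaps shrink with width.)

```python
import numpy as np

def entk_pair(x1, x2, W1, W2):
    """eNTK O x O block of f(x) = W2 @ relu(W1 @ x) for one input pair,
    assembled from the closed-form layer-wise Jacobians."""
    h1, h2 = W1 @ x1, W1 @ x2
    d = (h1 > 0) & (h2 > 0)                       # relu'(h1) * relu'(h2) mask
    hidden = (x1 @ x2) * (W2[:, d] @ W2[:, d].T)  # first-layer contribution
    readout = (np.maximum(h1, 0) @ np.maximum(h2, 0)) * np.eye(W2.shape[0])
    return hidden + readout

def rel_diffs(n, D=16, O=10, seed=0):
    """Relative Frobenius-norm and top-eigenvalue gaps between eNTK and pNTK."""
    rng = np.random.default_rng(seed)
    x1, x2 = rng.standard_normal(D), rng.standard_normal(D)
    W1 = rng.standard_normal((n, D)) * np.sqrt(2.0 / D)  # He fan-in init
    W2 = rng.standard_normal((O, n)) * np.sqrt(2.0 / n)
    theta = entk_pair(x1, x2, W1, W2)
    v = np.ones(O) / np.sqrt(O)                   # sum-of-logits read-out vector
    pntk = v @ theta @ v                          # pNTK = v^T Theta v (a scalar here)
    frob = np.linalg.norm(pntk * np.eye(O) - theta) / np.linalg.norm(theta)
    lam = np.linalg.eigvalsh(theta)[-1]           # largest eigenvalue of the eNTK block
    eig = abs(pntk - lam) / lam                   # lambda_max(pntk * I_O) is just pntk
    return frob, eig

for n in [64, 1024, 16384]:
    print(n, rel_diffs(n))                        # both gaps shrink as width grows
```

Averaging over a few seeds makes the width trend clearly visible despite the randomness of any single initialization.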
The following theorem gives a bound on the rate of convergence between the maximum eigenvalues of the two kernels. + +Theorem 4.4 (Informal). Let $f_{\theta} : \mathbb{R}^{D} \to \mathbb{R}^{O}$ be a fully-connected network with layers of width $n$ whose parameters are initialized according to He et al. (2015) initialization, with ReLU-type activations. Let $\hat{\Theta}_{\theta}(x_1, x_2)$ be the corresponding pNTK of $f_{\theta}$ as in (2) and $\Theta_{\theta}(x_1, x_2)$ the corresponding eNTK as in (1) for a fixed pair of inputs $x_1, x_2$ . With high probability over the initialization, + +$$ +\frac {\lambda_ {\operatorname* {m a x}} (\hat {\Theta} _ {\theta} (x _ {1} , x _ {2}) \otimes I _ {O}) - \lambda_ {\operatorname* {m a x}} (\Theta_ {\theta} (x _ {1} , x _ {2}))}{\lambda_ {\operatorname* {m a x}} (\Theta_ {\theta} (x _ {1} , x _ {2}))} \in \mathcal {O} (n ^ {- \frac {1}{2}}). +$$ + +Theorem 4.4 bounds the difference between the maxi + +![](images/b81d3ef8e1057881eb3b2a73c48ac67b158a23be213f5849e981af1565b3ffca.jpg) +Figure 4: Evaluating the relative difference of $\lambda_{\mathrm{max}}$ of $\Theta_{\theta}(\mathcal{D},\mathcal{D})$ and $\hat{\Theta}_{\theta}(\mathcal{D},\mathcal{D})$ at initialization and throughout training, based on $\mathcal{D}$ being 1000 random points from CIFAR-10. Wider nets have more similar $\lambda_{\mathrm{max}}(\Theta_{\theta}(\mathcal{D},\mathcal{D}))$ and $\lambda_{\mathrm{max}}(\hat{\Theta}_{\theta}(\mathcal{D},\mathcal{D}))$ . 
+ +![](images/ff235f9630097dcc2fce01cf42702fb9f2d539cff8732334c419d08fd5313f2d.jpg) + +![](images/aa5c9d5ec475f83753c61805684ad029515d71bfb95612c2f52bd2f56020754f.jpg) + +![](images/37218d3d3488995cf6c487f381be96cf3141ffbcf9912d197d48341f00b56d13.jpg) + +![](images/7b0cfd5e704b112810b06f2057589756f1bd3b8218de5505a0dc1681bb2e6aab.jpg) +Figure 5: Evaluating the relative difference of $\lambda_{\mathrm{min}}$ of $\Theta_{\theta}(\mathcal{D},\mathcal{D})$ and $\hat{\Theta}_{\theta}(\mathcal{D},\mathcal{D})$ based on $\mathcal{D}$ being 1000 random points from CIFAR-10. Wider nets have more similar $\lambda_{\mathrm{min}}(\Theta_{\theta}(\mathcal{D},\mathcal{D}))$ and $\lambda_{\mathrm{min}}(\hat{\Theta}_{\theta}(\mathcal{D},\mathcal{D}))$ . Note, though, the extremely large values reported for ConvNet; as observed by Lee et al. (2020) and Xiao et al. (2020), it is ill-conditioned and $\lambda_{\mathrm{min}}(\Theta_{\theta}(\mathcal{D},\mathcal{D})) \to 0$ while $\lambda_{\mathrm{min}}(\hat{\Theta}_{\theta}(\mathcal{D},\mathcal{D})) > 0.001$ , causing the huge discrepancy. More details in Appendix C. + +![](images/1c6cd0f02eeb48ef3a377f4647b5cc83e8a16ff259dc2055a1c8598594d095eb.jpg) + +![](images/a678cd279fde05413e27f1e5212581c3a67475acbd74ebb1a4d91c7d4b2a9a4d.jpg) + +![](images/c111b99eaf865433a4fbc6310bcc0ac30ec9a25ee3310228d9e54719b60d41bd.jpg) + +![](images/3e5a477c69d18155b488b85594b694e1ff8e9836c330b669844767cdfc919913.jpg) +Figure 6: The relative difference in condition number, $\kappa(K) = \lambda_{\max}(K) / \lambda_{\min}(K)$ , decreases for wider nets. The strange ConvNet plot is due to the issue in Figure 5; more details in Appendix C. 
+
+![](images/12d33d4eb52adc55ebb0630045abb314adf919395462c2b4b76e30779260a581.jpg)
+
+![](images/e8b67c1f045fe71bf88696ae8fb3676af8126e7c6c55f8740397651961da29c1.jpg)
+
+![](images/ae85043a0bc7ccbd20b3a5c84baf58cd6e660d025b8d3e48732a2a731c49de62.jpg)
+
+mum eigenvalue of the pNTK and the maximum eigenvalue of the eNTK based on the NN's width, for networks at initialization. A formal statement of Theorem 4.4 and its proof are given in Appendix C. Figure 4 also supports this trend experimentally.
+
+Figure 5 shows a similar trend for the minimum eigenvalues, although we have not found a proof of this convergence. This suggests that the condition number $\kappa = \lambda_{\mathrm{max}} / \lambda_{\mathrm{min}}$ should become similar as width grows; this is also supported by the results in Figure 6.
+
+Interestingly, the difference between the maximum and minimum eigenvalues and the condition numbers of the pNTK and eNTK does not necessarily behave monotonically as training goes on. Observing the exact values of $\lambda_{\mathrm{min}}$ , $\lambda_{\mathrm{max}}$ , and $\kappa$ for different architectures and widths, at initialization and throughout training, reveals that in the ConvNet, WideResNet and ResNet18 architectures, $\lambda_{\mathrm{min}}$ is close to zero at initialization but grows during training; the inverse phenomenon is observed with FCNs. Further investigation of these statistics might reveal interesting insights about the behaviour of NNs trained with SGD and the connections between the eNTK and the trainability of the architecture.
+
+# 4.3. Kernel Regression Using pNTK vs. eNTK
+
+Lee et al. (2019) proved that as a finite NN's width grows, its training dynamics can be well approximated using the first-order Taylor expansion of that NN around its initialization (a linearized neural network).
Informally, they showed that when $f$ is wide enough and trained on $\mathcal{D}$ with a suitably + +![](images/59f8f13c23c53c998238a5b8b7c7ef64f7c259bf58df180f4116a43e757baac0.jpg) +Figure 7: The relative difference of kernel regression outputs, (4) and (5), when training on $|\mathcal{D}| = 1000$ random CIFAR-10 points and testing on $|\mathcal{X}| = 500$ . For wider NNs, the relative difference in $\hat{f}^{lin}(\mathcal{X})$ and $f^{lin}(\mathcal{X})$ decreases at initialization. Surprisingly, the difference between these two continues to quickly vanish while training the network. + +![](images/56364704eb7b1c4aa15ecd6ae676919f98db58f7fabf9114ab9b122a60636b57.jpg) + +![](images/61c195428756e5ecc76e95c51c67b0a4cf6f6c4888ee4729c6ddd2633c577883.jpg) + +![](images/fe4363ecbaebd4642b544d3e09ec4cad8d5768341d3dc10f46f62a8764817753.jpg) + +![](images/536d0f83b5bc79df234fcad176ea944d0188600bdf9777609f0f91c07c3e439c.jpg) +Figure 8: Using pNTK in kernel regression (as in Figure 7) almost always achieves a higher test accuracy than using eNTK. Wider NNs and trained nets have more similar prediction accuracies of $\hat{f}^{lin}$ and $f^{lin}$ at initialization. Again, the difference between these two continues to vanish throughout the training process using SGD. 
+
+![](images/5fcab946039f57e98b412538751f74bd6ee9dfff9b92eb9ea63ae0b39415dd1a.jpg)
+
+![](images/11f833105e5121e53af0b4645be7ef7a38a0bea9a734b1786c38671414b01f06.jpg)
+
+![](images/52a2aaa8c2750548a6861d7dd2e087f795ccf048a2458dd67423e14fe568d733.jpg)
+
+small learning rate, its predictions on $x$ can be approximated by those of the linearized network $f^{lin}$ given by
+
+$$
+\underbrace {f _ {0} (x)} _ {O \times 1} + \underbrace {\Theta_ {0} (x , \mathcal {D})} _ {O \times N O} \underbrace {\Theta_ {0} (\mathcal {D} , \mathcal {D}) ^ {- 1}} _ {N O \times N O} \underbrace {(\mathcal {Y} _ {\mathcal {D}} - f _ {0} (\mathcal {D}))} _ {N O \times 1}, \tag {4}
+$$
+
+where $\mathcal{Y}_{\mathcal{D}}$ is the matrix of one-hot labels for the training points $\mathcal{D}$ , and $\Theta_0$ is the eNTK of $f$ at initialization $f_0$ . This is simply kernel regression on the training data $\mathcal{D}$ using the kernel $\Theta_0$ and prior mean $f_0$ . Wei et al. (2022) use the same kernel in a generalized cross-validation estimator (Craven & Wahba, 1978) to predict the generalization risk of the NN. As discussed before, using the eNTK in these applications is practically infeasible, due to the huge time and memory complexity of the kernel, but we show the pNTK approximates $f^{lin}(x)$ with much improved time and memory complexity.
+
+Theorem 4.5 (Informal). Let $f_{\theta} : \mathbb{R}^{D} \to \mathbb{R}^{O}$ be a fully-connected network with layers of width $n$ whose parameters are initialized as in He et al. (2015), with ReLU-type activations. Let $\hat{\Theta}_{\theta}(x_1, x_2)$ be the corresponding pNTK of $f_{\theta}$ as in (2) and $\Theta_{\theta}(x_1, x_2)$ the corresponding eNTK as in (1) for a fixed pair of inputs $x_1, x_2$ .
Define $\hat{f}^{\text{lin}}(x)$ as

+
+$$
+\underbrace {f _ {0} (x)} _ {1 \times O} + \underbrace {\hat {\Theta} _ {\theta} (x , \mathcal {D})} _ {1 \times N} \underbrace {\hat {\Theta} _ {\theta} (\mathcal {D} , \mathcal {D}) ^ {- 1}} _ {N \times N} \underbrace {\left(\mathcal {Y} _ {\mathcal {D}} - f _ {0} (\mathcal {D})\right)} _ {N \times O}. \tag {5}
+$$
+
+After proper reshaping, with high probability over random initialization,
+
+$$
+\left\| \hat {f} ^ {\text{lin}} (x) - f ^ {\text{lin}} (x) \right\| _ {F} \in \mathcal {O} \left(n ^ {- \frac {1}{2}}\right). \tag {6}
+$$
+
+A formal statement is given and proved in Appendix D.
+
+Figure 7 also supports that as width grows, the predictions of kernel regression using $\hat{\Theta}_{\theta}$ converge to those obtained using $\Theta_{\theta}$ , while requiring orders of magnitude less memory and time to compute. Figure 8 shows similar results for the difference in prediction accuracies achieved by kernel regression with the $\hat{\Theta}_{\theta}$ and $\Theta_{\theta}$ kernels. Appendix D also shows further analysis of how well the linearized network predicts the final accuracy of the trained model for each architecture and width pair. Although $\| \hat{f}^{lin}(x) - f^{lin}(x)\|_F$ decreases with the width of the network in Figure 7 at initialization, this does not necessarily translate to monotonic behaviour in prediction accuracy, a non-smooth function of the vector of predictions; we do see that the expected pattern more or less holds, however.
+
+A surprising outcome depicted in Figures 7 and 8 is that while training the model's parameters, the predictions of $\hat{f}^{lin}$ and $f^{lin}$ converge very quickly. This is particularly intriguing, as it contrasts with the results depicted in Figures 2 to 4 and 6.
In other words, although the kernels $\Theta_{\theta}$ and $\hat{\Theta}_{\theta} \otimes I_O$ seem to be diverging in Frobenius norm, eigenspectrum, and so on, kernel regression using those two kernels converges quickly, so that after 50 epochs the difference in predictions almost totally vanishes. We believe further investigation of why this phenomenon is observed could lead to new interesting insights about the training dynamics of NNs. + +![](images/58a1e77091e746cd618842765eff81dd2fb9645006adaa82d371eb41ae65f3e4.jpg) +Figure 9: Evaluating the test accuracy of kernel regression predictions using pNTK as in (5) on the full CIFAR-10 dataset. As the NN's width grows, the test accuracy of $\hat{f}^{lin}$ also improves, but eventually saturates with the growing width. Using trained weights in computation of pNTK results in improved test accuracy of $\hat{f}^{lin}$ . + +![](images/bd124e670f37d86d1aa480244db89609fae2d4ae41ac2ef90f067b0a77c09f1d.jpg) + +![](images/9c9bcf0cba8d8d73deca6c4f5896a8676aa1cd9a09448976fcd2359df7942aca.jpg) + +![](images/8593e1736a59bb80e43aeafc02461f15ca02d66203c126d9a82b05d47ce8ec33.jpg) + +![](images/5e05187822644a7c5fb73bb07612317815273fd8292dd3ff62b66527407a6d0e.jpg) +Figure 10: Evaluating the test accuracy of model $f$ throughout SGD training on the full CIFAR-10 dataset. In contrast to $\hat{f}^{lin}$ , the test accuracy of $f$ does not significantly improve with growing width. 
+
+![](images/a41589f87605eae0b7bd830b831b4392064a2541aed6db0eab4fc98813023272.jpg)
+
+![](images/ab8679421d853242ab8278e25fe4018c147a5ca2f06d855864eeff1fc15078f6.jpg)
+
+![](images/c9f77a4fd1f8af3aa1668a4958a791245f64e4e7cf2e42041f3337f34565789d.jpg)
+
+![](images/fa4f829951844ab20142fa60348ed1242f20872c13d534fc890af0e26025014c.jpg)
+Figure 11: Evaluating the difference in test accuracy of kernel regression using pNTK as in (5) vs the current model $f$ throughout SGD training on the full CIFAR-10 dataset: how much does a linearized predictor with the current representation improve prediction accuracy over the current model obtained by SGD?
+
+![](images/b2ff43a0f043534e4e9fce64f28e0c86f81252c91b442d94c9ec764711703e8c.jpg)
+
+![](images/3e9c2d28a1eb4656c07306f09a56f93afe197d9dd9fcb09bef4b32ac2bb7da38.jpg)
+
+![](images/ffa1dcb4e911d10e9c22d75624dde0a1845fc961f5d27567fde2c20a67a0f6c4.jpg)
+
+# 5. Application: Full Regression on CIFAR-10
+
+Motivated by Theorem 4.5, the experimental findings in Figure 8, and the reduction in time and memory complexity of the pNTK over the eNTK, we finally evaluate the pNTKs of the four architectures used in our experiments, at different widths, on the full CIFAR-10 dataset, both at initialization and throughout training under SGD. As mentioned previously, running kernel regression with the eNTK on all of CIFAR-10 would require evaluating $25 \times 10^{10}$ Jacobian-vector products and roughly 1.8 terabytes of memory; using the pNTK, this can be done with a far more reasonable $25 \times 10^{8}$ Jacobian-vector products and $\approx 18$ gigabytes of memory. This is still a heavy load compared to, say, direct SGD training, but is within the reach of standard compute nodes.
+
+Figure 9 shows the test accuracy of $\hat{f}^{lin}$ on the full train and test sets of CIFAR-10.
In the infinite-width limit, the test accuracy of $\hat{f}^{lin}$ at initialization (and later, because the kernel stays constant in this regime) should match the final test accuracy of $f$ : that is, the epoch 0 points in Figure 9 would roughly agree with the epoch 200 points in Figure 10. This comparison is plotted directly in Appendix F. Furthermore, the test accuracies of kernel regression predictions using the pNTK are lower than those achieved by the NTKs of infinite-width counterparts for fully-connected and fully-convolutional networks. This is consistent with results on the eNTK by Arora et al. (2019a) and Lee et al. (2020), although Arora et al. (2019a) studied only a "CIFAR-2" version.
+
+It is worth noting from Figures 9 and 11 that, in contrast to the findings of Fort et al. (2020), we observe that the corresponding pNTK of the NN continues to change even after epoch 50 of SGD training. Although for fully-connected networks and some versions of ResNet18 this change is not significant, in fully-convolutional networks and WideResNets the pNTK continues to exhibit changes until epoch 150, where the training error has vanished. We remark that Fort et al. (2020) analyzed eNTKs based on only 500 random samples from CIFAR-10, while the pNTK approximation has enabled us to run our analysis on the 100-times larger full dataset.
+
+![](images/6fc5cf1ff99565a7cad3239dfd74d9a15447c04fb71c6ac196258daf02f57e98.jpg)
+Figure 12: Comparison of pNTK with eNTK on a look-ahead active learning task. pNTK is much faster than eNTK without losing performance.
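The memory comparison above is simple bookkeeping: the dense eNTK Gram matrix on $N$ points with $O$ outputs has $(NO)^2$ entries, versus $N^2$ for the pNTK, an $O^2$-fold saving. A quick sketch (our own illustration; assuming float64 entries and binary units, which reproduces the quoted figures):

```python
def kernel_storage_gib(N, O, itemsize=8):
    """Dense kernel storage in GiB: eNTK is (N*O) x (N*O), pNTK is N x N."""
    entk = (N * O) ** 2 * itemsize
    pntk = N ** 2 * itemsize
    return entk / 2**30, pntk / 2**30

entk_gib, pntk_gib = kernel_storage_gib(N=50_000, O=10)
print(f"eNTK: {entk_gib / 1024:.2f} TiB  pNTK: {pntk_gib:.1f} GiB  "
      f"ratio: {entk_gib / pntk_gib:.0f}x")
# → eNTK: 1.82 TiB  pNTK: 18.6 GiB  ratio: 100x
```

The Jacobian-vector-product counts follow the same pattern: $(NO)^2 = 25 \times 10^{10}$ entries for the eNTK versus $N^2 = 25 \times 10^{8}$ for the pNTK.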
+
+Lastly, to help the community better analyze the properties of NNs and their training dynamics, and to avoid wasted computation redoing this work, we plan to share the computed pNTKs for all the mentioned architectures and widths, as well as ResNets with 34, 50, 101 and 152 layers, on the CIFAR-10 and CIFAR-100 (Xiao et al., 2017) datasets, both at initialization and using pretrained ImageNet (Deng et al., 2009) weights. We hope that our contribution will enable further analyses and breakthroughs towards a stronger theoretical understanding of the training dynamics of deep NNs.
+
+# 6. Application: Active Learning
+
+One use case for the eNTK is in active learning, where the model requests annotations for specific data points in an "active" manner. Recently, Mohamadi et al. (2022) used kernel regression with the pNTK to approximate a re-trained model for the computation of "look-ahead" data acquisition functions in active learning. That is, a model requests annotations for the data points where training on them is likely to make the model's output on test data change the most. In Figure 12, we compare the performance of their scheme using the pNTK (as in their work) to using the full eNTK, in terms of both active learning performance on the MNIST dataset (left axis) and the wall-clock time needed to compute the acquisition function (right axis). Similarly to that work, we measure accuracy on the test set for a model which begins with 100 randomly chosen labelled points, then acquires 20 additional labelled points in each cycle. The acquisition performance with the pNTK matches that with the eNTK, but computation is much faster, taking about $15\%$ as long in total on this problem with $O = 10$ .
+
+# 7.
Discussion
+
+Our pNTK approach to approximating the eNTK has provable bounds, good empirical performance, and multiple-orders-of-magnitude improvements in runtime and memory requirements over the direct eNTK. We evaluate our claims and the quality of the approximation under diverse settings, giving new insights into the behaviour of the eNTK with trained representations. We help justify the correctness of recent approximation schemes, and hope that our rigorous results and thorough experimentation will help researchers develop a deeper understanding of the training dynamics of finite networks, and develop new practical applications of NTK theory.
+
+One major remaining question is to theoretically analyze what happens to the pNTK or eNTK during SGD training of the network. In particular, the fast convergence of $\hat{f}^{lin}$ and $f^{lin}$ when training the network, as seen in Figures 7 and 8, runs counter to our expectations based on the approximation worsening in Frobenius norm (Figure 3), maximum eigenvalue (Figure 4), and condition number (Figure 6). This seems likely to be important to practical use of the pNTK.
+
+Perhaps relatedly, it is also unclear why the pNTK consistently yields higher prediction accuracies in kernel regression than the eNTK does, given that our motivation for the pNTK is entirely in terms of approximating the eNTK (Figure 8). Intuitively, this may be related to a regularization-type effect: the pNTK corresponds to a particularly limited choice of a "separable" operator-valued kernel (see, e.g., Álvarez et al., 2012). Separable kernels are a common choice in that literature for both computational and regularization reasons; by enforcing this particularly simple form, we remove many degrees of freedom relating to the interaction between "tasks" (different classes) that may be unnecessary or hard to estimate accurately with the eNTK.
This might, in some sense, correspond to a one-vs-one rather than one-vs-rest framework in the intuitive sense discussed in Section 3. Understanding this question in more detail might require a more detailed understanding of the structure of the eNTK at finite width, and/or a much more detailed understanding of the interaction between classes in the dataset with learning in the NTK regime. Finally, even the pNTK is still rather expensive compared to running SGD on neural networks. It might make for a better starting point than the full eNTK for other speedup methods, however, like kernel approximation or advanced linear algebra solvers (e.g. Rudi et al., 2017). + +# References + +Adlam, B., Lee, J., Xiao, L., Pennington, J., and Snoek, J. Exploring the uncertainty properties of neural networks' implicit priors in the infinite-width limit. arXiv preprint arXiv:2010.07355, 2020. +Arora, S., Du, S. S., Hu, W., Li, Z., Salakhutdinov, R., and Wang, R. On exact computation with an infinitely wide neural net. In NeurIPS, 2019a. +Arora, S., Du, S. S., Li, Z., Salakhutdinov, R., Wang, R., and Yu, D. Harnessing the power of infinitely wide deep nets on small-data tasks. arXiv preprint arXiv:1910.01663, 2019b. +Arpit, D. and Bengio, Y. The benefits of overparameterization at initialization in deep relu networks, 2019. +Bachmann, G., Hofmann, T., and Lucchi, A. Generalization through the lens of leave-one-out error. In ICLR, 2022. +Chen, X., Hsieh, C.-J., and Gong, B. When vision transformers outperform resnets without pre-training or strong data augmentations. arXiv preprint arXiv:2106.01548, 2021. +Craven, P. and Wahba, G. Smoothing noisy data with spline functions. Numerische mathematik, 31(4):377-403, 1978. +Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., and Fei-Fei, L. Imagenet: A large-scale hierarchical image database. In 2009 IEEE conference on computer vision and pattern recognition, pp. 248-255. IEEE, 2009. +Epperly, E. 
Note to Self: Hanson-Wright inequality, 2022. URL https://www.ethanepperly.com/index.php/2022/10/04/note-to-self-hanson-wright-inequality/.
+Fort, S., Dziugaite, G. K., Paul, M., Kharaghani, S., Roy, D. M., and Ganguli, S. Deep learning versus kernel learning: an empirical study of loss landscape geometry and the time evolution of the neural tangent kernel. Advances in Neural Information Processing Systems, 33: 5850-5861, 2020.
+Franceschi, J.-Y., de Bezenac, E., Ayed, I., Chen, M., Lamprier, S., and Gallinari, P. A neural tangent kernel perspective of GANs. arXiv preprint arXiv:2106.05566, 2021.
+Hazan, T. and Jaakkola, T. Steps toward deep kernel methods from infinite neural networks. arXiv preprint arXiv:1508.05133, 2015.
+He, B., Lakshminarayanan, B., and Teh, Y. W. Bayesian deep ensembles via the neural tangent kernel. Advances in Neural Information Processing Systems, 33:1010-1022, 2020.
+
+He, K., Zhang, X., Ren, S., and Sun, J. Delving deep into rectifiers: Surpassing human-level performance on imagenet classification, 2015. URL https://arxiv.org/abs/1502.01852.
+He, K., Zhang, X., Ren, S., and Sun, J. Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 770-778, 2016.
+Holzmüller, D., Zaverkin, V., Kästner, J., and Steinwart, I. A framework and benchmark for deep batch active learning for regression. arXiv preprint arXiv:2203.09410, 2022.
+Jacot, A., Gabriel, F., and Hongler, C. Neural tangent kernel: Convergence and generalization in neural networks. In NeurIPS, 2018.
+Krizhevsky, A. Learning multiple layers of features from tiny images. 2009.
+Lee, J., Bahri, Y., Novak, R., Schoenholz, S. S., Pennington, J., and Sohl-Dickstein, J. Deep neural networks as gaussian processes. arXiv preprint arXiv:1711.00165, 2017.
+Lee, J., Xiao, L., Schoenholz, S., Bahri, Y., Novak, R., Sohl-Dickstein, J., and Pennington, J.
Wide neural networks of any depth evolve as linear models under gradient descent. In NeurIPS, 2019. +Lee, J., Schoenholz, S., Pennington, J., Adlam, B., Xiao, L., Novak, R., and Sohl-Dickstein, J. Finite versus infinite neural networks: an empirical study. Advances in Neural Information Processing Systems, 33:15156-15172, 2020. +Matthews, A. G. d. G., Rowland, M., Hron, J., Turner, R. E., and Ghahramani, Z. Gaussian process behaviour in wide deep neural networks. arXiv preprint arXiv:1804.11271, 2018. +Mohamadi, M. A., Bae, W., and Sutherland, D. J. Making look-ahead active learning strategies feasible with neural tangent kernels. In NeurIPS, 2022. +Neal, R. M. *Priors for infinite networks*, pp. 29-53. Springer, 1996. +Nguyen, T., Chen, Z., and Lee, J. Dataset meta-learning from kernel ridge-regression. arXiv preprint arXiv:2011.00050, 2020. +Nguyen, T., Novak, R., Xiao, L., and Lee, J. Dataset distillation with infinitely wide convolutional networks. Advances in Neural Information Processing Systems, 34, 2021. + +Novak, R., Xiao, L., Lee, J., Bahri, Y., Yang, G., Hron, J., Abolafia, D. A., Pennington, J., and Sohl-Dickstein, J. Bayesian deep convolutional networks with many channels are gaussian processes. arXiv preprint arXiv:1810.05148, 2018. +Novak, R., Sohl-Dickstein, J., and Schoenholz, S. S. Fast finite width neural tangent kernel. In ICML, 2022. +Park, D., Sohl-Dickstein, J., Le, Q., and Smith, S. The effect of network width on stochastic gradient descent and generalization: an empirical study. In Chaudhuri, K. and Salakhutdinov, R. (eds.), Proceedings of the 36th International Conference on Machine Learning, volume 97 of Proceedings of Machine Learning Research, pp. 5042-5051. PMLR, 09-15 Jun 2019. URL https://proceedings.mlr.press/v97/ park19b.html. +Park, D. S., Lee, J., Peng, D., Cao, Y., and Sohl-Dickstein, J. Towards NNGP-guided neural architecture search. arXiv preprint arXiv:2011.06006, 2020. +Rudi, A., Carratino, L., and Rosasco, L. 
FALKON: An optimal large scale kernel method. In NeurIPS, 2017. +Tancik, M., Srinivasan, P., Mildenhall, B., Fridovich-Keil, S., Raghavan, N., Singhal, U., Ramamoorthi, R., Barron, J., and Ng, R. Fourier features let networks learn high frequency functions in low dimensional domains. Advances in Neural Information Processing Systems, 33: 7537-7547, 2020. +Vershynin, R. High-dimensional probability: An introduction with applications in data science, volume 47. Cambridge university press, 2018. +Wainwright, M. J. High-Dimensional Statistics: A Non-Asymptotic Viewpoint. Cambridge Series in Statistical and Probabilistic Mathematics. Cambridge University Press, 2019. doi: 10.1017/9781108627771. +Wang, H., Huang, W., Margenot, A., Tong, H., and He, J. Deep active learning by leveraging training dynamics. arXiv preprint arXiv:2110.08611, 2021. +Wei, A., Hu, W., and Steinhardt, J. More than a toy: Random matrix models predict how real-world neural representations generalize. In ICML, 2022. +Williams, C. Computing with infinite networks. Advances in neural information processing systems, 9, 1996. +Xiao, H., Rasul, K., and Vollgraf, R. Fashion-mnist: a novel image dataset for benchmarking machine learning algorithms. arXiv preprint arXiv:1708.07747, 2017. + +Xiao, L., Bahri, Y., Sohl-Dickstein, J., Schoenholz, S., and Pennington, J. Dynamical isometry and a mean field theory of cnns: How to train 10,000-layer vanilla convolutional neural networks. In International Conference on Machine Learning, pp. 5393-5402. PMLR, 2018. +Xiao, L., Pennington, J., and Schoenholz, S. Disentangling trainability and generalization in deep neural networks. In International Conference on Machine Learning, pp. 10462-10472. PMLR, 2020. +Yang, G. Wide feedforward or recurrent neural networks of any architecture are gaussian processes. Advances in Neural Information Processing Systems, 32, 2019. +Yang, G. Tensor programs ii: Neural tangent kernel for any architecture, 2020. +Yang, G. and Littwin, E. 
Tensor programs iib: Architectural universality of neural tangent kernel training dynamics, 2021.
+Zagoruyko, S. and Komodakis, N. Wide residual networks. BMVC, 2016.
+Zhou, Y., Wang, Z., Xian, J., Chen, C., and Xu, J. Meta-learning with neural tangent kernels. arXiv preprint arXiv:2102.03909, 2021.
+Álvarez, M. A., Rosasco, L., and Lawrence, N. D. Kernels for vector-valued functions: A review. Foundations and Trends® in Machine Learning, 4(3):195-266, 2012. doi: 10.1561/2200000036.
+
+# A. Details of Experimental Setup
+
+In this section, we present the details of the experimental setup used for the plots in the main body of the paper. As mentioned, the exact widths for FCNs are reported directly. For WideResNet-16-k we use two block layers, and the initial convolution in the network has a width of 16 × WF, where WF is the reported widening factor. For instance, $\mathrm{WF} = 16$ means that the first block layer has a width of 256 and the second block layer has a width of 512. For ResNet18, we used the same approach, multiplying WF by 16. Thus, when $\mathrm{WF} = 4$ , the constructed network has the same architecture as the classical ResNet18; a WF of 16 means a ResNet18 with each layer 4 times wider than the original.
+
+When training the neural networks using SGD, a constant batch size of 128 was used across all networks and all dataset sizes used for training. The learning rate for all networks was also fixed to 0.1. However, not all networks were trainable with this fixed learning rate, as the gradients would sometimes blow up and give NaN training loss, typically for the largest width of each architecture. In those cases, we decreased the learning rate to 0.01 to train the networks.
+
+Note that, to be consistent with the literature on NTKs, techniques like data augmentation have been turned off, but a weight decay of 0.0001 along with a momentum of 0.9 for SGD is used.
Data augmentation plays an important role in the test accuracies attained by fully trained networks.
+
+# B. Relative Convergence of Kernel Matrices
+
+This section proves Theorem 4.1, in two parts. First, we give a generic analysis in Appendix B.1 bounding the difference between the pNTK and eNTK for any network whose last layer is linear and random, in terms of the norm of the eNTK of the previous parts of the network. Appendix B.3 then bounds the growth of the eNTK for "fan-in" ReLU networks; their combination gives a $\mathcal{O}_p(1 / \sqrt{n})$ bound on the Frobenius difference between the eNTK and pNTK for width- $n$ fan-in ReLU nets. Later subsections apply these results to various applications.
+
+# B.1. Linear Read-Out Layers
+
+Towards this, we first define some notation and show a simple recursive formula for computing the tangent kernel, which we take advantage of to prove the theorems. Consider a NN $f: \mathbb{R}^D \to \mathbb{R}^O$ . We assume the final read-out layer of $f$ is a dense layer of input width $n$ . Assuming $f$ has $L$ layers, we define $\theta_l$ to be the corresponding parameters of layer $l \in \{1, 2, \ldots, L\}$ . Furthermore, let us define $g: \mathbb{R}^D \to \mathbb{R}^n$ as the output of the penultimate layer of $f$ , such that $f(x) = \theta_L g(x)$ for some $\theta_L \in \mathbb{R}^{O \times n}$ .
+
+As noted by Lee et al. (2019) and Yang (2020), the NTK can be reformulated as the layer-wise sum of gradients (when the parameters of each layer $\theta_{l}$ are assumed to be vectorized) of the output with respect to $\theta_{l}$ . Accordingly, we denote the eNTK of a NN $f$ as
+
+$$
+\Theta_ {f} \left(x _ {1}, x _ {2}\right) = \sum_ {l = 1} ^ {L} \nabla_ {\theta_ {l}} f \left(x _ {1}\right) \nabla_ {\theta_ {l}} f \left(x _ {2}\right) ^ {\top}.
\tag {7} +$$ + +Now, noting that as the final layer of $f$ is a dense layer, we can use the chain rule to write $\nabla_{\theta_l}f(x)$ as $\frac{\partial f}{\partial g(x)}\frac{\partial g(x)}{\partial\theta_l}$ where $\frac{\partial f(x)}{\partial g(x)} = \theta_L$ . Thus, we can rewrite (7) as + +$$ +\begin{array}{l} \Theta_ {f} (x _ {1}, x _ {2}) = \sum_ {l = 1} ^ {L - 1} \theta_ {L} \nabla_ {\theta_ {l}} g (x _ {1}) \nabla_ {\theta_ {l}} g (x _ {2}) ^ {\top} \theta_ {L} ^ {\top} + \nabla_ {\theta_ {L}} f (x _ {1}) \nabla_ {\theta_ {L}} f (x _ {2}) ^ {\top} \\ = \theta_ {L} \left(\sum_ {l = 1} ^ {L} \nabla_ {\theta_ {l}} g \left(x _ {1}\right) \nabla_ {\theta_ {l}} g \left(x _ {2}\right) ^ {\top}\right) \theta_ {L} ^ {\top} + g \left(x _ {1}\right) ^ {\top} g \left(x _ {2}\right) I _ {O} \tag {8} \\ = \theta_ {L} \Theta_ {g} \left(x _ {1}, x _ {2}\right) \theta_ {L} ^ {\top} + g \left(x _ {1}\right) ^ {\top} g \left(x _ {2}\right) I _ {O}. \\ \end{array} +$$ + +Recall that we can view the pNTK of a network $f$ as simply adding a fixed, non-trainable dense layer to the network with weights $v$ , where $v$ is either a standard basis vector for the single-logit approximation, or $\frac{1}{\sqrt{O}} \mathbf{1}_O$ for the sum-of-logits form. + +Then (8) shows us that the pNTK is simply a weighted sum of the eNTK's elements, + +$$ +\hat {\Theta} _ {f} \left(x _ {1}, x _ {2}\right) = v ^ {\mathsf {T}} \Theta_ {f} \left(x _ {1}, x _ {2}\right) v; \tag {9} +$$ + +note that the second term does not appear since $v$ is not a trainable parameter. + +The key result of this subsection, which holds fairly generally, is based on showing that, when $\theta_L$ is at initialization, off-diagonal entries of $\theta_L\Theta_g\theta_L^\top$ are near the pNTK's corresponding value of zero, while diagonal entries are close to $\Theta_f$ . Specifically, we have the following result for the single-logit approximation, i.e. when $v$ in (9) is one-hot. + +Lemma B.1. 
Let $f$ be of the form $f(x) = W g(x)$, where $W \in \mathbb{R}^{O \times n}$ and $g$ is an arbitrary deep network. Suppose that $W$ has independent, zero-mean, sub-gaussian entries with variance parameter $\nu$. Let $\delta > 0$ be smaller than a constant depending only on $O$. Let $x_1, x_2$ be arbitrary input points. Then, with high probability over the random value of $W$,

$$
\left\| \Theta_f(x_1, x_2) - \hat{\Theta}_f(x_1, x_2) I_O \right\|_F \leq \mathcal{O}_p\left( \nu \| \Theta_g(x_1, x_2) \|_F \right). \tag{10}
$$

For standard He et al. (2015) initialization, $\nu = \frac{1}{n}$. We will later show that for standard ReLU networks of width $n$ at initialization, $\| \Theta_g(x_1, x_2) \|_F = \mathcal{O}_p(n \sqrt{n})$ while $\| \Theta_f(x_1, x_2) \|_F = \Theta(n)$, implying that the relative difference between the pNTK and the eNTK is $\mathcal{O}_p(1/\sqrt{n})$.

For Gaussian weights $W$, the same guarantees follow for the sum-of-logits approximation ($v = \frac{1}{\sqrt{O}} \mathbf{1}_O$) by noting that for fixed $g$ and random $W$, the joint distribution of $f(x)$ and $\hat{f}(x) = v^\top f(x)$ is the same for any unit-norm $v$.

Our proof is based on the following key tool.

Proposition B.2 (Hanson-Wright Inequality; Vershynin 2018, Theorem 6.2.1). Let $x$ be a random vector with independent centered sub-gaussian entries with variance proxy $\nu$, and let $A$ be a square matrix. Let $c > 0$ be a universal constant. Then

$$
\Pr\left( \left| x^\top A x - \mathbb{E}[x^\top A x] \right| \geq t \right) \leq 2 \exp\left( -c \min\left( \frac{t^2}{\nu^2 \|A\|_F^2}, \frac{t}{\nu \|A\|} \right) \right).
$$

(A version with explicit constants, but in a slightly less convenient form, is given by Epperly 2022.)

The following form converts to an error probability of $\delta$, and gives a slightly simpler but weaker bound using $\|A\| \leq \|A\|_F$.
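As a sanity check on Lemma B.1, the decomposition in (8) and (9) can be simulated directly. The following sketch is our own (a generic random Wishart matrix stands in for $\Theta_g$); it verifies that $\|\Theta_f - \hat{\Theta}_f I_O\|_F$ stays within a modest constant factor of $\nu \|\Theta_g\|_F$:

```python
import numpy as np

rng = np.random.default_rng(0)

def pntk_gap_ratio(n, O=10):
    """Ratio ||Theta_f - pNTK * I_O||_F / (nu * ||Theta_g||_F) for one draw.

    A random Wishart matrix stands in for Theta_g; Lemma B.1 says this
    ratio is O_p(1), i.e. bounded independently of the width n.
    """
    B = rng.standard_normal((n, n))
    theta_g = B @ B.T                              # PSD stand-in for Theta_g
    W = rng.standard_normal((O, n)) / np.sqrt(n)   # read-out at init, nu = 1/n
    # First-logit pNTK: v = e_1, so v^T (W Theta_g W^T) v = W_1^T Theta_g W_1;
    # the g(x1)^T g(x2) terms cancel, leaving exactly the matrix D of the proof.
    D = W @ theta_g @ W.T - (W[0] @ theta_g @ W[0]) * np.eye(O)
    nu = 1.0 / n
    return np.linalg.norm(D) / (nu * np.linalg.norm(theta_g))
```

Across widths, the returned ratio should fluctuate around a constant that depends only on $O$, while the unnormalized gap $\|D\|_F$ grows.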
Corollary B.3. In the setup of Proposition B.2, it holds with probability at least $1 - \delta$ that

$$
| x^{\mathsf{T}} A x - \mathbb{E}[x^{\mathsf{T}} A x] | \leq \nu \|A\|_F \max\left( \frac{1}{c} \log \frac{2}{\delta}, \sqrt{\frac{1}{c} \log \frac{2}{\delta}} \right).
$$

Corollary B.4. Let $x$ and $y$ be independent random vectors, whose entries are independent, centered, and sub-gaussian with variance proxy $\nu$. Let $A$ be any square matrix, and $c > 0$ the universal constant of Proposition B.2. Then

$$
\Pr\left( | x^\top A y | \geq t \right) \leq 2 \exp\left( -2c \min\left( \frac{t^2}{\nu^2 \|A\|_F^2}, \frac{t}{\nu \|A\|} \right) \right).
$$

This implies that with probability at least $1 - \delta$, we have that

$$
| x^{\mathsf{T}} A y | \leq \nu \|A\|_F \max\left( \frac{1}{2c} \log \frac{2}{\delta}, \sqrt{\frac{1}{2c} \log \frac{2}{\delta}} \right).
$$

Proof. Notice that

$$
\begin{bmatrix} x \\ y \end{bmatrix}^{\mathsf{T}} \underbrace{\begin{bmatrix} 0 & \frac{1}{2} A \\ \frac{1}{2} A^{\mathsf{T}} & 0 \end{bmatrix}}_{\tilde{A}} \begin{bmatrix} x \\ y \end{bmatrix} = \frac{1}{2} x^{\mathsf{T}} A y + \frac{1}{2} y^{\mathsf{T}} A^{\mathsf{T}} x = x^{\mathsf{T}} A y. \tag{11}
$$

The vector $\begin{bmatrix} x \\ y \end{bmatrix}$ satisfies the conditions of Proposition B.2. We have $\|\tilde{A}\|_F = \sqrt{\frac{1}{4} + \frac{1}{4}} \|A\|_F = \frac{1}{\sqrt{2}} \|A\|_F$, while

$$
\|\tilde{A}\| = \sup_{\|x\|^2 + \|y\|^2 = 1} \sqrt{ \left\| \tfrac{1}{2} A y \right\|^2 + \left\| \tfrac{1}{2} A^{\intercal} x \right\|^2 } = \frac{1}{2} \|A\| \sup_{\|x\|^2 + \|y\|^2 = 1} \sqrt{\|x\|^2 + \|y\|^2} = \frac{1}{2} \|A\|.
$$

Noting that $\mathbb{E}\, x^\top A y = 0$, the proof follows by applying Proposition B.2 and Corollary B.3 to (11).

We are now ready to prove Lemma B.1.

Proof of Lemma B.1. Assume without loss of generality that $\hat{\Theta}_f$ uses the first-logit approximation, i.e. $v = (1, 0, \dots, 0)$. Also assume $O > 1$, since for $O = 1$ the eNTK and pNTK are trivially identical.

Define the difference matrix between the two kernels as, recalling (8) and (9),

$$
\begin{array}{l}
D(x_1, x_2) = \Theta_f(x_1, x_2) - \hat{\Theta}_f(x_1, x_2) I_O \\
= \left[ W \Theta_g(x_1, x_2) W^\top + g(x_1)^\top g(x_2) I_O \right] - v^\top \left[ W \Theta_g(x_1, x_2) W^\top + g(x_1)^\top g(x_2) I_O \right] v \, I_O \\
= W \Theta_g(x_1, x_2) W^{\mathsf{T}} - W_1^{\mathsf{T}} \Theta_g(x_1, x_2) W_1 I_O.
\end{array}
$$

We will use the Hanson-Wright inequality (Proposition B.2 and Corollaries B.3 and B.4) to bound each entry of this matrix with high probability, then use a union bound to bound the overall Frobenius norm of $D(x_1, x_2)$.

We begin by bounding the off-diagonal elements of $D$. By (8) and (9), we can see that for any $i \neq j$, $D_{ij}(x_1, x_2) = W_i^\top \Theta_g(x_1, x_2) W_j$. Because the matrix $D$ is symmetric, there are $\frac{O(O-1)}{2}$ distinct off-diagonal entries to bound. Using a union bound with Corollary B.4, we obtain that for any $\delta < 2O(O-1)e^{-2c}$, it holds with probability at least $1 - \delta/2$ that

$$
\forall i \neq j, \quad | D_{ij}(x_1, x_2) | \leq \nu \|\Theta_g(x_1, x_2)\|_F \frac{1}{2c} \log \frac{2O(O-1)}{\delta}. \tag{12}
$$

$D_{11}(x_1, x_2)$ is identically zero. For the other diagonal elements, we have that

$$
\begin{array}{l}
D_{ii}(x_1, x_2) = W_i^\top \Theta_g W_i - W_1^\top \Theta_g W_1 \\
= (W_i - W_1)^\top \Theta_g (W_i + W_1) \underbrace{- W_i^\top \Theta_g W_1 + W_1^\top \Theta_g W_i}_{= 0, \text{ as } \Theta_g \text{ is symmetric}} \tag{13} \\
= \begin{bmatrix} W_i \\ W_1 \end{bmatrix}^\top \underbrace{\left( \begin{bmatrix} I_n \\ -I_n \end{bmatrix} \Theta_g \begin{bmatrix} I_n & I_n \end{bmatrix} \right)}_{\Theta_g^*} \begin{bmatrix} W_i \\ W_1 \end{bmatrix}.
\end{array}
$$

Noting that $\Theta_g^* = \begin{bmatrix} \Theta_g & \Theta_g \\ -\Theta_g & -\Theta_g \end{bmatrix}$, we have that $\|\Theta_g^*\|_F = 2 \|\Theta_g\|_F$. Thus, applying Corollary B.3 to each of the $O - 1$ entries and taking a union bound, we find that as long as $\delta < 4(O-1)e^{-c}$, it holds with probability at least $1 - \delta/2$ that

$$
\forall i, \quad | D_{ii}(x_1, x_2) | \leq 2 \nu \|\Theta_g(x_1, x_2)\|_F \frac{1}{c} \log \frac{4(O-1)}{\delta}. \tag{14}
$$

Combining (12) and (14), we obtain that as long as $\delta < 2(O-1)e^{-c} \min(O e^{-c}, 2)$,

$$
\|D(x_1, x_2)\|_F \leq \nu \|\Theta_g(x_1, x_2)\|_F \frac{1}{c} \sqrt{ O(O-1) \left( \frac{1}{2} \log \frac{2O(O-1)}{\delta} \right)^2 + (O-1) \left( 2 \log \frac{4(O-1)}{\delta} \right)^2 }.
$$

# B.2. Background on sub-exponential variables

The following proofs rely heavily on concentration inequalities for sub-exponential random variables; we first review some background on these quantities.
A real-valued random variable $X$ with mean $\mu$ is called sub-exponential (see e.g. Wainwright, 2019) if there are non-negative parameters $(\nu, \alpha)$ such that

$$
\mathbb{E}\left[ e^{\lambda (X - \mu)} \right] \leq e^{\frac{\nu^2 \lambda^2}{2}} \quad \text{for all } |\lambda| < \frac{1}{\alpha}.
$$

We use $X \sim SE(\nu, \alpha)$ to denote that $X$ is a sub-exponential random variable with parameters $(\nu, \alpha)$; note, though, that this does not specify a particular distribution.

A standard example comes from products of standard normal variables $z_i \sim \mathcal{N}(0, 1)$: the product of absolute values of two independent factors, $X_1 = |z_1| |z_2| \sim SE(\nu_p, \alpha_p)$ with mean $2/\pi$, and the square of a single factor, $X_2 = z^2 \sim SE(2, 4)$ with mean 1. We now present a few lemmas regarding sub-exponential random variables that will come in handy in the later subsections of the appendix.

Lemma B.5. If a random variable $X$ is sub-exponential with parameters $(\nu, \alpha)$, then the random variable $sX$ where $s \in \mathbb{R}^+$ is also sub-exponential, with parameters $(s\nu, s\alpha)$.

Proof. Consider $X \sim SE(\nu, \alpha)$ and $X' = sX$ with mean $\mu' = s \mu$. By the definition of a sub-exponential random variable,

$$
\begin{array}{l}
\mathbb{E}\left[ \exp(\lambda (X - \mu)) \right] \leq \exp\left( \frac{\nu^2 \lambda^2}{2} \right) \quad \text{for all } |\lambda| < \frac{1}{\alpha} \\
\Longrightarrow \mathbb{E}\left[ \exp\left( \frac{\lambda}{s} (sX - s\mu) \right) \right] \leq \exp\left( \frac{\nu^2 s^2 \frac{\lambda^2}{s^2}}{2} \right) \quad \text{for all } \left| \frac{\lambda}{s} \right| < \frac{1}{s\alpha} \tag{15} \\
\xlongequal{\lambda' = \frac{\lambda}{s}} \mathbb{E}\left[ \exp(\lambda' (X' - \mu')) \right] \leq \exp\left( \frac{\nu^2 s^2 \lambda'^2}{2} \right) \quad \text{for all } |\lambda'| < \frac{1}{s\alpha}.
\end{array}
$$

Defining $\nu' = s\nu$ and $\alpha' = s\alpha$, we recover that $X' \sim SE(s\nu, s\alpha)$.

Proposition B.6. If the random variables $X_i$ for $i \in \{1, \dots, N\}$, $N \in \mathbb{N}^+$, are independent and sub-exponential with parameters $(\nu_i, \alpha_i)$, then $\sum_{i=1}^N X_i \sim SE\left( \sqrt{\sum_{i=1}^N \nu_i^2}, \max_i \alpha_i \right)$, and $\frac{1}{N} \sum_{i=1}^N X_i \sim SE\left( \frac{1}{\sqrt{N}} \sqrt{\frac{1}{N} \sum_{i=1}^N \nu_i^2}, \frac{1}{N} \max_i \alpha_i \right)$.

Proof. This is a simplification of the discussion prior to Equation 2.18 of Wainwright (2019).

Proposition B.7. For a random variable $X \sim SE(\nu, \alpha)$, the following concentration inequality holds:

$$
\Pr(|X - \mu| \geq t) \leq 2 \exp\left( -\min\left( \frac{t^2}{2\nu^2}, \frac{t}{2\alpha} \right) \right).
$$

Proof. Follows directly from multiplying the result derived in Equation 2.18 of Wainwright (2019) by a scalar.

Corollary B.8.
For a random variable $X \sim SE(\nu, \alpha)$, the following inequality holds with probability at least $1 - \delta$:

$$
|X - \mu| < \max\left( \nu \sqrt{2 \log \frac{2}{\delta}}, 2\alpha \log \frac{2}{\delta} \right).
$$

# B.3. Bound on Growth of ReLU Networks' NTKs

We now specialize to fully-connected ReLU networks, and show the remainder of the required results in that setting.

Denote a neural network with $L$ dense hidden layers of width $n$ as:

$$
f^0(x) = x
$$

$$
f^{l+1}(x) = \phi\left( W^{(l+1)} f^l(x) \right) \tag{16}
$$

$$
f(x) = f^L(x) = W^{(L)} f^{L-1}(x)
$$

where $\phi$ is an (almost-everywhere) differentiable coordinate-wise activation function.

Setting A (ReLU-MLP). We make the following assumptions about the network $f$:

- We assume $W^{(l)} \in \mathbb{R}^{n_l \times n_{l-1}}$ for $l \in 1, \ldots, L$ is initialized according to the He et al. (2015) initialization ("fan-in"), meaning that each scalar parameter is distributed according to $\mathcal{N}(0, 1/n_{l-1})$.
- We assume the widths of all hidden layers are identical (and equal to $n$). The proof extends naturally to the case of non-equal widths as long as $n_{l+1}/n_l \to c_l \in (0, \infty)$ for each consecutive pair of layers.
- We assume $\phi$ is the ReLU activation. This can be generalized to 1-Lipschitz, ReLU-like functions such as GeLU, PReLU, and so on, as discussed in Appendix E.
- We assume the training data $\mathcal{X}$ is finite and contained in a compact set, and that there are no duplicate datapoints.

A Note On Parameterization. Although we assume a Gaussian distribution for each scalar parameter, the proofs in this section apply to any other distribution used for scalar parameters as long as:

- The variance of the parameters is set according to He et al. (2015), and the mean is zero.
- Each scalar parameter is initialized independently of all other ones.
- The distribution used is sub-Gaussian; specifically, $w_{ij}^{l+1} \in SG(1/n_l)$.

This applies to all bounded initialization methods, like truncated normal or uniform on an interval.

In general, the product of two sub-Gaussian random variables is sub-exponential. For the product of two independent weights, $w_{ij}^{l+1} w_{ab}^{l+1}$ with $i \neq a$ and/or $j \neq b$, we denote the parameters as $\frac{1}{n_l} SE(\nu_p, \alpha_p)$. For $(w_{ij}^{l+1})^2$, we use the parameters $\frac{1}{n_l} SE(\nu_s, \alpha_s)$, whose mean is $\mu_s \neq 0$.

Note that we can recursively define the eNTK of $f^{l+1}$ using the eNTK of $f^l$ as

$$
\begin{array}{l}
\Theta^{(l+1)}(x_1, x_2) = \sum_{i=1}^{l} \frac{\partial f^{l+1}(x_1)}{\partial W^{(i)}} \frac{\partial f^{l+1}(x_2)}{\partial W^{(i)}}^\top + \overbrace{ \frac{\partial f^{l+1}(x_1)}{\partial W^{(l+1)}} \frac{\partial f^{l+1}(x_2)}{\partial W^{(l+1)}}^\top }^{K_D^{l+1}(x_1, x_2)} \\
= \sum_{i=1}^{l} \frac{\partial \phi(W^{(l+1)} f^l(x_1))}{\partial W^{(i)}} \frac{\partial \phi(W^{(l+1)} f^l(x_2))}{\partial W^{(i)}}^\top + K_D^{l+1}(x_1, x_2) \\
= \sum_{i=1}^{l} \frac{\partial \phi(W^{(l+1)} f^l(x_1))}{\partial f^l(x_1)} \frac{\partial f^l(x_1)}{\partial W^{(i)}} \frac{\partial f^l(x_2)}{\partial W^{(i)}}^\top \frac{\partial \phi(W^{(l+1)} f^l(x_2))}{\partial f^l(x_2)}^\top + K_D^{l+1}(x_1, x_2) \tag{17} \\
= \frac{\partial \phi(W^{(l+1)} f^l(x_1))}{\partial f^l(x_1)} \left[ \sum_{i=1}^{l} \frac{\partial f^l(x_1)}{\partial W^{(i)}} \frac{\partial f^l(x_2)}{\partial W^{(i)}}^\top \right] \frac{\partial \phi(W^{(l+1)} f^l(x_2))}{\partial f^l(x_2)}^\top + K_D^{l+1}(x_1, x_2) \\
= \frac{\partial \phi(W^{(l+1)} f^l(x_1))}{\partial f^l(x_1)} \Theta^{(l)}(x_1, x_2) \frac{\partial \phi(W^{(l+1)} f^l(x_2))}{\partial f^l(x_2)}^\top + K_D^{l+1}(x_1, x_2)
\end{array}
$$

where $K_D^{l+1}(x_1, x_2) = f^l(x_1)^\top f^l(x_2) I_n$ is a diagonal matrix, and

$$
\frac{\partial \phi(W^{(l+1)} f^l(x))}{\partial f^l(x)} = W^{(l+1)} \odot \left[ \dot{\phi}\left( W^{(l+1)} f^l(x) \right) \right]_{1 \times n} = \left[ W_{ij}^{(l+1)} \dot{\phi}\left( W_{i,:}^{(l+1)} \cdot f^l(x) \right) \right]_{ij} \tag{18}
$$

with $\odot$ the elementwise (Hadamard) product using "broadcasting," and $\dot{\phi}$ the derivative of $\phi$. We can think of the last layer as following the same equations with $\phi$ the identity function, so that $\dot{\phi}(x) = 1$.

Before moving on, it is useful to first show a simple inequality on the elements of a tangent kernel based on the Lipschitz-ness of the activation function; this will help us further in deriving the aforementioned bounds. Define $V^{(l)}(x) = W^{(l)} \odot \left[ \dot{\phi}\left( W^{(l)} f^{l-1}(x) \right) \right]_{1 \times n}$.
We can write each entry of $\Theta^{(l+1)}(x_1, x_2)$ as

$$
\Theta^{(l+1)}(x_1, x_2)_{ij} = \sum_{a=1}^{n} \sum_{b=1}^{n} V^{(l+1)}(x_1)_{ia} V^{(l+1)}(x_2)_{jb} \Theta^{(l)}(x_1, x_2)_{ab} + f^l(x_1)^\top f^l(x_2) \, \mathcal{I}(i = j)
$$

$$
\left| \Theta^{(l+1)}(x_1, x_2)_{ij} \right| \leq \left| \sum_{a=1}^{n} \sum_{b=1}^{n} V^{(l+1)}(x_1)_{ia} V^{(l+1)}(x_2)_{jb} \Theta^{(l)}(x_1, x_2)_{ab} \right| + \left| f^l(x_1)^\top f^l(x_2) \right| \mathcal{I}(i = j). \tag{19}
$$

Lemma B.9 (Diagonality of the first layer's tangent kernel). For a NN under Setting A, the eNTK of the first layer $\Theta^{(1)}(x_1, x_2)$ is diagonal. Moreover, there is a corresponding constant $C^{(1)} > 0$ such that for all diagonal elements $\Theta^{(1)}(x_1, x_2)_{ii}$, we have that

$$
| \Theta^{(1)}(x_1, x_2)_{ii} | \leq C^{(1)}.
$$

Proof. Consider the one-layer NN $f^1(x) = \phi(W^{(1)} x)$. For this case, we have:

$$
\Theta^{(1)}(x_1, x_2)_{ij} = \begin{cases} \sum_{a=1}^{D} x_{1a} \dot{\phi}(W_i x_1) \, x_{2a} \dot{\phi}(W_i x_2) & \text{if } i = j \\ 0 & \text{if } i \neq j \end{cases} \tag{20}
$$

and thus, since the activation function $\phi$ is 1-Lipschitz, we can conclude that for all $i, j$

$$
\left| \Theta^{(1)}(x_1, x_2)_{ij} \right| \leq \begin{cases} \sum_{a=1}^{D} |x_{1a}| |x_{2a}| & \text{if } i = j \\ 0 & \text{if } i \neq j. \end{cases} \tag{21}
$$

Thus, the tangent kernel of the first layer is a diagonal matrix whose entries are independent of the width of the first layer ($n$), and can be bounded by a positive constant, given by $C^{(1)} = \max_{(x_1, x_2) \in \mathcal{X} \times \mathcal{X}} \sum_{a=1}^{D} |x_{1a} x_{2a}|$.

Lemma B.10. Consider a NN $f$ under Setting A. For every small constant $\delta > 0$, $l \in [L-1]$, and arbitrary datapoints $x_1$ and $x_2$, it holds that

$$
\left\| \Theta^{(l)}(x_1, x_2) \right\|_F \leq \mathcal{O}(n \sqrt{n}) \tag{22}
$$

with probability at least $1 - \delta$ over random initialization for any $n > n_0$, where this lower bound $n_0$ is $\mathcal{O}(\operatorname{polylog}(L/\delta))$.

Proof. Using the recursive definition of the eNTK in (19), we can see that for all $l \geq 2$,

$$
\begin{array}{l}
\left\| \Theta^{(l)}(x_1, x_2) \right\|_F \leq \left\| V^{(l)\top} \Theta^{(l-1)}(x_1, x_2) V^{(l)} \right\|_F + \left\| f^{(l-1)}(x_1)^\top f^{(l-1)}(x_2) I_n \right\|_F \\
\leq \mathcal{O}\left( \left\| \Theta^{(l-1)}(x_1, x_2) \right\|_F \log \frac{2}{\delta} \right) + \Theta(n) \sqrt{n} \log \frac{2l}{\delta} \tag{23}
\end{array}
$$

with probability at least $1 - 2\delta$. To derive the inequality, note that for all $2 \leq l \leq L - 1$, the operator norm satisfies $\|V^{(l)}\| = \mathcal{O}(1)$ with high probability (Lee et al., 2019, Appendix G.3), and that the inner product of post-activations grows linearly, as shown in Lemma B.12. Since for the first layer we have that $\|\Theta^{(1)}(x_1, x_2)\|_F = \mathcal{O}(\sqrt{n})$ (refer to Lemma B.9), we can conclude the proof as long as the minimum width $n_0$ is chosen appropriately to satisfy the linear growth of post-activations with high probability, as in Lemma B.12.
The following is a version of Theorem 4.1, which we are now ready to prove.

Theorem B.11. Consider a NN $f$ under Setting A. For every arbitrarily small $\delta > 0$ and arbitrary datapoints $x_1$ and $x_2$, there exists $n_0$ such that

$$
\frac{\left\| \Theta^{(L)}(x_1, x_2) - \hat{\Theta}^{(L)}(x_1, x_2) \otimes I_O \right\|_F}{\left\| \Theta^{(L)}(x_1, x_2) \right\|_F} = \mathcal{O}\left( \frac{1}{\sqrt{n}} \right) \tag{24}
$$

with probability at least $1 - \delta$ for $n > n_0$.

Proof. We start by showing that $\|\Theta^{(L)}(x_1, x_2)\|_F$, where $L$ denotes the last layer, is $\Theta(n)$ with high probability over random initialization.

Using Equation (19), we can show that for the off-diagonal elements we have

$$
\left| \Theta_{ij}^{(L)}(x_1, x_2) \right| \leq \frac{\left\| \Theta^{(L-1)}(x_1, x_2) \right\|_F}{n} \log \frac{2}{\delta} \tag{25}
$$

with probability at least $1 - \delta$. Applying a union bound over the $O^2 - O$ off-diagonal elements, we have:

$$
\forall i \neq j \in [O]; \quad \left| \Theta_{ij}^{(L)}(x_1, x_2) \right| \leq \frac{\left\| \Theta^{(L-1)}(x_1, x_2) \right\|_F}{n} \log \frac{2 O^2}{\delta}. \tag{26}
$$

Likewise, for the diagonal elements we have

$$
\left| \Theta_{ii}^{(L)}(x_1, x_2) - \frac{1}{n} \operatorname{Tr}\left( \Theta^{(L-1)}(x_1, x_2) \right) \right| \leq \frac{\left\| \Theta^{(L-1)}(x_1, x_2) \right\|_F}{n} \log \frac{2}{\delta} \tag{27}
$$

with probability at least $1 - \delta$, where $\mathbb{E}\left[ V_i^{(L)\top} \Theta^{(L-1)}(x_1, x_2) V_i^{(L)} \right] = \frac{1}{n} \operatorname{Tr}\left( \Theta^{(L-1)}(x_1, x_2) \right)$. Using Lemma B.12, we can see that $\operatorname{Tr}(\Theta^{(L-1)}(x_1, x_2)) = \Theta(n^2) \log \frac{2n}{\delta}$ with probability at least $1 - \delta$, as long as the minimum width $n_0$ satisfies the conditions of Lemma B.12. Putting it all together and applying a union bound over the diagonal elements of $\Theta^{(L)}(x_1, x_2)$, we can see that

$$
\forall i \in [O]; \quad \left| \Theta_{ii}^{(L)}(x_1, x_2) - \Theta(n \log n) \right| \leq \sqrt{n} \log \frac{2O}{\delta} \tag{28}
$$

with probability at least $1 - \delta$. Combining the bounds on the off-diagonal and diagonal elements of the kernel matrix, we can see that $\|\Theta^{(L)}\|_F = \Theta(n)$ with high probability over random initialization. The result follows from Lemma B.1.

Lemma B.12. Consider a NN under Setting A with $L \geq 2$ and ReLU activation function. The dot product of two post-activations $|f^{(l)}(x_1)^\top f^{(l)}(x_2)|$ grows linearly with the width of the network with high probability over random initialization.

Proof. We begin by showing that the dot product of the post-activations of the first layer of the NN under Setting A grows linearly, using a simple Hoeffding bound. Next, we apply Theorem 1 of Arpit & Bengio (2019) to show that the magnitude of this dot product is preserved in the later layers. First, note that as we assume the data lies in a compact set and as the post-activations are all non-negative, one can see that for each $x_1, x_2 \in \mathcal{X}$ and for all $l \in [L]$ we have that:

$$
\min_{x \in \mathcal{X}} \|f^{(l)}(x)\|^2 \leq f^{(l)}(x_1)^\top f^{(l)}(x_2) \leq \max_{x \in \mathcal{X}} \|f^{(l)}(x)\|^2. \tag{29}
$$

To simplify the proofs in this lemma, we use this fact and instead work with the norm of the post-activations, noting that the final result on the norms can be accordingly applied to dot products of post-activations of different inputs. For the first layer, we have that $f^{(1)}(x) = \phi(W^{(1)} x)$ where $W_{ij}^{(1)} \sim \mathcal{N}(0, \frac{1}{n_0})$ and $x \in \mathbb{R}^{n_0}$. Hence, each $f^{(1)}(x)_i$ is i.i.d. and distributed as $\mathcal{N}^R(0, \frac{\|x\|^2}{n_0})$, where $\mathcal{N}^R$ is the rectified normal distribution. Using the properties of the rectified normal distribution, we get that:

$$
\mathbb{E}[\|f^{(1)}(x)\|^2] = \frac{n \|x\|^2}{n_0}. \tag{30}
$$

Next, as the rectified normal is a sub-gaussian distribution, we can apply the Hoeffding bound to see that

$$
\Pr\left[ \left| \|f^{(1)}(x)\|^2 - n \mu_1 \right| \leq \varepsilon_1 \right] \geq 1 - \delta_1 \tag{31}
$$

where $\delta_1 = 2 \exp\left( -\frac{\varepsilon_1^2}{2 \sigma^2} \right)$, $\mu_1 = \frac{\|x\|^2}{n_0}$, and $\sigma$ is the standard deviation of $\|f^{(1)}(x)\|^2$ over random initialization of the weights of the first layer. Next, we can adapt Theorem 1 of Arpit & Bengio (2019) to see that for post-activations of layer $l \in \{2, \dots, L\}$,

$$
\Pr\left[ (1 - \varepsilon)^{l-1} \|f^{(1)}(x)\|^2 \leq \|f^{(l)}(x)\|^2 \leq (1 + \varepsilon)^{l-1} \|f^{(1)}(x)\|^2 \right] \geq 1 - \delta_2 \tag{32}
$$

where $\delta_2 = 2 N (l-1) \exp\left( -n \left( \frac{\varepsilon}{4} + \log \frac{2}{1 + \sqrt{1 + \varepsilon}} \right) \right)$, $N$ is the size of our dataset, and $\varepsilon$ is any small positive constant. Combining this with the result on the first layer's post-activations, we can see that

$$
(1 - \varepsilon)^{l-1} (n \mu_1 - \varepsilon_1) \leq \|f^{(l)}(x)\|^2 \leq (1 + \varepsilon)^{l-1} (n \mu_1 + \varepsilon_1) \tag{33}
$$

with probability at least $1 - \delta_1 - \delta_2$. Hence, for any $\delta > 0$, $n = \Omega(\log \frac{1}{\delta})$, and $(x_1, x_2) \in \mathcal{X} \times \mathcal{X}$, one can find constants $G_1^{(l)} = \Omega\left( \log\left( \frac{n}{\delta} \right) \right)$ and $G_2^{(l)} = \mathcal{O}\left( \log\left( \frac{n}{\delta} \right) \right)$ for the post-activations of layer $l$ such that

$$
G_1^{(l)} n \leq f^{(l)}(x_1)^\top f^{(l)}(x_2) < G_2^{(l)} n \tag{34}
$$

with probability at least $1 - \delta$ (note that the exact values of $G_1^{(l)}$ and $G_2^{(l)}$ depend on $l$ and $N$ too).

# C. pNTK's Maximum Eigenvalue Converges to eNTK's Maximum Eigenvalue as Width Grows

Proof of Theorem 4.4. Note that, as both the pNTK and eNTK are symmetric PSD matrices, their maximum eigenvalues are equal to their spectral norms. Furthermore, the spectral norm of a matrix is upper-bounded by its Frobenius norm.
Now, note that according to the triangle inequality, we have + +$$ +\begin{array}{l} \left\| \Theta \left(x _ {1}, x _ {2}\right) \right\| = \left\| \hat {\Theta} \left(x _ {1}, x _ {2}\right) \otimes I _ {O} + \left(\Theta \left(x _ {1}, x _ {2}\right) - \hat {\Theta} \left(x _ {1}, x _ {2}\right) \otimes I _ {O}\right) \right\| \tag {35} \\ \leq \left\| \tilde {\Theta} \left(x _ {1}, x _ {2}\right) \otimes I _ {O} \right\| + \left\| \Theta \left(x _ {1}, x _ {2}\right) - \tilde {\Theta} \left(x _ {1}, x _ {2}\right) \otimes I _ {O} \right\| \\ \end{array} +$$ + +Thus + +$$ +\left\| \Theta \left(x _ {1}, x _ {2}\right) \right\| - \left\| \hat {\Theta} \left(x _ {1}, x _ {2}\right) \otimes I _ {O} \right\| \leq \left\| \Theta \left(x _ {1}, x _ {2}\right) - \hat {\Theta} \left(x _ {1}, x _ {2}\right) \otimes I _ {O} \right\|. \tag {36} +$$ + +which according to $(\ref{eq:1})$ together with the fact that for any matrix $A$ , $\lambda_{\max}(A \otimes I) = \lambda_{\max}(A)$ implies that with probability at least $1 - \delta$ , + +$$ +\begin{array}{l} \left| \lambda_ {\max } \left(\Theta \left(x _ {1}, x _ {2}\right)\right) - \lambda_ {\max } \left(\hat {\Theta} \left(x _ {1}, x _ {2}\right)\right) \right| \leq \\ 4 O \left(C _ {1} ^ {(L - 1)} + C _ {2} ^ {(L - 1)}\right) \sqrt {n} \max \left(\sqrt {\log \frac {4 O ^ {2}}{\delta}}, \sqrt {2} \log \frac {4 O ^ {2}}{\delta}\right). 
\tag {37} \\ \end{array} +$$ + +Moreover, as mentioned in the proof of ??, combining the previous inequality with the fact that $\lambda_{\max}(\Theta(x_1, x_2)) \geq \Omega(n)$ with high probability shows that there exists $\delta'$ and $n_0$ such that + +$$ +\left| \frac {\lambda_ {\max } (\Theta \left(x _ {1} , x _ {2}\right)) - \lambda_ {\max } (\hat {\Theta} \left(x _ {1} , x _ {2}\right))}{\lambda_ {\max } (\Theta \left(x _ {1} , x _ {2}\right))} \right| \leq \mathcal {O} (1 / \sqrt {n}) \tag {38} +$$ + +with probability $1 - \delta^{\prime}$ over random initialization for $n > n_0$ as desired. + +# D. Kernel Regression Using pNTK vs Kernel Regression Using eNTK + +Proof of Theorem 4.5. We start by proving a simpler version of a theorem, and then show a correspondence that expands the result of the simpler proof to the original Theorem. Assuming $|\mathcal{X}| = |\mathcal{Y}| = N$ (training data), we define + +$$ +h (x) = \Theta \left(x _ {1}, \mathcal {X}\right) \Theta \left(\mathcal {X}, \mathcal {X}\right) ^ {- 1} \mathcal {Y} \text {a n d} \hat {h} (x) = \left(\hat {\Theta} \left(x _ {1}, \mathcal {X}\right) \otimes I _ {O}\right) \left(\hat {\Theta} (\mathcal {X}, \mathcal {X}) \otimes I _ {O}\right) ^ {- 1} \mathcal {Y}. \tag {39} +$$ + +Note that as the result of kernel regression (without any regularization) does not change with scaling the kernel with a fixed scalar, we can use a weighted version of the kernels mentioned in the previous equation without loss of generality. Accordingly, we define + +$$ +\alpha = \left(\frac {1}{n} \Theta (\mathcal {X}, \mathcal {X})\right) ^ {- 1} \mathcal {Y} \text {a n d} \hat {\alpha} = \left(\frac {1}{n} \hat {\Theta} (\mathcal {X}, \mathcal {X}) \otimes I _ {O}\right) ^ {- 1} \mathcal {Y}. 
\tag {40} +$$ + +Using the fact that $\hat{M}^{-1} - M^{-1} = -\hat{M}^{-1}(\hat{M} - M)M^{-1}$ and $(A \otimes I)^{-1} = A^{-1} \otimes I$ we can show that + +$$ +\hat {\alpha} - \alpha = - \hat {\Theta} (\mathcal {X}, \mathcal {X}) ^ {- 1} \otimes I _ {O} \left(\frac {1}{n} \hat {\Theta} (\mathcal {X}, \mathcal {X}) \otimes I _ {O} - \frac {1}{n} \Theta (\mathcal {X}, \mathcal {X})\right) ^ {- 1} \Theta \left(x _ {1}, x _ {2}\right) \mathcal {Y} \tag {41} +$$ + +Assume $\lambda = \min \left(\lambda_{\min}(\Theta (\mathcal{X},\mathcal{X})),\lambda_{\min}(\hat{\Theta} (\mathcal{X},\mathcal{X}))\right)$ . Then + +$$ +\left\| \hat {\alpha} - \alpha \right\| \leq \frac {1}{\lambda^ {2}} \left\| \frac {1}{n} \hat {\Theta} (\mathcal {X}, \mathcal {X}) \otimes I _ {O} - \frac {1}{n} \Theta (\mathcal {X}, \mathcal {X}) \right\| \| \mathcal {Y} \| \tag {42} +$$ + +Plugging into the formula for kernel regression, we get that + +$$ +\begin{array}{l} \hat {h} (x) - h (x) = \left(\frac {1}{n} \hat {\Theta} (x, \mathcal {X}) \otimes I _ {O}\right) \hat {\alpha} - \frac {1}{n} \Theta (x, \mathcal {X}) \alpha \tag {43} \\ = \left(\frac {1}{n} \hat {\Theta} (x, \mathcal {X}) \otimes I _ {O} - \frac {1}{n} \Theta (x, \mathcal {X})\right) \hat {\alpha} + \frac {1}{n} \Theta (x, \mathcal {X}) (\hat {\alpha} - \alpha) \\ \end{array} +$$ + +Thus + +$$ +\begin{array}{l} \| \hat {h} (x) - h (x) \| \leq \| \frac {1}{n} \hat {\Theta} (x, \mathcal {X}) \otimes I _ {O} - \frac {1}{n} \Theta_ {f} (x, \mathcal {X}) \| \| \hat {\alpha} \| + \| \frac {1}{n} \Theta (x, \mathcal {X}) \| \| \hat {\alpha} - \alpha \| \\ \leq \frac {1}{\lambda} \| \frac {1}{n} \hat {\Theta} (x, \mathcal {X}) \otimes I _ {O} - \frac {1}{n} \Theta (x, \mathcal {X}) \| \| \mathcal {Y} \| \tag {44} \\ + \frac {1}{\lambda^ {2}} \| \frac {1}{n} \Theta (x, \mathcal {X}) \| \| \frac {1}{n} \hat {\Theta} (\mathcal {X}, \mathcal {X}) \otimes I _ {O} - \frac {1}{n} \Theta (\mathcal {X}, \mathcal {X}) \| \| \mathcal {Y} \|. 
\\ \end{array} +$$ + +Now, note that since for a block matrix $A$ with blocks $A_{ij}$ we have $\| A \| \leq \sum_{i,j} \| A_{ij} \|$, it follows that for any matrix-valued kernel $K$ + +$$ +\| K (\mathcal {X}, \mathcal {X}) \| \leq \sum_ {x _ {1}, x _ {2} \in \mathcal {X}} \| K (x _ {1}, x _ {2}) \|. \tag {45}
+$$ + +Using this fact, we can rewrite the bound as + +$$ +\begin{array}{l} \left\| \hat {h} (x) - h (x) \right\| \leq \frac {N}{\lambda} \left\| \frac {1}{n} \hat {\Theta} \left(x, x _ {1} ^ {*}\right) \otimes I _ {O} - \frac {1}{n} \Theta \left(x, x _ {1} ^ {*}\right) \right\| \| \mathcal {Y} \| \tag {46} \\ + \frac {N ^ {2}}{\lambda^ {2}} \left\| \frac {1}{n} \Theta (x, \mathcal {X}) \right\| \left\| \frac {1}{n} \hat {\Theta} (x _ {2} ^ {*}, x _ {3} ^ {*}) \otimes I _ {O} - \frac {1}{n} \Theta (x _ {2} ^ {*}, x _ {3} ^ {*}) \right\| \| \mathcal {Y} \| \\ \end{array}
+$$ + +for some particular $x_1^*, x_2^*, x_3^* \in \mathcal{X}$ . Using $(\ref{eq:1})$ , we can see that with probability at least $1 - \delta$ , + +$$ +\left\| \hat {h} (x) - h (x) \right\| \leq \frac {4 N O \alpha}{\lambda \sqrt {n}} \max \left(\sqrt {\log \frac {4 O ^ {2}}{\delta}}, \sqrt {2} \log \frac {4 O ^ {2}}{\delta}\right) \| \mathcal {Y} \| \left(1 + \frac {N}{\lambda} \left\| \frac {1}{n} \Theta (x, \mathcal {X}) \right\|\right).
\tag {47} +$$ + +To show the correspondence between $\hat{h} (x)$ and $\hat{f}^{lin}(x)$ , as in (5), note that + +$$ +\begin{array}{l} \hat {h} (x) = \left(\hat {\Theta} (x, \mathcal {X}) \otimes I _ {O}\right) \left(\hat {\Theta} (\mathcal {X}, \mathcal {X}) ^ {- 1} \otimes I _ {O}\right) \mathcal {Y} \\ = \left(\hat {\Theta} (x, \mathcal {X}) \hat {\Theta} (\mathcal {X}, \mathcal {X}) ^ {- 1} \otimes I _ {O}\right) \mathcal {Y} \tag {48} \\ = \operatorname {vec} \left(I _ {O} \mathcal {Y} _ {v} \hat {\Theta} (x, \mathcal {X}) \hat {\Theta} (\mathcal {X}, \mathcal {X}) ^ {- 1}\right) \\ \end{array}
+$$ + +where $\mathcal{Y}_v = \operatorname{vec}^{-1}(\mathcal{Y})$ is the result of the inverse vectorization operation, converting the $NO \times 1$ vector into an $O \times N$ matrix. Thus, $\hat{h}(x) = \hat{\Theta}(x, \mathcal{X})\hat{\Theta}(\mathcal{X}, \mathcal{X})^{-1}\mathcal{Y}'$ , where $\mathcal{Y}'$ is the $N \times O$ matrix reshaped from the $NO \times 1$ vector $\mathcal{Y}$ . + +# E. Extending the Proofs to Other Architectures + +We begin our description of how to extend the proofs to other architectures by sketching how the dense weights can be replaced by other layers of choice, such as convolutions. First, note how the linear weights are used in Equation (17). As mentioned in Section 6 of Yang (2020), we can accordingly write the same expansion for other forward computational graphs and derive the corresponding canonical decomposition for them. In subsection 6.2.1, Yang (2020) + +![](images/f2278296b435f88b91297237cf8ef8889353b5d062e470e09575df773f2131ff.jpg) +Figure 13: Comparing the magnitude of the sum of on-diagonal and off-diagonal elements of $\Theta_{\theta}$ at initialization and throughout training, based on 1000 points from CIFAR-10. The reported numbers are the average of $1000 \times 1000$ kernels each having a shape of $10 \times 10$ . The same subset has then been used to train the NN using SGD.
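
The vec-trick correspondence in Equation (48) can be checked numerically. The following is a minimal sketch (the sizes $N$, $O$ and the random PSD kernel are illustrative placeholders, not values from the experiments): kernel regression with the Kronecker-expanded kernel $\hat{\Theta} \otimes I_O$ yields exactly the same prediction as scalar-kernel regression against the reshaped $N \times O$ label matrix.

```python
import numpy as np

rng = np.random.default_rng(0)
N, O = 5, 3  # number of training points and output dimension (illustrative sizes)

# A scalar-valued pNTK on the training set (symmetric, well-conditioned) and
# its row at a test point; both are random stand-ins for \hat{\Theta}.
A = rng.normal(size=(N, N))
K_train = A @ A.T + N * np.eye(N)      # stand-in for \hat{\Theta}(X, X)
k_test = rng.normal(size=(1, N))       # stand-in for \hat{\Theta}(x, X)
Y = rng.normal(size=N * O)             # labels, vectorized as an NO x 1 vector

# Kernel regression with the Kronecker-expanded kernel, as in Eq. (39).
lhs = np.kron(k_test, np.eye(O)) @ np.linalg.solve(np.kron(K_train, np.eye(O)), Y)

# Equivalent form from Eq. (48): reshape Y to N x O and use the scalar kernel.
Y_mat = Y.reshape(N, O)                # the matrix \mathcal{Y}'
rhs = (k_test @ np.linalg.solve(K_train, Y_mat)).ravel()

print(np.allclose(lhs, rhs))  # True: the two formulations agree
```

The reshape convention here is row-major, so each training point contributes a contiguous block of $O$ entries, matching the block layout of $\hat{\Theta} \otimes I_O$.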
+ +![](images/fbcf3712da4c6242f2892823fd63905289732d530e4a53fae4ac0ff98f1ea6f1.jpg) + +![](images/eac00a38c81a9debcda3b2f6c7418afca0874d75c7eb71231051038d4eeb6e0d.jpg) + +![](images/c4064527a44a06ff44e2f48110c8bb36b526019595df8f291a701b6634ec51ca.jpg) + +![](images/30961a70b6259627e888755bf25015216d9b37721cc18a518e6c4b2b599c9140.jpg) +Figure 14: Evaluating the relative difference of Frobenius norm of $\Theta_{\theta}(\mathcal{D},\mathcal{D})$ and $\hat{\Theta}_{\theta}(\mathcal{D},\mathcal{D})\otimes I_O$ at initialization and throughout training, based on 1000 points from CIFAR-10. + +![](images/80ce3c1d3305a6eb641fabf2385971031768a8899c46dead0782d8e4fd44f6b6.jpg) + +![](images/6bf1ce7acc2758fa1eee0f4984f369935ee4fefaa23ac654e84410c5f9db5a74.jpg) + +![](images/2ad1697a984de23933fcbd719f47e316cee64eb5afea657b705d6d066c739f78.jpg) + +![](images/66ec755c85a3788c96aaf9f2001446aa5efc48b12a4d4f5efdf7c0bc06665e7a.jpg) +Figure 15: Evaluating the relative difference of $\lambda_{\mathrm{max}}$ of $\Theta_{\theta}(\mathcal{D},\mathcal{D})$ and $\hat{\Theta}_{\theta}(\mathcal{D},\mathcal{D})$ at initialization and throughout training, based on kernels on a subset ( $|\mathcal{D}| = 1000$ ) of points from CIFAR-10. + +![](images/c38262e2399b7c9dae6bd0cef4550477be9ef87ff2bfb29fef458e17ad9afb71.jpg) + +![](images/c70c4efb344370235e843a20c8a14b055bd5d826109c1a3d7ae3b1363922145b.jpg) + +![](images/7461c218d259b50e1ef95f0a758d1a7255f0a362c2e16d881351fd5d5a4052a6.jpg) + +![](images/f5003dfa00503f5a138f6e21c98215d36ab9d2ec86a53c07020b502045203cb9.jpg) +Figure 16: Evaluating the relative norm difference of kernel regression outputs using eNTK and pNTK as in Equation (4) and Equation (5) at initialization and throughout training. The kernel regression has been done on $|\mathcal{D}| = 1000$ training points and $|\mathcal{X}| = 500$ test points randomly selected from CIFAR-10's train and test sets. 
+ +![](images/c4af42c1b4fbcb4a9e919bb0e912689c95bcd745a794bd32858b314f1e322114.jpg) + +![](images/7b28520bc8e114801efea2ad54c4b194444f9bd24f7c606df60173f496db5a2c.jpg) + +![](images/0d23f8a59f99aee1f21c4695e26f30b3682db82aa76802be50a92a63c910c5da.jpg) + +provides a concrete example of how one can derive this expansion for a general RNN-like architecture. As the proofs provided in this section depend on the MLP structure only through the canonical decomposition, one can extend them to a general architecture by deriving the corresponding canonical decomposition of that architecture. + +![](images/7deab891461035c43dcb8813ba347b5da96dd8ed15399a39f6fd9fae4c8a3d7a.jpg) +Figure 17: Evaluating the difference in test accuracy of kernel regression using pNTK as in (5) vs the final model $f$ throughout SGD training on the full CIFAR-10 dataset. How much worse would it be to "give up" on SGD at this point and train $\hat{f}^{lin}$ with the current representation? + +![](images/08ad2a327acde481b39f287ea9f34ee496f8cc3dca5088618936aa6b86d5b534.jpg) + +![](images/437755007dc6a8cc76bd871882db121182946e92c4b1ac108fc24be925d756e0.jpg) + +![](images/dc8e65a1e7beb215a1f2a27ce3b78324bfae710ac7cbfb2df83b66bef6906cea.jpg) + +![](images/56799e35c120a2c2f9c0334e45173cb25de9935fdf5a3b0f023fd0a91b7dcfd9.jpg) +Figure 18: Experimental evaluation of tightness of approximation bounds + +![](images/a96ffea7929bfd5b1de61a281d2e7063df08c8aa1a62487f831ef5779b252407.jpg) + +![](images/9dc696ff60618140d54ea2a002526895212d7fe7ea02ed7653fdf3b975cdd13c.jpg) + +![](images/66fa2ebaa301f60e9c127e507ee58131481a5238452139cc416f01edb0c02da3.jpg) + +Non-Gaussian Weights: According to the strategy used in the proofs, we need the individual weights to be distributed such that the product of two independent scalar weights (as in Equation (17)) remains sub-exponential. Hence, any sub-Gaussian initialization method, such as any bounded initialization (e.g.
truncated normal or uniform on an interval) can be used, and the same proof structure would support the same convergence rate, albeit with different constants (independent of $n$ ). + +Non-ReLU activations: In general, the proofs rely on the ReLU activation through Lemma B.12, which gives a concentration bound on the absolute value of the dot product of the post-activations of each layer of the NN. To use other nonlinearities, we would only need an analogous result for that nonlinearity; the other proofs follow without requiring any other significant change. + +Experimental Evaluation: To provide further experimental support for this argument, we have conducted an ablation study on the FCN architecture with different nonlinearities and with truncated Gaussian initialization (Figures 3, 13, 15 and 16). As seen in the provided figures, the impact of the nonlinearity and initialization method, as long as they follow Setting A, is marginal. + +# F. More Details on Kernel Regression Using pNTK on Full CIFAR-10 Dataset + +Figure 17 compares the accuracy of $\hat{f}^{lin}(x)$ with parameters derived at epoch $E \in \{0, 50, 100, 150, 200\}$ of training the NN with SGD. On the y-axis, the reported number is $\hat{f}^{lin}(x) - f^*(x)$ where $f^*$ denotes the final model obtained after training $f$ for 200 epochs. As seen in Figure 17, the architecture of the model has a significant impact on how well the linearization predicts the final accuracy of the fully-trained model. However, as proven in Theorem 4.1 in conjunction with the linearization approximations provided in Lee et al. (2019), as width grows, this approximation becomes more accurate. One unexplored observation from this experiment is the fact that linearization with trained parameters significantly outperforms linearization at initialization, which is intuitive but has not yet been rigorously investigated. + +# G.
Experimental Evaluation: Tightness of bounds + +Figure 18 presents experimental evaluations that analyze the tightness of the approximation bound. The results are presented for the fully connected network used in the experiment. \ No newline at end of file diff --git a/afastwellfoundedapproximationtotheempiricalneuraltangentkernel/images.zip b/afastwellfoundedapproximationtotheempiricalneuraltangentkernel/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..309858c34ae7bd505f2261d02244fc63563e4967 --- /dev/null +++ b/afastwellfoundedapproximationtotheempiricalneuraltangentkernel/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:a2ce0cf824cd87624546eb7e01a20b396f99f968096ac9ec5dfb4104298afd70 +size 1610543 diff --git a/afastwellfoundedapproximationtotheempiricalneuraltangentkernel/layout.json b/afastwellfoundedapproximationtotheempiricalneuraltangentkernel/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..50eef760eae9aca7d26ce3baa9c3f599227eb80b --- /dev/null +++ b/afastwellfoundedapproximationtotheempiricalneuraltangentkernel/layout.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:269f1cb514abd3198e0c24c2a3fff55471e8bf0308fe47cd85dd10db7359ca66 +size 1042137 diff --git a/aflexiblediffusionmodel/3ec89fc7-ac0c-47f3-8162-4115f44b8b84_content_list.json b/aflexiblediffusionmodel/3ec89fc7-ac0c-47f3-8162-4115f44b8b84_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..8786b3f7395fd2858865fb7aaaffed8c50360b64 --- /dev/null +++ b/aflexiblediffusionmodel/3ec89fc7-ac0c-47f3-8162-4115f44b8b84_content_list.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:ec664216502a82ac79a6d74e8f44c2c620fdbfd7a6b92d183ed4ece79b7e88e2 +size 135989 diff --git a/aflexiblediffusionmodel/3ec89fc7-ac0c-47f3-8162-4115f44b8b84_model.json b/aflexiblediffusionmodel/3ec89fc7-ac0c-47f3-8162-4115f44b8b84_model.json new file 
mode 100644 index 0000000000000000000000000000000000000000..647c1d2e02061abbda78329149b85cef290823ca --- /dev/null +++ b/aflexiblediffusionmodel/3ec89fc7-ac0c-47f3-8162-4115f44b8b84_model.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:bb0a94df3a5ba402cab519b77674f09561c5da533975054478c310d10d4a53f1 +size 164838 diff --git a/aflexiblediffusionmodel/3ec89fc7-ac0c-47f3-8162-4115f44b8b84_origin.pdf b/aflexiblediffusionmodel/3ec89fc7-ac0c-47f3-8162-4115f44b8b84_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..8fdc2fec01519e32add89055611927da5c7f44be --- /dev/null +++ b/aflexiblediffusionmodel/3ec89fc7-ac0c-47f3-8162-4115f44b8b84_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:a23124ca88e0b2f3451032cbd7ce2705e063c555fbd87a8eecc83ac54d3db9e4 +size 3098393 diff --git a/aflexiblediffusionmodel/full.md b/aflexiblediffusionmodel/full.md new file mode 100644 index 0000000000000000000000000000000000000000..8984787705be543348e1e5085249d2be94314dd7 --- /dev/null +++ b/aflexiblediffusionmodel/full.md @@ -0,0 +1,730 @@ +# Weitao Du $^{*1}$ He Zhang $^{*2}$ Tao Yang $^{*2}$ Yuanqi Du $^{*3}$ + +# Abstract + +Denoising diffusion (score-based) generative models have become a popular choice for modeling complex data. Recently, a deep connection between forward-backward stochastic differential equations (SDEs) and diffusion-based models has been established, leading to the development of new SDE variants such as sub-VP and critically-damped Langevin. Despite the empirical success of some hand-crafted forward SDEs, many potentially promising forward SDEs remain unexplored. In this work, we propose a general framework for parameterizing diffusion models, particularly the spatial part of forward SDEs, by leveraging the symplectic and Riemannian geometry of the data manifold. We introduce a systematic formalism with theoretical guarantees and connect it with previous diffusion models. 
Finally, we demonstrate the theoretical advantages of our method from a variational optimization perspective. We present numerical experiments on synthetic datasets, MNIST, and CIFAR10 to validate the effectiveness of our framework. + +# 1. Introduction + +Denoising diffusion (score-based) models, which originated in non-equilibrium statistical physics, have recently shown impressive success in sample generation across a wide range of modalities, including images (Ho et al., 2020; Nichol & Dhariwal, 2021; Song et al., 2020c; Dhariwal & Nichol, 2021; Rombach et al., 2022), 3D point clouds (Luo & Hu, 2021; Du et al., 2021), audio (Kong et al., 2020; Liu et al., 2021), and biomolecule generation (Xu et al., 2022; Hoogeboom et al., 2022; Schneuing et al., 2022). In addition to practical applications of various diffusion generative models, it is also desirable to analyze them in an appropriate and flexible framework, by which novel improvements can be further developed. + +*Equal contribution $^{1}$ Academy of Mathematics and Systems Science, Chinese Academy of Sciences $^{2}$ Institute of Artificial Intelligence and Robotics, Xi'an Jiaotong University $^{3}$ Cornell University. Correspondence to: Weitao Du, He Zhang, Tao Yang, Yuanqi Du. + +Proceedings of the $40^{th}$ International Conference on Machine Learning, Honolulu, Hawaii, USA. PMLR 202, 2023. Copyright 2023 by the author(s).
Furthermore, with the help of the Feynman-Kac formula and the Girsanov transform (Da Prato, 2014), the score-matching training scheme has been proved to be equivalent to a certain log-likelihood (ELBO) training in the infinite-dimensional path space (Huang et al., 2021). + +From the variational optimization point of view, although the ELBO optimization function of diffusion models explicitly contains both the forward and backward ingredients, the forward (noising) process is usually hand-crafted and set to be fixed throughout the training process (Huang et al., 2021). If we treat the forward-backward processes as an encoder-decoder pair, then there exists an obvious mismatch between the current training framework of diffusion models and other log-likelihood based models (e.g., Hierarchical VAE (Vahdat & Kautz, 2020)), which also optimize the encoder. Moreover, since the reverse (generative) process is uniquely determined by the forward process, the total flexibility of the model actually lies in parameterizing the forward process. Given the fact that different noise schedules have proven to affect the empirical performance (e.g., the different forward processes including VE, VP, sub-VP (Song et al., 2020c) and damped Langevin diffusion (Dockhorn et al., 2022) displayed distinct generation performances), freezing the forward process is both theoretically and practically incomplete. Therefore, the main research question of this paper is: Can we introduce a theoretically grounded parameterization for the forward process, so that the diffusion model can automatically optimize it from data? + +To address this problem, it is crucial to incorporate flexible parameterized forward processes into the general SDE framework of (Song et al., 2020c). Though the idea of training the forward process is intuitively reasonable, the implementation is far from straightforward. The first challenge is to find the appropriate sub-class within the vast function space of all stochastic processes. A hard constraint is that the stationary distribution of the candidate stochastic processes must be simple (usually the centered Gaussian), which will be set as the generative SDE's prior distribution. The second challenge is how to make sure that our parameterization is flexible enough to include all proper SDEs. In fact, even parameterizing the noise schedule of the forward process (the one-dimensional time component) alone can improve the diffusion model's performance, as shown in (Kingma et al., 2021). However, how to efficiently parameterize the spatial components of the forward process remains to be explored, especially taking into account the complex structure of the data distribution (Narayanan & Mitter, 2010). + +This paper concentrates on both theoretical and practical aspects of solving the flexibility challenge of the diffusion model in a unified way, emphasizing the spatial components of the forward process. First of all, inspired by concepts from Riemannian geometry and Hamiltonian Monte-Carlo methods, we define a flexible class of diffusion processes (FP-Diffusion) that rigorously satisfies the fixed Gaussian stationary distribution condition with theoretical guarantees. To highlight the advantages of flexible diffusion models, we also discuss the theoretical motivations and properties of parameterized forward processes from the variational optimization perspective. Furthermore, by introducing the flexible diffusion model, all sorts of regularizers for smoothing the diffusion paths (e.g., methods from continuous normalizing flows (Finlay et al., 2020; Onken et al., 2021)) can be implemented for designing better diffusion models. We empirically test some of them in the experiment section.
+ +Our major contributions are as follows: + +- We introduce a theoretically complete framework for parameterizing the forward process with the help of symplectic structures and the anisotropic Riemannian structure. Convergence properties as $t \to \infty$ are proved along the same route. +- To motivate the parameterization of the forward process, we analyze the implications of parameterizing the forward (noising) process from the variational optimization point of view and demonstrate how our method unifies previous diffusion models. Since this extension allows merging regularization terms into the training loss, we also provide experimental results in simulated scenarios and demonstrate how the diffusion path behaves under regularization. +- Beyond the general diffusion parameterization framework, we also develop a corresponding simplified version of our method with explicit formulas for efficient Monte-Carlo training. It enables us to perform comparative studies on relatively large-scale datasets, e.g., CIFAR10. + +# 2. Preliminaries and Related Works + +Given a data distribution $p(x)$ , we associate it with a Gaussian diffusion process (forward) that increasingly adds noise to the data; the high-level idea of diffusion generative models is then to approximate the real data distribution by fitting a multi-step denoising (backward) process. In a discrete setting, the forward process is formulated as an $N$-step Markov chain from real data $x$ to each noised $x_{t}$ : + +$$ +p \left(x _ {t} \mid x _ {t - 1}\right) = \mathcal {N} \left(\alpha_ {t} x _ {t - 1}, \beta_ {t} I\right), t \in \{1, \dots , N \}.
+$$ + +For the DDPM model (Ho et al., 2020), $\alpha_{t}$ is set to be $\alpha_{t} := \sqrt{1 - \beta_{t}}$ .
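
As a quick numerical illustration of this forward chain (the linear schedule and sample count below are arbitrary illustrative choices, not values from the paper), iterating $x_t = \sqrt{1-\beta_t}\, x_{t-1} + \sqrt{\beta_t}\, \epsilon_t$ drives any initial distribution toward $\mathcal{N}(0, 1)$:

```python
import numpy as np

rng = np.random.default_rng(0)
T, n = 1000, 100_000
betas = np.linspace(1e-4, 0.02, T)  # a common linear schedule; values are illustrative

# Start from "data": a Gaussian far from N(0, 1).
x = rng.normal(loc=3.0, scale=0.5, size=n)

# One DDPM forward step: x_t = sqrt(1 - beta_t) * x_{t-1} + sqrt(beta_t) * eps.
for beta in betas:
    x = np.sqrt(1.0 - beta) * x + np.sqrt(beta) * rng.normal(size=n)

# The chain contracts the signal and injects compensating noise, so after
# enough steps the samples are approximately N(0, 1) regardless of the data.
print(round(x.mean(), 2), round(x.std(), 2))
```

The per-step coefficients are chosen so that unit variance is a fixed point: if $\operatorname{Var}(x_{t-1}) = 1$ then $\operatorname{Var}(x_t) = (1-\beta_t) + \beta_t = 1$.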
Taking the continuous limit of $\beta_{t}$ (when $\sqrt{1 - \beta_t} \approx 1 - \frac{1}{2}\beta_t$ ), we find that $X_{t}$ satisfies the time-changed Ornstein-Uhlenbeck stochastic differential equation (SDE): + +$$ +d X _ {t} = - \frac {1}{2} \beta (t) X _ {t} d t + \sqrt {\beta (t)} d W _ {t}, \tag {1}
+$$ + +which is exactly the so-called variance-preserving diffusion process (VP) in (Song et al., 2020c). Therefore, DDPM can be treated as a discretization of the Ornstein-Uhlenbeck process. Following this line, (Song et al., 2020c) proposed to characterize different types of diffusion models by formulating the underlying SDE of each model: + +$$ +d X _ {t} = f \left(X _ {t}, t\right) d t + g (t) d W _ {t}, 0 \leq t \leq T \tag {2}
+$$ + +where $\{W_t\}_{t=0}^{\infty}$ denotes the standard Brownian motion, and the dimension is set to be the same as the data. Usually we choose a different time parameterization (time-change) for $t$ . Let $\beta(t)$ be a continuous function of time $t$ such that $\beta(t) > \beta(s) > 0$ for $0 < s < t$ ; then $\beta(t)$ is called a specific time schedule (time-change) of $t$ . It can be further shown that when $t \to \infty$ , the stationary distribution of Eq. 1 is the standard multivariate Gaussian $\mathcal{N}(0, I)$ (Hsu, 2002). On the other hand, SMLD diffusion models (Song & Ermon, 2019) can be seen as a discretization of the variance-exploding (VE) process ((9) of (Song et al., 2020c)) $\{X_t\}_{t=0}^{T}$ , which satisfies a different SDE: + +$$ +d X _ {t} = \sqrt {2 \sigma (t) \sigma^ {\prime} (t)} d W _ {t}. \tag {3}
+$$ + +A remarkable property of all the above SDE solution classes is the existence of a reverse process $Y_{t}$ with respect to each forward SDE $X_{t}$ , in the sense that the marginal distributions at each time and its corresponding reverse time match: + +$$ +p _ {t} (X _ {t}) \equiv q _ {T - t} (Y _ {T - t}), 0 \leq t \leq T.
+$$ + +![](images/f6b4f0cb672c6d77812fcfb9e5450f7f88238c9765cee3e5b0210d6473ad7316.jpg) +Figure 1. Evolution trajectories of fixed and flexible forward SDEs + +We name $Y_{t}$ the backward (denoising) stochastic process of the diffusion model. In other words, real data is generated by sampling from the Gaussian distribution and tracking the denoising process from time $T$ to 0. Remarkably, the underlying equation of the reverse-time process $Y_{t}$ can be derived analytically (Anderson, 1982; Song et al., 2020c): + +$$ +d Y _ {t} = \left[ f \left(Y _ {t}, t\right) - g ^ {2} (t) \nabla \log p _ {t} \left(Y _ {t}\right) \right] d t + g (t) d W _ {t}, \tag {4}
+$$ + +where $W_{t}$ is a Brownian motion running backward in time from $T$ to 0. Evidently, the unknown score function $s_t(x) \coloneqq \nabla \log p_t(x)$ depends on both the data distribution $p_0$ and the forward process $X_{t}$ . To estimate the score function and $Y_{t}$ , continuous diffusion models utilize various types of (weighted) score-matching procedures; we briefly review some typical examples in Section 3.2. + +We now summarize further related work: + +Diffusion Probabilistic Models (DPMs) as a generative model (Kingma & Welling, 2013; Goodfellow et al., 2014; Yang et al., 2021; Ren et al., 2021) were first introduced in (Sohl-Dickstein et al., 2015), as a probabilistic model inspired by non-equilibrium thermodynamics. The high-level idea is to treat the data distribution as the Gibbs (Boltzmann) equilibrium distribution (Friedli & Velenik, 2017); the generating process then corresponds to transitioning from non-equilibrium to equilibrium states (De Groot & Mazur, 2013).
DDPM (Ho et al., 2020) and (Nichol & Dhariwal, 2021; Song et al., 2020a; Watson et al., 2022; Jolicoeur-Martineau et al., 2021; Bao et al., 2022) further improve DPMs by introducing Gaussian Markov chains and various inference and sampling methods, through which the generative model is equivalent to a denoising diffusion model. (Vahdat et al., 2021) then introduces a latent-space diffusion, and the number of denoising steps is also increased to improve empirical performance. On the other hand, as we will show in this article, there are infinitely many processes (thermodynamical systems) that can connect non-equilibrium states to an equilibrium. + +Score Matching. Score-based energy models (Hyvärinen, 2005; Vincent, 2011) are based on minimizing the difference between the derivatives of the data and the model's log-density functions, which avoids calculating the normalization constant of an intractable distribution. Song & Ermon (2019); Song et al. (2020b) then introduced sliced score matching, which enabled scalable generative training by leveraging different levels of Gaussian noise and several empirical tricks. Song et al. (2020c; 2021) further studied how to perturb the data by a continuous stochastic process. Under this framework, Kingma et al. (2021) proposed to reparameterize and optimize the time variable of the forward process (the spatial components remain fixed) by the signal-to-noise ratio (SNR). From this point of view, our model can be seen as a novel spatial parameterization of the forward process, which takes into account the spatial inhomogeneity of the data distribution. + +# 3. Methods + +# 3.1. A General Framework for Parameterizing Diffusion Models + +From the preliminary section, we know that the stationary distribution of the forward process will also be the initial distribution of the denoising (generative) process. Therefore, it must be a simple distribution that we know how to sample from, typically the standard Gaussian.
In this article, we parameterize the spatial components of the forward process by considering the following SDE: + +$$ +d X _ {t} = f (X _ {t}) d t + \sqrt {2 R (X _ {t})} d W _ {t}, \tag {5}
+$$ + +under the hard constraint that the stationary distribution of $X_{t}$ is the standard Gaussian (the scaled Gaussian case is included in the Appendix). Introducing the time change $\beta(t)$ , by Ito's formula $X_{\beta(t)}$ satisfies a variant of Eq. 5: + +$$ +d X _ {\beta (t)} = f \left(X _ {\beta (t)}\right) \beta^ {\prime} (t) d t + \sqrt {2 \beta^ {\prime} (t) R \left(X _ {\beta (t)}\right)} d W _ {t}. \tag {6}
+$$ + +Compared with black-box parameterizations (e.g., (Zhang & Chen, 2021)), it is clear that the function class of $f(x)$ and $R(x)$ should be properly restricted to satisfy the diffusion model's theoretical assumptions. To solve this issue, we propose a flexible framework for parameterizing the forward processes, and the completeness of our parameterization will be proved in Appendix A.1. It turns out that the whole construction can be decomposed into two parts: the Riemannian metric and the symplectic form in $\mathbb{R}^n$ , inspired by ideas from the Riemannian Manifold Hamiltonian Monte-Carlo algorithm (Girolami & Calderhead, 2011; Betancourt, 2013; Seiler et al., 2014) and the anisotropic diffusion techniques of image processing and graph deep learning (Weickert, 1998; Perona & Malik, 1990; Alvarez et al., 1992). + +Intuitively, an anisotropic Riemannian metric implies that the space is curved, and the corresponding 'inhomogeneous' Brownian motion will inject non-uniform noise along different directions. On the other hand, the symplectic form is crucial for defining the dynamics of a given Hamiltonian.
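
A minimal numerical sketch of this intuition (the constant diagonal matrix $D$ is an arbitrary illustrative choice, not a construction from the paper): an anisotropic Ornstein-Uhlenbeck process $dX_t = -\frac{1}{2} D X_t\, dt + \sqrt{D}\, dW_t$ injects noise of different strength along each direction, yet the drift shrinks each direction at the matching rate, so $\mathcal{N}(0, I)$ remains the stationary distribution.

```python
import numpy as np

rng = np.random.default_rng(0)
d_vals = np.array([0.5, 2.0])   # per-direction noise strengths (illustrative)
dt, steps, n = 0.01, 2000, 20_000

# Euler-Maruyama for dX = -1/2 D X dt + sqrt(D) dW with D = diag(d_vals).
x = rng.normal(size=(n, 2))     # initialize at the claimed stationary law
for _ in range(steps):
    x = x - 0.5 * dt * x * d_vals + np.sqrt(dt * d_vals) * rng.normal(size=(n, 2))

emp_cov = np.cov(x.T)
print(np.round(emp_cov, 1))     # close to the 2x2 identity
```

The empirical covariance stays near the identity (up to Monte-Carlo and discretization error) even though the two coordinates receive noise of very different magnitude.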
Both of them set the stage for performing diffusion on the data manifold, from the real data distribution to the standard multivariate normal distribution, whose density under the canonical volume form $dx_{1} \ldots dx_{n}$ is + +$$ +\frac {1}{\sqrt {(2 \pi) ^ {n}}} \exp \left(- \frac {1}{2} \| x \| ^ {2}\right) d x _ {1} \dots d x _ {n}. \tag {7}
+$$ + +Now we introduce these two geometric concepts in detail. In a coordinate system, a Riemannian metric can be identified with a symmetric positive-definite matrix: $R(x) \coloneqq \{R_{ij}(x)\}_{1 \leq i,j \leq n}$ (the Euclidean metric corresponds to the identity matrix). Given a smooth function $H(x)$ , recall that the Riemannian Langevin process satisfies the following SDE: + +$$ +d X _ {t} = - \tilde {\nabla} H (X _ {t}) d t + \sqrt {2} d B _ {t}, \tag {8}
+$$ + +where $\tilde{\nabla} H(x) := R^{-1}(x)\nabla H(x)$ is the gradient vector field of $H$ , and $B_{t}$ denotes the Riemannian Brownian motion (Hsu, 2002). In local coordinates, $B_{t}$ equals (see (13) of (Girolami & Calderhead, 2011)): + +$$ +\begin{array}{l} d B _ {t} ^ {i} = | R (X _ {t}) | ^ {- 1 / 2} \sum_ {j = 1} ^ {n} \frac {\partial}{\partial x _ {j}} \left(R _ {i j} ^ {- 1} (X _ {t}) | R (X _ {t}) | ^ {1 / 2}\right) d t \\ + \sqrt {R ^ {- 1} \left(X _ {t}\right)} d W _ {t} ^ {i}, \tag {9} \\ \end{array}
+$$ + +for $i \in \{1, 2, \dots, n\}$ . One crucial property of the Riemannian Langevin process (Wang, 2014) is that its stationary distribution $p(x)$ has the following form: + +$$ +p (x) \propto e ^ {- H (x)} d V (x),
+$$ + +where $dV(x) \coloneqq \sqrt{|R(x)|} dx_1 \ldots dx_n$ is the Riemannian volume form. Transforming back to the canonical volume form and taking $H(x) = \frac{1}{2}\|x\|^2 + \frac{1}{2}\log(|R(x)|)$ , we have proved the following lemma: + +Lemma 3.1. The stationary distribution of the SDE (Eq.
10) below is the standard Gaussian of $\mathbb{R}^n$ : + +$$ +\begin{array}{l} d X _ {t} = \frac {1}{2} \left[ - \sum_ {j} R _ {i j} ^ {- 1} \left(X _ {t}\right) \cdot \left(X _ {t}\right) _ {j} + \sum_ {j} \frac {\partial}{\partial x _ {j}} R _ {i j} ^ {- 1} \left(X _ {t}\right) \right] d t \\ + \sqrt {R ^ {- 1} \left(X _ {t}\right)} d W _ {t}. \tag {10} \\ \end{array}
+$$ + +Remark 3.2. It is worth mentioning that the infinitesimal generator of (10) is the Riemannian Laplacian $\Delta_R$ . When acting on a smooth function $f$ , + +$$ +\Delta_ {R} f := \frac {1}{\sqrt {| R (x) |}} \partial_ {i} (\sqrt {| R (x) |} (R ^ {- 1}) _ {i j} \partial_ {j} f).
+$$ + +Indeed, it has the same form as the anisotropic diffusion defined by (1.27) of (Weickert, 1998). The effectiveness of anisotropic noise is explored in Section 4.1. + +On the other hand, introducing a symplectic form $\omega$ allows us to do Hamiltonian dynamics in an even-dimensional space $\mathbb{R}^{2d}$ . Since a symplectic form is a non-degenerate closed 2-form, no such form exists in odd-dimensional spaces (every antisymmetric matrix of odd dimension is degenerate). In this article, we will restrict ourselves to a special type of symplectic form, which consists of constant anti-symmetric matrices $\{\omega_{ij}\}_{1\leq i,j\leq 2d}$ . Then the corresponding Hamiltonian dynamics of $H(x)$ is: + +$$ +d X _ {t} = \omega \nabla H (X _ {t}) d t. \tag {11}
+$$ + +We mainly focus on two remarkable properties of Hamiltonian dynamics: (i) It preserves the canonical volume form (the determinant of the corresponding Jacobian matrix always equals one); (ii) The Hamiltonian function $H(x)$ takes a constant value along the integral curves (see the remark in Appendix A). Using the change of variables formula, we conclude that the probability density of $X_{t}$ preserves the equilibrium Gibbs distribution: + +$$ +p (x) \propto e ^ {- H (x)} d x _ {1} \dots d x _ {n},
+$$ + +where $X_0$ is sampled from the Gibbs distribution.
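
For the harmonic-oscillator Hamiltonian on $\mathbb{R}^2$ with the canonical symplectic form, both conservation properties can be checked directly, since the flow is an explicit rotation (a minimal sketch; the initial point and time are arbitrary):

```python
import numpy as np

# Canonical symplectic form on R^2 and H(x) = ||x||^2 / 2, so omega @ grad H(x) = omega @ x.
omega = np.array([[0.0, 1.0], [-1.0, 0.0]])

def hamiltonian_flow(x0, t):
    # Exact solution of dx/dt = omega @ x: the rotation exp(t * omega),
    # whose determinant is 1 (volume-preserving, property (i)).
    c, s = np.cos(t), np.sin(t)
    return np.array([[c, s], [-s, c]]) @ x0

x0 = np.array([1.5, -0.5])
H0 = 0.5 * x0 @ x0
xt = hamiltonian_flow(x0, 2.7)
Ht = 0.5 * xt @ xt

print(abs(Ht - H0) < 1e-12)  # True: H is constant along the integral curve (property (ii))
```

Because the flow preserves both the volume form and $H$, it leaves the Gibbs density $e^{-H(x)}$ invariant, which is exactly the conclusion drawn above.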
+ +Let $H(x) = \frac{1}{2} \|x\|^2$ , the potential energy of the harmonic oscillator. Then by merging the Riemannian part (Eq. 10) and the symplectic part (Eq. 11) we obtain the following theorem: + +Theorem 3.3. Suppose $\omega$ is an anti-symmetric matrix, and $R^{-1}(x)$ is a symmetric positive-definite matrix-valued function of $x\in \mathbb{R}^n$ . Then the (unique) stationary distribution of (Eq. 12) below is the standard Gaussian (Eq. 7) of $\mathbb{R}^n$ : + +$$ +\begin{array}{l} d X _ {t} = \frac {1}{2} \left[ - \sum_ {j} R _ {i j} ^ {- 1} \left(X _ {t}\right) \cdot \left(X _ {t}\right) _ {j} - 2 \sum_ {j} \omega_ {i j} \cdot \left(X _ {t}\right) _ {j} \right. \\ \left. + \sum_ {j} \frac {\partial}{\partial x _ {j}} R _ {i j} ^ {- 1} \left(X _ {t}\right) \right] d t + \sqrt {R ^ {- 1} \left(X _ {t}\right)} d W _ {t}. \tag {12} \\ \end{array}
+$$ + +We name Eq. 12 our FP-Diffusion model; previous diffusion models (e.g., Eq. 1) are included by setting $\omega \equiv 0$ , $R^{-1}(x) \equiv I$ . In Appendix A.1, Theorem 3.3 is extended to scaled Gaussian distributions by direct computation. For a graphical presentation, Fig. 1 plots the VP stochastic trajectories (the green curves) connected with our FP-Diffusion forward trajectories (the white curves) under random initialization. We also provide an informal argument on how the anisotropic FP-Diffusions mix with the low-dimensional data distribution in Appendix A.1. + +Furthermore, to unify the critically-damped Langevin diffusion model (Dockhorn et al., 2022), our FP-Diffusion is straightforward to generalize to the case where the inverse Riemannian matrix $R^{-1}(x)$ degenerates (contains zero eigenvalues). Intuitively, the diffusion part $\sqrt{R^{-1}(x)} dW_{t}$ is the source of randomness (noise).
Suppose $R^{-1}(x)$ degenerates along the $i$-th direction (i.e., the direction corresponding to a zero eigenvalue). Then no randomness is imposed on this direction, and the $i$-th component $X_{t}^{i}$ will be frozen at $X_{0}^{i}$. In this case, $X_{t}$ may not converge to the Gaussian stationary distribution from a deterministic starting point. To remedy this issue, we impose additional restrictions, which lead us to the following corollary:

Corollary 3.4. Under two additional conditions: (1) the symplectic form $\omega \in \mathbb{R}^{2d\times 2d}$ has the block form $\omega = \left( \begin{array}{cc}0 & A\\ -A & 0 \end{array} \right)$ with a positive-definite matrix $A\in \mathbb{R}^{d\times d}$; (2) the inverse (semi-)Riemannian matrix $R^{-1}(x)$ has the block form $R^{-1}(x) = \left( \begin{array}{cc}0 & 0\\ 0 & B \end{array} \right)$ with a constant positive-definite symmetric matrix $B\in \mathbb{R}^{d\times d}$, we deduce that the forward diffusion $X_{s}$ converges to the standard Gaussian distribution:

$$
p _ {s} (X _ {s}) \xrightarrow {s \to \infty} \mathcal {N} (0, I).
$$

We demonstrate how this corollary recovers the critically damped diffusion model in Appendix A.

# 3.2. Parameterizing Diffusion Models from the Optimization Perspective

In this section, we illustrate the benefits of parameterized diffusion models from the variational optimization perspective. Recall that the ground-truth reverse-time SDE of the forward process $X_{t}$ is denoted by $Y_{t}$, and we parameterize $Y_{t}$ by $Y_{t}^{\theta}$:

$$
d Y _ {t} ^ {\theta} = \left[ f (Y _ {t} ^ {\theta}, t) - g ^ {2} \left(Y _ {t} ^ {\theta}, t\right) \mathbf {s} _ {\theta} \left(Y _ {t} ^ {\theta}, t\right) \right] d t + g (Y _ {t} ^ {\theta}, t) d W _ {t}, \tag {13}
$$

where $\mathbf{s}_{\theta}$ is the score neural network parameterized by $\theta$.
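To make the generative SDE (Eq. 13) concrete, the sketch below integrates it with an Euler–Maruyama scheme run backwards in time, in a 1-D toy case where the exact score is available in closed form and can stand in for $\mathbf{s}_\theta$. The VP-type forward process, the Gaussian data distribution, and all constants are illustrative assumptions rather than the paper's setup:

```python
import numpy as np

rng = np.random.default_rng(1)

# Forward process dX = -0.5 X dt + dW with Gaussian data p_0 = N(mu0, v0).
# Then p_t = N(m(t), v(t)) with m(t) = mu0 * exp(-t/2) and
# v(t) = v0 * exp(-t) + 1 - exp(-t), so the exact score -(x - m(t)) / v(t)
# plays the role of a perfectly trained score network s_theta.
mu0, v0 = 2.0, 0.25
T, steps, n = 5.0, 1000, 20000
dt = T / steps

def score(x, t):
    m = mu0 * np.exp(-0.5 * t)
    v = v0 * np.exp(-t) + (1.0 - np.exp(-t))
    return -(x - m) / v

y = rng.normal(size=n)               # initialize from the N(0, 1) prior at t = T
for k in range(steps, 0, -1):
    t = k * dt
    drift = -0.5 * y - score(y, t)   # f - g^2 * s_theta with f = -x/2, g = 1
    y = y - drift * dt + np.sqrt(dt) * rng.normal(size=n)

print(y.mean(), y.var())             # approaches the data statistics (mu0, v0)
```

With the exact score plugged in, the reverse integration transports the Gaussian prior back to the data distribution; in practice $\mathbf{s}_\theta$ is learned, which is the subject of the loss discussed next.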
Then the (explicit) score-matching loss function for optimization is

$$
L _ {\mathrm {E S M}} := \int_ {0} ^ {T} \mathbb {E} _ {X _ {s}} \left[ \frac {1}{2} \| \mathbf {s} _ {\theta} \left(X _ {s}, s\right) - \nabla \log p _ {s} \left(X _ {s}\right) \| _ {\Lambda (s)} ^ {2} \right] d s, \tag {14}
$$

where $\Lambda(s)$ is a positive-definite weighting matrix for the loss. Since $Y_{t}$ and $X_{t}$ share the same marginal distributions, under the condition that the parameterized generative process $Y_{t}^{\theta}$ matches $Y_{t}$ perfectly:

$$
\mathbf {s} _ {\theta} (x, t) \equiv \nabla \log p _ {t} (x) \tag {15}
$$

for all $t \in [0, T]$, the marginal distribution of $Y_{t}^{\theta}$ at $t = 0$ is exactly the data distribution.

The major obstacle to optimizing Eq. 14 directly is that we do not have access to the ground-truth score function $\nabla \log p_{s}(x)$. Fortunately, $L_{\mathrm{ESM}}$ can be transformed into a loss based on the accessible conditional score function $\nabla \log p_{X_s|X_0}(X_s)$ plus a constant (Song et al., 2020b;c) (for a fixed forward process $X_{s}$). More precisely, given two time slices $0 < s < t < T$,

$$
\begin{array}{l} \mathbb {E} _ {X _ {t}} \left\| \mathbf {s} _ {\theta} (X _ {t}, t) - \nabla \log p _ {t} (X _ {t}) \right\| ^ {2} \\ \equiv \mathbb {E} _ {X _ {s}, X _ {t}} \| \mathbf {s} _ {\theta} (X _ {t}, t) - \nabla \log p _ {t} (X _ {t} | X _ {s}) \| ^ {2} \\ + \underbrace {\mathbb {E} _ {X _ {t}} \| \nabla \log p _ {t} \left(X _ {t}\right) \| ^ {2} - \mathbb {E} _ {X _ {s} , X _ {t}} \| \nabla \log p _ {t} \left(X _ {t} \mid X _ {s}\right) \| ^ {2}} _ {\text {gap terms}}. \tag {16} \\ \end{array}
$$

Since the gap terms between the original and the conditional score-matching losses depend only on the forward noising process, one theoretical advantage of FP-Diffusion is that the gap terms are also parameterized.
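The decomposition in Eq. 16 is what makes training tractable: since the gap terms are independent of $\theta$, regressing on the conditional score has the same minimizer as regressing on the intractable marginal score. For a one-dimensional Gaussian forward kernel and a linear score model, both minimizers are available in closed form and can be compared directly. A small numerical check (the kernel parameters $a$ and $\sigma$ are illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)
a, sigma = 0.8, 0.6               # forward kernel x_t = a * x_0 + sigma * eps (illustrative)

x0 = rng.normal(size=100_000)     # data p_0 = N(0, 1), hence p_t = N(0, a^2 + sigma^2)
eps = rng.normal(size=x0.shape)
xt = a * x0 + sigma * eps

# Conditional score of p_t(x_t | x_0) = N(a x_0, sigma^2):
cond_score = -(xt - a * x0) / sigma**2

# Least-squares fit of a linear score model s_theta(x) = theta * x against the
# *conditional* score (the tractable objective); the minimizer is in closed form.
theta_dsm = np.sum(cond_score * xt) / np.sum(xt * xt)

# Minimizer of the explicit objective: the slope of the true marginal score.
theta_true = -1.0 / (a**2 + sigma**2)
print(theta_dsm, theta_true)      # the two agree up to Monte-Carlo error
```

The two slopes coincide up to sampling noise, illustrating that the conditional regression target is an unbiased surrogate for the marginal score.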
This formula is adapted from (Song et al., 2020b; Huang et al., 2021) by modifying the initial time, and full derivations are given in Appendix A for completeness.

On the other hand, compared with log-likelihood generative models such as normalizing flows and VAEs, the connection between score matching and the log-likelihood of the data distribution $\log p_0(x)$ (also the initial distribution of (12)) is not straightforward due to the additional forward process $X_{t}$. Hence, we turn to the variational view established in (Huang et al., 2021), where the ELBO (evidence lower bound) of the data's log-likelihood $\log p_0(x)$ is directly related to the score-matching scheme. More precisely, we have

$$
\log p _ {0} (x) \geq \mathcal {E} ^ {\infty} (x),
$$

and the ELBO $\mathcal{E}^{\infty}(x)$ of the infinite-dimensional path space is defined by

$$
\begin{array}{l} \mathcal {E} ^ {\infty} (x) := \mathbb {E} _ {X _ {T}} [ \log p _ {T} (X _ {T}) | X _ {0} = x ] \\ - \int_ {0} ^ {T} \mathbb {E} _ {X _ {s}} \left[ \frac {1}{2} \| \mathbf {s} _ {\theta} \| _ {g ^ {2}} ^ {2} + \nabla \cdot \left(g ^ {2} \mathbf {s} _ {\theta} - f\right) | X _ {0} = x \right] d s. \tag {17} \\ \end{array}
$$

The above implies that learning a diffusion (score) model is equivalent to maximizing the ELBO over the variational path space defined by the generative process $Y_{t}^{\theta}$. Thus, treating $f(x,t)$ and $g(x,t)$ as learnable functions enlarges the variational path space from pre-fixed $f$ and $g$ to flexible variational function classes, so that a higher (tighter) ELBO value can be achieved in the extended space.

By Eq. 12, in the FP-Diffusion model we set

$$
\begin{array}{l} f (x, t) := \frac {\beta^ {\prime} (t)}{2} \left[ - \sum_ {j} R _ {i j} ^ {- 1} (x) x _ {j} - 2 \sum_ {j} \omega_ {i j} x _ {j} \right. \\ \left. + \sum_ {j} \frac {\partial}{\partial x _ {j}} R _ {i j} ^ {- 1} (x) \right], \quad g (x, t) := \sqrt {\beta^ {\prime} (t) R ^ {- 1} (x)}.
\tag {18} \\ \end{array}
$$

Since our variational function class of the forward process defined in Eq. 12 is theoretically guaranteed to approach a Gaussian when $T$ is large, the first term of $\mathcal{E}^{\infty}(x)$ stays close to a constant under Eq. 18. Therefore, we only need to investigate the second term (equivalent to the implicit score matching of (Hyvärinen & Dayan, 2005)), which depends on both the parameterized $f$, $g$ and the score function. Finally, learning $f$ and $g$ opens up the possibility of adding regularization penalties to filter out irregular forward paths in the extended variational path space. Similar techniques have been applied to continuous normalizing flows (Finlay et al., 2020). A preliminary exploration of applying regularization to FP-Diffusion models is presented in the experimental section.

# 3.3. A Simplified Formula of FP-Diffusion

Although we can always numerically simulate the SDE up to a given time $t$, the empirical success of the Monte-Carlo training of (14) in (Ho et al., 2020) (see also (7) of (Song et al., 2020c)) indicates the importance of obtaining explicit solutions for direct sampling. In this section, we derive the solution formula for a simplified version of $X_{t}$ defined in Eq. 12 and implement it on the image generation task.

To obtain a closed-form expression of the transition probability density function of the forward process $X_{t}$, we assume that $R^{-1}(x)$ in Eq. 12 is a constant symmetric positive-definite matrix independent of the spatial variable $x$. Then, within the linear SDE regime (Särkkä & Solin, 2019), we have the following characterization of the marginal distributions (see Appendix A for a full derivation):

Theorem 3.5.
Suppose the forward diffusion process $X_{t}$ starting at $X_{0}$ satisfies the following linear stochastic differential equation:

$$
d X _ {t} = \frac {1}{2} \beta^ {\prime} (t) \left[ - R ^ {- 1} X _ {t} - 2 \omega X _ {t} \right] d t + \sqrt {\beta^ {\prime} (t) R ^ {- 1}} d W _ {t}, \tag {19}
$$

for a symmetric positive-definite $R$ and an anti-symmetric $\omega$. Then the marginal distribution of $X_{t}$ at an arbitrary time $t > 0$ follows the Gaussian distribution:

$$
X _ {t} \sim \mathcal {N} (e ^ {(- \frac {1}{2} R ^ {- 1} - \omega) \beta (t)} X _ {0}, {\bf I} - e ^ {- \beta (t) R ^ {- 1}}).
$$

In practice, we set $\omega$ in Eq. 11 to be a learnable anti-symmetric matrix and call this the FP-Drift parameterization. On the other hand, setting $R^{-1}$ in Eq. 10 to be a learnable symmetric positive-definite matrix is called the FP-Noise parameterization. To realize both anti-symmetric and symmetric matrices effectively, we utilize orthonormal diagonalization and take advantage of the fact that orthogonal matrices can be generated by the matrix exponential on the orthogonal group. Implementation details are provided in Appendix A.4.

# 4. Experiment

We first use a synthetic 3D dataset to illustrate the significance of parameterizing the forward process adapted to the data distribution, and then validate the effectiveness of our FP-Diffusion model on standard image generation tasks.

# 4.1. Flexible SDEs Learned from Synthetic 3D Examples

According to the low-dimensional manifold hypothesis (Fefferman et al., 2016), the real data distribution concentrates on a low-dimensional sub-manifold. However, during the generation phase, the dimension of the ambient space we sample from is usually much higher. To bridge this gap, FP-Diffusion plays a nontrivial role. More precisely, note that only the diffusion part of Eq.
12 can blur the data sub-manifold to fill in the high-dimensional ambient space, which causes a distinction between the directions tangent to the data and the remaining normal directions during the (anisotropic) diffusion process. Since it is impossible to directly detect the complex data manifold, we design a simplified scenario to demonstrate how the parameterized diffusion process enhances generation.

Assume the data lies in $\mathbb{R}^3$ and its distribution follows a 2-dimensional Gaussian concentrated on a given hyperplane. Obviously, the simplest way to generate the 2-dimensional Gaussian is to directly project random points sampled from the 3-dimensional Gaussian onto this plane. To make this rigorous, we consider the optimal transport problem from the 3-dimensional Gaussian distribution to the 2-dimensional Gaussian. Defining the cost function as $c(x,y) \coloneqq \|x - y\|^2$, the squared Wasserstein distance between $\mathcal{N}(0,\mathbf{I})$ and $\mathcal{N}(\mu,\Sigma)$ (Mallasto & Feragen, 2017) equals $\mathcal{W}_2^2(\mathcal{N}(0,\mathbf{I}),\mathcal{N}(\mu,\Sigma)) = \| \mu \|^2 + \mathrm{Tr}(\mathbf{I} + \Sigma - 2\Sigma^{1/2})$. It implies that the corresponding optimal transport map $\nabla \phi$ is exactly the vertical projection.

![](images/c448334343cfbcd242f269134c164f09dce9a3f602b3ac6147f046743f1b6fc7.jpg)
(a) VP

![](images/330655dfbcefd650e366a72eb634cdc91455d6aa47032a4e906a55327290b12d.jpg)
(b) Non-reg. FP

![](images/dc2507776b6357ea1acf7710df4551b7455abae9aad355959f314115ede40be8.jpg)
(c) Reg. FP
Figure 2. Vector fields projected into a 2D section of the three SDEs

For our FP-Diffusion model, the probability flow of the generating process depends on the parameterized forward process. Then, to optimize the forward paths, we add regularization terms to the original score-matching loss.
From (13) of (Song et al., 2020c), the vector field of the probability flow ODE is

$$
v _ {pc} (x, t) := f (x, t) - \frac {1}{2} \nabla \cdot [ g ^ {2} (x, t) ] - \frac {1}{2} g ^ {2} (x, t) \nabla \log p _ {t} (x), \quad x \in \mathbb {R} ^ {3}. \tag {20}
$$

To control variables, we perform the experiment under three circumstances: (i) a fixed VP forward process (Eq. 1); (ii) our parameterized forward process with no regularization; (iii) our parameterized forward process with regularization terms. The regularization penalties are imposed on the vector field (Eq. 20) and are adapted from Section 4 of (Finlay et al., 2020): $L_{reg}(f,g) = \lambda_1\int \| v_{pc}(s)\|^2 ds + \lambda_2\,\mathbb{E}_{\epsilon \sim \mathcal{N}(0,\mathbf{I})}\int\left\| \epsilon^T \nabla v_{pc}(s)\right\|^2 ds$. Notice that this term only regularizes parameters from $f$ and $g$ in the forward process.

After training, we check whether the direction of the learned $v_{pc}$ is aligned with the ground-truth projection vector field. For the projection map $\nabla \phi$ from the 3-dimensional Gaussian

Table 1. NLLs on MNIST
| Model | NLL ↓ |
| --- | --- |
| RealNVP (Dinh et al., 2016) | 1.06 |
| Glow (Kingma & Dhariwal, 2018) | 1.05 |
| FFJORD (Grathwohl et al., 2018) | 0.99 |
| ResFlow (Chen et al., 2019) | 0.97 |
| DiffFlow (Zhang & Chen, 2021) | 0.93 |
| FP-Drift (Mix) | 1.01 |
+ +![](images/b4f08d48e87be520c16e72d459e1a49c27f4c6c7af7bac69d9d20e2d8b112f1a.jpg) +Figure 3. MNIST and CIFAR10 samples + +![](images/27d686dcaf9e1308739c42232ec177fd99c2f8be41ad9b2a34d515a3f4c5df9b.jpg) + +Table 2. Results on CIFAR10. * denotes the results reproduced locally. + +
| Model | FID ↓ | NLL ↓ |
| --- | --- | --- |
| DDPM++ cont. (deep, VP) (Song et al., 2020c) | 2.95* | 3.13* |
| NCSN++ cont. (deep, VE) (Song et al., 2020c) | 2.72* | - |
| DDPM (Zhang & Chen, 2021) | 3.17 | ≤ 3.75 |
| Improved-DDPM (Nichol & Dhariwal, 2021) | 2.90 | 3.37 |
| LSGM (Vahdat et al., 2021) | 2.10 | ≤ 3.43 |
| LSGM-100M (Dockhorn et al., 2022) | 4.60 | ≤ 2.96 |
| CLD-SGM (Dockhorn et al., 2022) | 2.25 | ≤ 3.31 |
| DiffFlow (Zhang & Chen, 2021) | 14.14 | 3.04 |
| FP-Drift (Joint) | 4.17 | 3.30 |
| FP-Noise (Joint) | 3.30 | 3.25 |
| FP-Drift (Mix) | 2.99 | 3.28 |
| FP-Noise (Mix) | 2.87 | 3.20 |
to the 2-dimensional Gaussian supported on the plane $z = 2$, the corresponding vector field at a spatial point $x = (x,y,z) \in \mathbb{R}^3$ equals:

$$
v _ {\mathrm {proj}} (x, t) := (0, 0, - 1) \quad \text {if } z > 2,
$$

$$
\text {and} \quad v _ {\mathrm {proj}} (x, t) := (0, 0, 1) \quad \text {if } z < 2. \tag {21}
$$

The 2D visualization results of our comparative experiments are summarized in Fig. 2. We would like to note that the "ground-truth" vector field (Eq. 21) is strictly vertical. Therefore, we only plot the $x$-$z$ projection of the three trained vector fields at a given time for the three scenarios.

As shown in Fig. 2, our flexible diffusion method (b) exhibits a visibly more vertical orientation compared to the forward-fixed VP (ddpm) model (a). Moreover, the flexible model with explicit regularization (c) demonstrates even greater alignment with vertical lines compared to (b). To provide a comprehensive comparison, we also include sampled integration trajectories of the trained vector fields in Appendix B.1 (refer to Fig. 2).

# 4.2. Image Generation

In this section, we demonstrate the generative capacity of our FP-Diffusion models on two common image datasets: MNIST (LeCun, 1998) and CIFAR10 (Krizhevsky et al., 2009).

Training Strategy. The flexible FP-Diffusion framework is designed to simultaneously learn a suitable forward diffusion process dependent on the data distribution as well as the corresponding reverse-time process. However, for some complex scenarios like image generation, it is challenging to balance the optimization of the forward and the backward processes. To reconcile these two parts, we propose a two-stage training strategy.
Particularly, in the first stage, we jointly optimize the parameters from both the FP-Diffusion forward process and the backward score neural network; in the second stage, we freeze all parameters from the FP-Diffusion and only tune the score neural network in the same way as prevailing score-based training approaches (Ho et al., 2020; Song & Ermon, 2019; Song et al., 2020c). Note that the two-stage strategy also makes the flexible diffusion scalable, in the sense that after the first stage, the parameters contained in the forward process are fixed and won't be counted in the gradient computational graph. Moreover, during the sampling process, only the score neural network is implemented. + +Implementation Details. For the forward diffusion process, we choose a linearly increasing time scheduler $\beta(t)$ (same as the VP-SDE setting in (Song et al., 2020c)), where $t \in [0,T]$ is a continuous time variable. To estimate the gradient vector field in the reverse-time process, we train a time-dependent score network $s_{\theta}(x(t),t)$ as described in Eq. 16. We adopt the same U-net style architecture used in (Ho et al., 2020) and (Song et al., 2020c) as our backbone neural network. Both the FP-Drift model and the FP-Noise model are implemented in two training paradigms: (i) Joint Training: the parameterized FP-Diffusion model and the score network are jointly optimized for $1.2M$ iterations; (ii) Mix Training: following the proposed two-stage training strategy, we separately train the model for $600k$ iterations in both stages, and the batch size is set to be 96 on all datasets. Following (Song et al., 2020c), we apply the Euler-Maruyama method in our reverse-time SDEs for sampling images, where the number of discretization steps is set to 1000. All the experiments are conducted on 4 Nvidia Tesla V100 16G GPUs. We provide further implementation details in Appendix B.2. + +Results. We show the sampled images generated by our FP-Noise (Mix training) model in Fig. 3. 
According to Eq. 20, the negative log-likelihood (NLL), in bits per dimension, is explicitly calculated for our models via the instantaneous change-of-variables formula (Grathwohl et al., 2018). We list the NLL metrics of our models in Tab. 1 and Tab. 2. On MNIST, our FP-Drift model achieves comparable performance in terms of NLL against five standard flow-based models (including DiffFlow (Zhang & Chen, 2021)). On CIFAR10, both the FP-Drift (Mix training) and the FP-Noise (Mix training) models achieve competitive performance compared to the state-of-the-art (SOTA) diffusion models. These results illustrate the strong capacity of FP-Diffusion in density estimation tasks.

To quantitatively evaluate the quality of the sampled images, we also report the Fréchet Inception Distance (FID) (Heusel et al., 2017) on CIFAR10. As shown in Tab. 2, the two variants of our FP-Diffusion model, FP-Drift (Mix) and FP-Noise (Mix), outperform DDPM (Ho et al., 2020) and Improved-DDPM (Nichol & Dhariwal, 2021) in FID and perform comparably to DDPM++ cont. (deep, VP) and NCSN++ cont. (deep, VE) (Song et al., 2020c). We notice that only LSGM and CLD-SGM have clearly better FID values than the other models (including ours). However, LSGM (Vahdat et al., 2021) adopts a more complicated framework and a large model with $\approx 475M$ parameters to achieve its high performance. At a comparable parameter size ($\approx 100M$), our models achieve a significantly better FID score than LSGM ("LSGM-100M"). CLD-SGM builds its diffusion model upon a larger phase space with a special training objective (given a data point $x \in \mathbb{R}^n$, its corresponding phase-space point $(x,v)$ belongs to $\mathbb{R}^{2n}$), which leads to a more expressive optimization space but also brings extra computational cost. We leave testing our FP-Diffusion model on phase space (defined in Corollary 3.4) to future work.
It should also be noted that we use a smaller batch size (96) than the other baseline diffusion models (128) to train our models due to limited computational resources, which may affect our empirical performance. We also report the performance of our two model variants under the two training paradigms in Tab. 2. The model variants with the mix (two-stage) training paradigm consistently achieve better performance, demonstrating the necessity of the two-stage training strategy. A possible reason for this phenomenon is that it may be difficult for score models to match the reverse process of a still-changing forward process, so we need to tune the score model with extra training steps after fixing a suitable forward process.

# 5. Conclusion and Future Works

In this work, we propose the FP-Diffusion model, a novel method that parameterizes the spatial components of the

diffusion (score) model with theoretical guarantees. Our approach combines insights from Riemannian geometry and Hamiltonian (symplectic) Monte-Carlo methods to obtain a complete forward diffusion parameterization that plays a nontrivial role from the variational optimization perspective. Empirical results on specially-designed datasets and standard benchmarks confirm the effectiveness of our method. However, the challenge of efficiently optimizing FP-Diffusion remains a critical issue, which presents opportunities for promising future research. For example, recent work (Sunada et al., 2016) has shown that the score function $\nabla_{x}\log p_{0}(x)$ indicates the tangential direction of the data manifold, and our flexible diffusion can take advantage of a trained score function of the original data (which can be obtained by the classical denoising score matching (Vincent, 2011), prior to the training of the generative diffusion model) as the initial parameterization of the Riemannian metric.
Additionally, introducing Riemannian structure into non-Euclidean data has proven to be beneficial for a broad range of problems (e.g., (Sunada et al., 2016) for graph problems), and our framework has the potential to incorporate flexible diffusion models on non-Euclidean data (Shi et al., 2021; Du et al., 2022; Liu et al., 2023). + +# Acknowledgements + +We would like to express our gratitude to the anonymous reviewers for their insightful feedback and valuable suggestions. We are also grateful to Bowen Jing for generously sharing the visualization code that contributed to Fig. 1. Weitao Du appreciates Zhiming Ma, Qi Meng, and Wei Chen for engaging in productive and constructive discussions that significantly enriched our research. + +# References + +Alvarez, L., Lions, P.-L., and Morel, J.-M. Image selective smoothing and edge detection by nonlinear diffusion. ii. SIAM Journal on numerical analysis, 29:845-866, 06 1992. doi: 10.1137/0729052. +Anderson, B. D. Reverse-time diffusion equation models. Stochastic Processes and their Applications, 12(3):313-326, 1982. +Bao, F., Li, C., Zhu, J., and Zhang, B. Analytic-dpm: an analytic estimate of the optimal reverse variance in diffusion probabilistic models. arXiv preprint arXiv:2201.06503, 2022. +Bellet, L. R. Ergodic properties of markov processes. In Open quantum systems II, pp. 1-39. Springer, 2006. +Betancourt, M. A general metric for riemannian manifold hamiltonian monte carlo. In International Conference on + +Geometric Science of Information, pp. 327-334. Springer, 2013. +Betancourt, M. A conceptual introduction to hamiltonian monte carlo. arXiv preprint arXiv:1701.02434, 2017. +Chen, R. T., Behrmann, J., Duvenaud, D. K., and Jacobsen, J.-H. Residual flows for invertible generative modeling. Advances in Neural Information Processing Systems, 32, 2019. +Da Prato, G. Introduction to stochastic analysis and Malliavin calculus, volume 13. Springer, 2014. +De Groot, S. R. and Mazur, P. 
Non-equilibrium thermodynamics. Courier Corporation, 2013. +Dhariwal, P. and Nichol, A. Diffusion models beat gans on image synthesis. Advances in Neural Information Processing Systems, 34, 2021. +Dinh, L., Sohl-Dickstein, J., and Bengio, S. Density estimation using real nvp. arXiv preprint arXiv:1605.08803, 2016. +Dockhorn, T., Vahdat, A., and Kreis, K. Score-based generative modeling with critically-damped Langevin diffusion. In International Conference on Learning Representations (ICLR), 2022. +Du, W., Zhang, H., Du, Y., Meng, Q., Chen, W., Shao, B., and Liu, T.-Y. Equivariant vector field network for many-body system modeling, 2021. +Du, W., Zhang, H., Du, Y., Meng, Q., Chen, W., Zheng, N., Shao, B., and Liu, T.-Y. Se (3) equivariant graph neural networks with complete local frames. In International Conference on Machine Learning, pp. 5583-5608. PMLR, 2022. +Fefferman, C., Mitter, S., and Narayanan, H. Testing the manifold hypothesis. Journal of the American Mathematical Society, 29(4):983-1049, 2016. +Finlay, C., Jacobsen, J.-H., Nurbekyan, L., and Oberman, A. How to train your neural ode: the world of jacobian and kinetic regularization. In International Conference on Machine Learning, pp. 3154-3164. PMLR, 2020. +Friedli, S. and Velenik, Y. Statistical Mechanics of Lattice Systems: A Concrete Mathematical Introduction. Cambridge University Press, 2017. ISBN 978-1-107-18482-4. doi: 10.1017/9781316882603. +Girolami, M. and Calderhead, B. Riemann manifold Langevin and hamiltonian monte carlo methods. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 73(2):123-214, 2011. + +Goodfellow, I., Pouget-Abadie, J., Mirza, M., Xu, B., Warde-Farley, D., Ozair, S., Courville, A., and Bengio, Y. Generative adversarial nets. Advances in neural information processing systems, 27, 2014. +Grathwohl, W., Chen, R. T., Bettencourt, J., Sutskever, I., and Duvenaud, D. Ffjord: Free-form continuous dynamics for scalable reversible generative models. 
arXiv preprint arXiv:1810.01367, 2018. +Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., and Hochreiter, S. Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems, 30, 2017. +Ho, J., Jain, A., and Abbeel, P. Denoising diffusion probabilistic models. Advances in Neural Information Processing Systems, 33:6840-6851, 2020. +Hoogeboom, E., Satorras, V. G., Vignac, C., and Welling, M. Equivariant diffusion for molecule generation in 3d. In International Conference on Machine Learning, pp. 8867-8887. PMLR, 2022. +Hörmander, L. Hypoelliptic second order differential equations. Acta Mathematica, 119:147-171, 1967. +Hsu, E. P. Stochastic Analysis on Manifolds. Stochastic Analysis on Manifolds, 2002. +Huang, C.-W., Lim, J. H., and Courville, A. A variational perspective on diffusion-based generative models and score matching. arXiv preprint arXiv:2106.02808, 2021. +Hyvarinen, A. Estimation of non-normalized statistical models by score matching. J. Mach. Learn. Res., 6:695-709, 2005. URL http://jmlr.org/papers/v6/hyvarinen05a.html. +Hyvärinen, A. and Dayan, P. Estimation of non-normalized statistical models by score matching. Journal of Machine Learning Research, 6(4), 2005. +Jolicoeur-Martineau, A., Li, K., Piché-Taillefer, R., Kachman, T., and Mitliagkas, I. Gotta go fast when generating data with score-based models, 2021. +Kingma, D. P. and Dhariwal, P. Glow: Generative flow with invertible 1x1 convolutions. Advances in neural information processing systems, 31, 2018. +Kingma, D. P. and Welling, M. Auto-encoding variational bayes. arXiv preprint arXiv:1312.6114, 2013. +Kingma, D. P., Salimans, T., Poole, B., and Ho, J. Variational diffusion models. arXiv preprint arXiv:2107.00630, 2021. + +Kong, Z., Ping, W., Huang, J., Zhao, K., and Catanzaro, B. Diffwave: A versatile diffusion model for audio synthesis. arXiv preprint arXiv:2009.09761, 2020. +Krizhevsky, A., Hinton, G., et al. 
Learning multiple layers of features from tiny images. Toronto, ON, Canada, 2009. +LeCun, Y. The mnist database of handwritten digits. http://yann.lecun.com/exdb/mnist/, 1998. +Liu, S., Cao, Y., Su, D., and Meng, H. Diffsvc: A diffusion probabilistic model for singing voice conversion. arXiv preprint arXiv:2105.13871, 2021. +Liu, S., Du, W., Ma, Z., Guo, H., and Tang, J. A group symmetric stochastic differential equation model for molecule multi-modal pretraining. In International Conference on Machine Learning, 2023. +Luo, S. and Hu, W. Diffusion probabilistic models for 3d point cloud generation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 2837-2845, 2021. +Mallasto, A. and Feragen, A. Learning from uncertain curves: The 2-wasserstein metric for gaussian processes. In Proceedings of the 31st International Conference on Neural Information Processing Systems, pp. 5665-5674, 2017. +Narayanan, H. and Mitter, S. Sample complexity of testing the manifold hypothesis. Advances in neural information processing systems, 23, 2010. +Nichol, A. Q. and Dhariwal, P. Improved denoising diffusion probabilistic models. In International Conference on Machine Learning, pp. 8162-8171. PMLR, 2021. +Onken, D., Wu Fung, S., Li, X., and Ruthotto, L. Oflow: Fast and accurate continuous normalizing flows via optimal transport. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 35, 2021. +Perona, P. and Malik, J. Scale-space and edge detection using anisotropic diffusion. IEEE Trans. on Pattern Analysis and Machine Intelligence, 12(7):629-639, 1990. ISSN 0162-8828. doi: 10.1109/34.56205. +Ren, X., Yang, T., Wang, Y., and Zeng, W. Learning disentangled representation by exploiting pretrained generative models: A contrastive learning view. In International Conference on Learning Representations, 2021. +Rombach, R., Blattmann, A., Lorenz, D., Esser, P., and Ommer, B. 
High-resolution image synthesis with latent diffusion models. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 10684-10695, 2022. + +Särkkä, S. and Solin, A. Applied stochastic differential equations, volume 10. Cambridge University Press, 2019. +Schneuing, A., Du, Y., Harris, C., Jamasb, A., Igashov, I., Du, W., Blundell, T., Lió, P., Gomes, C., Welling, M., et al. Structure-based drug design with equivariant diffusion models. arXiv preprint arXiv:2210.13695, 2022. +Seiler, C., Rubinstein-Salzedo, S., and Holmes, S. Positive curvature and hamiltonian monte carlo. In Ghahramani, Z., Welling, M., Cortes, C., Lawrence, N., and Weinberger, K. (eds.), Advances in Neural Information Processing Systems, volume 27. Curran Associates, Inc., 2014. URL https://proceedings.neurips.cc/paper/2014/file/beed13602b9b0e6ecb5b568ff5058f07-Paper.pdf. +Shi, C., Luo, S., Xu, M., and Tang, J. Learning gradient fields for molecular conformation generation. In International Conference on Machine Learning, pp. 9558-9568. PMLR, 2021. +Sohl-Dickstein, J., Weiss, E., Maheswaranathan, N., and Ganguli, S. Deep unsupervised learning using nonequilibrium thermodynamics. In International Conference on Machine Learning, pp. 2256-2265. PMLR, 2015. +Song, J., Meng, C., and Ermon, S. Denoising diffusion implicit models. arXiv:2010.02502, October 2020a. URL https://arxiv.org/abs/2010.02502. +Song, Y. and Ermon, S. Generative modeling by estimating gradients of the data distribution. Advances in Neural Information Processing Systems, 32, 2019. +Song, Y., Garg, S., Shi, J., and Ermon, S. Sliced score matching: A scalable approach to density and score estimation. In Uncertainty in Artificial Intelligence, pp. 574-584. PMLR, 2020b. +Song, Y., Sohl-Dickstein, J., Kingma, D. P., Kumar, A., Ermon, S., and Poole, B. Score-based generative modeling through stochastic differential equations. In International Conference on Learning Representations, 2020c. 
+Song, Y., Durkan, C., Murray, I., and Ermon, S. Maximum likelihood training of score-based diffusion models. In Thirty-Fifth Conference on Neural Information Processing Systems, 2021. +Sunada, T., Kotani, M., and Shirai, T. Discrete geometric analysis. American Mathematical Society, 2016. +Vahdat, A. and Kautz, J. NVAE: A deep hierarchical variational autoencoder. In Neural Information Processing Systems (NeurIPS), 2020. + +Vahdat, A., Kreis, K., and Kautz, J. Score-based generative modeling in latent space. Advances in Neural Information Processing Systems, 34, 2021. +Vincent, P. A connection between score matching and denoising autoencoders. Neural Computation, 23:1661-1674, 2011. +Wang, F.-Y. Analysis for diffusion processes on Riemannian manifolds, volume 18. World Scientific, 2014. +Watson, D., Chan, W., Ho, J., and Norouzi, M. Learning fast samplers for diffusion models by differentiating through sample quality. ArXiv, abs/2202.05830, 2022. +Weickert, J. Anisotropic diffusion in image processing, volume 1. Teubner Stuttgart, 1998. +Xu, M., Yu, L., Song, Y., Shi, C., Ermon, S., and Tang, J. Geodiff: A geometric diffusion model for molecular conformation generation. arXiv preprint arXiv:2203.02923, 2022. +Yang, T., Ren, X., Wang, Y., Zeng, W., and Zheng, N. Towards building a group-based unsupervised representation disentanglement framework. In International Conference on Learning Representations, 2021. +Zhang, Q. and Chen, Y. Diffusion normalizing flow. Advances in Neural Information Processing Systems, 34, 2021. + +# A Flexible Diffusion Model + +# Appendix + +# A. Theory + +# A.1. Discussion of Section 3.1 + +# A.1.1. REMARK ON THE THEORETICAL PROPERTIES OF HAMILTONIAN DYNAMICS + +Suppose $X_{t}$ follows the Hamiltonian dynamics (11), then + +$$ +d H (X _ {t}) = \nabla H (X _ {t}) \omega \nabla H (X _ {t}) d t \equiv 0, +$$ + +by the anti-symmetry of $\omega$ . 
Therefore, the Hamiltonian dynamics without random perturbations is a deterministic motion confined to a constant-Hamiltonian (energy) surface. Only by adding a diffusion term can the Hamiltonian dynamical system traverse different energy levels. + +# A.1.2. VERIFICATION OF THEOREM 3.3 + +We will verify the theorem in the more general case $H(x) = \frac{m}{2} x^2$ . The corresponding stationary distribution is the scaled Gaussian $\mathcal{N}(0,m^{-1}\mathbf{I})$ , where $m > 0$ is the scale constant. In this case, Eq. 12 is modified to: + +$$ +\begin{array}{l} d \left(X _ {t}\right) _ {i} = \frac {m}{2} \left[ - \sum_ {j} R _ {i j} ^ {- 1} \left(X _ {t}\right) \cdot \left(X _ {t}\right) _ {j} - 2 \sum_ {j} \omega_ {i j} \cdot \left(X _ {t}\right) _ {j} \right. \\ \left. + \sum_ {j} \frac {\partial}{\partial x _ {j}} R _ {i j} ^ {- 1} \left(X _ {t}\right) \right] d t + \left(\sqrt {R ^ {- 1} \left(X _ {t}\right)} d W _ {t}\right) _ {i}. \tag {22} \\ \end{array} +$$ + +Note that only the drift term is scaled by $m$ . + +Proof. Since the covariance matrix of the diffusion part is positive-definite, the forward process Eq. 22 satisfies the Feller property, and the existence and uniqueness of the stationary distribution are guaranteed (see (Wang, 2014)). By the Fokker-Planck-Kolmogorov equation, the stationary distribution $p_{s}(x)$ of Eq. 22 must satisfy + +$$ +0 = - \sum_ {i} \frac {\partial}{\partial x _ {i}} [ f _ {i} (x, t) p _ {s} (x) ] + \frac {1}{2} \sum_ {i, j} \frac {\partial^ {2}}{\partial x _ {i} \partial x _ {j}} [ (g g ^ {T}) _ {i j} p _ {s} (x) ], \tag {23} +$$ + +where we set $f_{i}(x,t) \coloneqq \frac{m}{2} \left[ -\sum_{j} R_{ij}^{-1}(x) \cdot x_{j} - 2\sum_{j}\omega_{ij} \cdot x_{j} + \sum_{j}\frac{\partial}{\partial x_{j}} R_{ij}^{-1}(x) \right]$ and $g(x,t) \coloneqq \sqrt{R^{-1}(x)}$ .
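As a sanity check on the normalization (the stationary density $\propto e^{-\frac{m}{2}x^2}$ derived below has variance $1/m$), an Euler-Maruyama simulation of the scalar case of Eq. 22 with a constant $R^{-1} = r$ and $\omega = 0$ (the constants are illustrative):

```python
import numpy as np

# scalar case of Eq. 22 with constant R^{-1} = r and omega = 0:
#   dX = -(m/2) r X dt + sqrt(r) dW
rng = np.random.default_rng(1)
m, r, dt, n_steps, burn_in = 2.0, 1.0, 0.01, 400_000, 20_000
x, samples = 0.0, []
for i in range(n_steps):
    x += -(m / 2) * r * x * dt + np.sqrt(r * dt) * rng.standard_normal()
    if i >= burn_in:
        samples.append(x)

# the stationary variance of the density exp(-m x^2 / 2) is 1/m
assert abs(np.var(samples) - 1.0 / m) < 0.05
```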
To check whether $e^{-\frac{m}{2} x^2}$ satisfies condition (23), notice that by the anti-symmetry of $\omega_{ij}$ , we automatically have + +$$ +\sum_ {i} \sum_ {j} \frac {\partial}{\partial x _ {i}} \left(\omega_ {i j} x _ {j} e ^ {- \frac {m}{2} x ^ {2}}\right) = - \sum_ {i} \sum_ {j} \omega_ {i j} x _ {i} x _ {j} e ^ {- \frac {m}{2} x ^ {2}} = 0. +$$ + +On the other hand, + +$$ +\begin{array}{l} \sum_ {i} \sum_ {j} \frac {\partial^ {2}}{\partial x _ {i} \partial x _ {j}} [ R _ {i j} ^ {- 1} (x) e ^ {- \frac {m}{2} x ^ {2}} ] \\ = - m \operatorname {T r} \left(R _ {i j} ^ {- 1} (x)\right) e ^ {- \frac {m}{2} x ^ {2}} + m ^ {2} \sum_ {i} \sum_ {j} R _ {i j} ^ {- 1} (x) x _ {i} x _ {j} e ^ {- \frac {m}{2} x ^ {2}} \\ - \sum_ {i} \sum_ {j} \frac {\partial}{\partial x _ {j}} \left(R _ {i j} ^ {- 1} (x)\right) \left(\frac {\partial}{\partial x _ {i}} e ^ {- \frac {m}{2} x ^ {2}}\right) \\ + \sum_ {i} \sum_ {j} \frac {\partial^ {2}}{\partial x _ {i} \partial x _ {j}} (R _ {i j} ^ {- 1} (x)) e ^ {- \frac {m}{2} x ^ {2}} - m \sum_ {i} \sum_ {j} \frac {\partial}{\partial x _ {j}} (R _ {i j} ^ {- 1} (x)) x _ {i} e ^ {- \frac {m}{2} x ^ {2}} \\ = - m \operatorname {T r} \left(R _ {i j} ^ {- 1} (x)\right) e ^ {- \frac {m}{2} x ^ {2}} + m ^ {2} \sum_ {i} \sum_ {j} R _ {i j} ^ {- 1} (x) x _ {i} x _ {j} e ^ {- \frac {m}{2} x ^ {2}} \\ + \sum_ {i} \frac {\partial}{\partial x _ {i}} [ \sum_ {j} \frac {\partial}{\partial x _ {j}} R _ {i j} ^ {- 1} (x) e ^ {- \frac {m}{2} x ^ {2}} ] - m \sum_ {i} \sum_ {j} \frac {\partial}{\partial x _ {j}} R _ {i j} ^ {- 1} (x) x _ {i} e ^ {- \frac {m}{2} x ^ {2}}. 
\\ \end{array} +$$ + +Therefore, the last thing to check is that + +$$ +\sum_ {i} \frac {\partial}{\partial x _ {i}} [ \sum_ {j} R _ {i j} ^ {- 1} (x) x _ {j} e ^ {- \frac {m}{2} x ^ {2}} ] = \mathrm {T r} (R _ {i j} ^ {- 1} (x)) e ^ {- \frac {m}{2} x ^ {2}} - \sum_ {i} \sum_ {j} [ m R _ {i j} ^ {- 1} (x) x _ {i} x _ {j} + \frac {\partial}{\partial x _ {j}} R _ {i j} ^ {- 1} (x) x _ {i} ] e ^ {- \frac {m}{2} x ^ {2}}, +$$ + +which holds since the diffusion matrix $R_{ij}^{-1}$ is symmetric. Combining the above, we have proved that Eq. 23 holds if $p_s(x) \propto e^{-\frac{m}{2} x^2}$ . + +# A.1.3. COMPLETENESS OF FP-DIFFUSION PARAMETERIZATION + +From the last section's derivation, we can deduce the following corollary: + +Corollary A.1. Consider the following SDE: + +$$ +\begin{array}{l} d X _ {t} = A \left(X _ {t}\right) d t - \frac {1}{2} R ^ {- 1} \left(X _ {t}\right) \cdot X _ {t} d t \\ + \frac {1}{2} \left(\nabla \cdot R ^ {- 1} \left(X _ {t}\right)\right) d t + \sqrt {R ^ {- 1} \left(X _ {t}\right)} d W _ {t}, \tag {24} \\ \end{array} +$$ + +and let the spatial function $A(x)$ be linear. Suppose we know that its stationary distribution is the standard Gaussian; then + +$$ +A (x) = - \sum_ {j} \omega_ {i j} \cdot x _ {j}, +$$ + +for some anti-symmetric matrix $\omega$ . + +Proof. In fact, every linear operator $A$ can be decomposed into a symmetric part plus an anti-symmetric part: + +$$ +A = \underbrace {\frac {A + A ^ {T}}{2}} _ {\text {symmetric}} + \underbrace {\frac {A - A ^ {T}}{2}} _ {\text {anti-symmetric}}. +$$ + +Let $\omega = \frac{A - A^T}{2}$ . Then we only need to prove that $A + A^T$ equals zero if $X_{t}$ converges to a Gaussian.
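The symmetric/anti-symmetric decomposition used above is easy to verify numerically:

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((5, 5))
sym = (A + A.T) / 2
anti = (A - A.T) / 2
assert np.allclose(sym, sym.T)      # symmetric part
assert np.allclose(anti, -anti.T)   # anti-symmetric part
assert np.allclose(sym + anti, A)   # the two parts recover A
```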
+ +From the proof of Theorem 3.3, we extract the fact that if $p_s(x) \propto e^{-\frac{1}{2} x^2}$ , then + +$$ +\sum_ {i} \frac {\partial}{\partial x _ {i}} [ (A + A ^ {T}) _ {i j} \cdot x _ {j} e ^ {- \frac {1}{2} x ^ {2}} ] = 0, +$$ + +and consequently + +$$ +\sum_ {i, j} \left[ \left(A + A ^ {T}\right) _ {i j} \cdot \frac {\partial}{\partial x _ {i}} \frac {\partial}{\partial x _ {j}} \left(e ^ {- \frac {1}{2} x ^ {2}}\right) \right] = 0, +$$ + +for all $x = (x_{1},\ldots ,x_{n})$ . Note that + +$$ +\frac {\partial}{\partial x _ {i}} \frac {\partial}{\partial x _ {j}} \left(e ^ {- \frac {1}{2} x ^ {2}}\right) = \left(x _ {i} x _ {j} - \delta_ {i j}\right) e ^ {- \frac {1}{2} x ^ {2}}. +$$ + +Since $A + A^T$ is symmetric (a property that does not hold for an arbitrary linear operator), testing against all such $x$ implies that $A + A^T \equiv 0$ . + +# A.1.4. ANISOTROPIC DIFFUSION ON A LOW-DIMENSIONAL DATA MANIFOLD + +In this section, we give an informal discussion of how an anisotropic diffusion starting on a low-dimensional data manifold mixes with its own stationary distribution (supported on the high-dimensional ambient space). + +Assume the marginal distribution of the diffusion process $X_{t}$ concentrates on a low-dimensional manifold $M\hookrightarrow \mathbb{R}^n$ at a given time. Moreover, suppose $X_{t}$ has already reached the Gaussian stationary distribution on $M$ (defined with respect to the Laplacian operator of $M$ ). Now we want to informally investigate the most efficient way for $X_{t}$ to diffuse out of the low-dimensional sub-manifold into the ambient space. By localizing in Riemannian normal coordinates and rearranging the coordinate indices, we can further assume that $M$ is isometric to the hyperplane of $\mathbb{R}^n$ defined by + +$$ +M = \{x \in \mathbb {R} ^ {n} | x = (x _ {1}, \dots , x _ {p}, 0, \dots , 0) \}.
+$$ + +Then the coordinates of each point $x \in \mathbb{R}^n$ can be decomposed into tangential and normal directions with respect to $M$ : + +$$ +x = (\underbrace {x _ {1} , \ldots , x _ {p}} _ {\text {tangent to } M}, \underbrace {x _ {p + 1} , \ldots , x _ {n}} _ {\text {normal to } M}). +$$ + +Under the above conditions, we are ready to compare the convergence rate (to the high-dimensional stationary Gaussian distribution of $\mathbb{R}^n$ ) of the different forward diffusions defined in (10). For a fair comparison, we fix the norm of the noise matrix: $\left\| R^{-1}\right\|_2 \equiv \sqrt{n}$ . Otherwise, the convergence can always be accelerated by increasing the noising scale $(\left\| R^{-1}\right\|_2 \to \infty)$ . + +Under our normal coordinates, the forward diffusion can be decomposed into two parts: $X(t) = X_{tan}(t) + X_{nor}(t)$ . For simplicity, suppose $R^{-1}$ is a diagonal matrix; then the tangential part and the normal part of $X(t)$ are completely decoupled. In other words, + +$$ +d X _ {t a n} ^ {i} (t) = \frac {1}{2} \left[ - R _ {i i} ^ {- 1} \cdot \left(X _ {t}\right) ^ {i} \right] d t + \sqrt {R _ {i i} ^ {- 1}} d W _ {t} ^ {i}, \quad 1 \leq i \leq p, +$$ + +is a diffusion process on $M$ . Therefore, $(X_{tan}(t), X(t))$ is indeed a Markov coupling. Suppose $X_{tan}(t)$ has already converged to its stationary distribution (a low-dimensional Gaussian) at $t = 0$ ; then by Ito's formula, + +$$ +\begin{array}{l} d (X _ {t a n} (t) - X (t)) ^ {2} \\ = d X _ {n o r} ^ {2} (t) \\ = 2 X _ {n o r} (t) \left(\frac {1}{2} \left[ - R _ {n o r} ^ {- 1} \cdot X _ {n o r} (t) \right] d t + \sqrt {R _ {n o r} ^ {- 1}} d W _ {t}\right) + \operatorname {T r} \left(R _ {n o r} ^ {- 1}\right) d t.
\\ \end{array} +$$ + +Taking the expectation of both sides yields + +$$ +\frac {d \mathbb {E} X _ {n o r} ^ {2} (t)}{d t} = - \mathbb {E} R _ {n o r} ^ {- 1} \cdot X _ {n o r} ^ {2} (t) + \operatorname {T r} (R _ {n o r} ^ {- 1}). +$$ + +Let $r_{min}$ denote the minimal eigenvalue of the normal part of $R^{-1}$ ; then + +$$ +\frac {d \mathbb {E} X _ {n o r} ^ {2} (t)}{d t} \leq - r _ {m i n} \mathbb {E} X _ {n o r} ^ {2} (t) + \operatorname {T r} \left(R _ {n o r} ^ {- 1}\right). +$$ + +Applying Gronwall's inequality and noting that $X_{nor}(0) = 0$ , we have + +$$ +\mathbb {E} X _ {n o r} ^ {2} (t) \leq e ^ {- r _ {m i n} \cdot t} \cdot \operatorname {T r} (R _ {n o r} ^ {- 1}) t. +$$ + +The above gives an upper bound on the convergence speed of the coupling $(X_{tan}(t), X(t))$ with respect to the $W_2$ distance (see (Wang, 2014)). Since the stationary distribution of $X(t)$ is exactly the high-dimensional Gaussian distribution (the diffusion model's prior distribution), we want the convergence to be as fast as possible (given a fixed noising scale). For the VP-Diffusion, + +$$ +R _ {n o r} ^ {- 1} \equiv \operatorname {d i a g} \{1, \dots , 1 \}. +$$ + +However, in the FP-Diffusion model, the diagonal elements of $R_{nor}^{-1}$ are allowed to be inhomogeneous and greater than one (under the condition that $\mathrm{Tr}(R_{nor}^{-1}) < n$ ). This allows a larger $r_{min}$ along the normal directions, which speeds up the convergence rate by our analysis. + +# A.1.5. VERIFICATION OF COROLLARY 3.4 + +The intuition of Corollary 3.4 can be stated as follows: to guarantee the geometric ergodicity of FP-Diffusion on the phase space, we need enough noise that the diffusion process can traverse the whole space. Suppose $R^{-1}(x)$ degenerates along the $i$ -th direction (corresponding to a zero eigenvalue); then no randomness (noise) is imposed on this direction.
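In the equality case, the Gronwall comparison above reduces to the scalar ODE $\frac{dm}{dt} = -r\,m + \mathrm{Tr}(R_{nor}^{-1})$ with $m(0) = 0$, whose solution relaxes to its stationary value $\mathrm{Tr}(R_{nor}^{-1})/r$ on the timescale $1/r$. A quick numeric check with illustrative constants (the rate $r$ stands in for $r_{min}$):

```python
import numpy as np

tr = 3.0                                  # stands in for Tr(R^{-1}_nor)

def m_closed(t, r):
    # closed-form solution of dm/dt = -r m + tr with m(0) = 0
    return (tr / r) * (1.0 - np.exp(-r * t))

def m_euler(t_end, r, n=100_000):
    # explicit Euler integration of the same ODE, as a cross-check
    dt, m = t_end / n, 0.0
    for _ in range(n):
        m += (-r * m + tr) * dt
    return m

assert abs(m_euler(2.0, 1.5) - m_closed(2.0, 1.5)) < 1e-3
# the relaxation timescale is 1/r: a larger rate gets closer to its
# stationary value tr/r (as a fraction of that value) by any fixed time
assert m_closed(0.5, 2.0) / (tr / 2.0) > m_closed(0.5, 0.5) / (tr / 0.5)
```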
+ +To remedy the issue, we require the symplectic form $\omega$ to be non-zero along the $i$ -th direction, which makes it possible to mix the noise originating along the other directions (where $R^{-1}(x)$ is strictly positive-definite) into the $i$ -th direction. We now give the formal proof: + +Proof. We only prove the simplified case in which $A$ and $B$ are both diagonal matrices with two sets of positive eigenvalues $\{a_i\}_{i=1}^d$ , $\{b_i\}_{i=1}^d$ . The general situation can be handled by a trivial linear transformation. By Proposition 8.1 of (Bellet, 2006), the proof boils down to proving that Hörmander's condition (Hörmander, 1967) holds for the forward process $X_t$ . When $R^{-1}(x)$ is a constant matrix, the infinitesimal generator $L$ of (12) is: + +$$ +L = \sum_ {i} \sum_ {j} \frac {1}{2} \left[ - R _ {i j} ^ {- 1} - 2 \omega_ {i j} \right] x _ {j} \frac {\partial}{\partial x _ {i}} + \frac {1}{2} \sum_ {i j} R _ {i j} ^ {- 1} \frac {\partial^ {2}}{\partial x _ {i} \partial x _ {j}}. +$$ + +For notational simplicity, denote $x \coloneqq (u,v) \in \mathbb{R}^{2d}$ , where $u,v \in \mathbb{R}^d$ . To put the second-order differential operator $L$ in Hörmander's form, set + +$$ +Y _ {j} (u, v) = - \frac {1}{2} \sqrt {b _ {j}} \frac {\partial}{\partial v _ {j}}, \quad 1 \leq j \leq d, +$$ + +and + +$$ +Y _ {0} (u, v) = \sum_ {i} \left(- a _ {i} v _ {i} \frac {\partial}{\partial u _ {i}} + a _ {i} u _ {i} \frac {\partial}{\partial v _ {i}}\right). +$$ + +Then it suffices to show that the vector fields $\{[Y_0,Y_j],Y_j\}_{1\leq j\leq d}$ span the whole $\mathbb{R}^{2d}$ . By direct calculation, + +$$ +[ Y _ {0}, Y _ {j} ] = \frac {1}{2} a _ {j} \sqrt {b _ {j}} \frac {\partial}{\partial u _ {j}}, +$$ + +for all $j$ . Therefore, we conclude that Hörmander's condition holds for $X_{t}$ . The ergodicity result, Proposition 8.1 of (Bellet, 2006), then implies that the forward diffusion $X_{t}$ converges to the standard Gaussian distribution. + +Remark A.2.
A recent study (Dockhorn et al., 2022) proposed to improve the diffusion model by enlarging the spatial space (where the generated samples lie) to the "phase" space: $x \rightarrow (x, v)$ . The corresponding joint forward diffusion $(x_{t}, v_{t})$ then satisfies the Critically-Damped Langevin diffusion: + +$$ +\binom {d x _ {t}} {d v _ {t}} = \binom {M ^ {- 1} v _ {t}} {- x _ {t}} d t + \binom {\mathbf {0} _ {d}} {- \Gamma M ^ {- 1} v _ {t}} d t + \binom {0} {\sqrt {2 \Gamma}} d W _ {t}. \tag {25} +$$ + +If the coupling mass $M = 1$ , the drift part of Eq. 25 can be decomposed into a symmetric part $R^{-1}$ and a non-trivial anti-symmetric part $\omega$ of (19) by setting: + +$$ +R ^ {- 1} := \left( \begin{array}{c c} 0 & 0 \\ 0 & 2 \Gamma I \end{array} \right) , \quad \omega := \left( \begin{array}{c c} 0 & - I \\ I & 0 \end{array} \right). +$$ + +It is straightforward to check that these rigorously fit the conditions of Corollary 3.4. Therefore, we conclude from Corollary 3.4 that the Critically-Damped Langevin diffusion converges to the standard Gaussian distribution on the enlarged phase space $(x,v)\in \mathbb{R}^{2d}$ , which coincides with the results of Appendix B.2 in (Dockhorn et al., 2022). + +# A.2. Discussion of Section 3.2 + +In this section, following the arguments of (Huang et al., 2021), we demonstrate how to estimate the score gradient vector field $\nabla \log p(x)$ by the analytically tractable conditional score gradient vector field (conditioned on a previous time). + +To prove (16), by adapting Eq. 31 of (Huang et al., 2021), it is enough to show that + +$$ +\mathbb {E} _ {X _ {t}} \left[ s _ {\theta} ^ {T} \left(X _ {t}, t\right) \cdot \nabla \log p _ {t} \left(X _ {t}\right) \right] = \mathbb {E} _ {X _ {s}, X _ {t}} \left[ s _ {\theta} ^ {T} \left(X _ {t}, t\right) \cdot \nabla \log p _ {t} \left(X _ {t} \mid X _ {s}\right) \right].
+$$ + +Transforming the expectations into integrals, we have + +$$ +\begin{array}{l} \mathbb {E} _ {X _ {t}} \left[ s _ {\theta} ^ {T} \left(X _ {t}, t\right) \cdot \nabla \log p _ {t} \left(X _ {t}\right) \right] \tag {26} \\ = \int p _ {t} (x) \, s _ {\theta} ^ {T} (x, t) \cdot \nabla \log p _ {t} (x) \, d x \tag {27} \\ = \int s _ {\theta} ^ {T} (x, t) \cdot \int \nabla p _ {t} (x \mid x _ {s}) \, p _ {s} (x _ {s}) \, d x _ {s} \, d x \tag {28} \\ = \iint p _ {s} \left(x _ {s}\right) p _ {t} \left(x \mid x _ {s}\right) s _ {\theta} ^ {T} (x, t) \cdot \nabla \log p _ {t} \left(x \mid x _ {s}\right) d x \, d x _ {s} \tag {29} \\ = \mathbb {E} _ {X _ {s}, X _ {t}} \left[ s _ {\theta} ^ {T} \left(X _ {t}, t\right) \cdot \nabla \log p _ {t} \left(X _ {t} \mid X _ {s}\right) \right], \tag {30} \\ \end{array} +$$ + +for $0 \leq s < t$ , where (28) uses $p_t(x)\nabla \log p_t(x) = \nabla p_t(x) = \int \nabla p_t(x \mid x_s) p_s(x_s) d x_s$ and (29) uses $\nabla p_t(x \mid x_s) = p_t(x \mid x_s)\nabla \log p_t(x \mid x_s)$ . By expanding the square in $\mathbb{E}_{X_t} \| \mathbf{s}_\theta(X_t, t) - \nabla \log p_t(X_t) \|^2$ and plugging in (26)-(30), equality (16) follows directly. + +To implement our discretized FP-Diffusion forward process, we usually choose $s = t - 1$ , the time step immediately before $t$ . Then, from $t - 1$ to $t$ , the conditional score of $p_t(x_t|x_{t-1})$ is a Gaussian score function, which is analytically tractable. + +# A.3. Discussion of Section 3.3 + +In this section, we prove Theorem 3.5 by applying Ito's formula and the martingale representation theorem. + +Recall that the time-change of Eq. 12 satisfies + +$$ +d X _ {t} = \beta^ {\prime} (t) \left(- \frac {1}{2} R ^ {- 1} - \omega\right) X _ {t} d t + \sqrt {\beta^ {\prime} (t) R ^ {- 1}} d W _ {t}, \tag {31} +$$ + +where $X_0$ is a fixed point. Let $Y_{t}\coloneqq e^{(\frac{1}{2} R^{-1} + \omega)\beta (t)}X_{t}$ ; then by Ito's formula, + +$$ +Y _ {t} = \int_ {0} ^ {t} e ^ {\left(\frac {1}{2} R ^ {- 1} + \omega\right) \beta (s)} \sqrt {\beta^ {\prime} (s) R ^ {- 1}} d W _ {s}. \tag {32} +$$ + +By the martingale representation theorem, $Y_{t}$ is a Gaussian random variable for each $t$ .
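For the scalar case ($R^{-1} = 1$, $\omega = 0$), Ito's isometry applied to (32) gives $\mathrm{Var}[Y_t] = \int_0^t e^{\beta(s)}\beta'(s)\,ds = e^{\beta(t)} - 1$, and hence $\mathrm{Var}[X_t] = e^{-\beta(t)}\,\mathrm{Var}[Y_t] = 1 - e^{-\beta(t)}$. A quadrature check, assuming an illustrative linear schedule $\beta(t) = 20t$ with $\beta(0) = 0$:

```python
import numpy as np

beta = lambda t: 20.0 * t             # assumed linear schedule, beta(0) = 0
bprime = 20.0
t = np.linspace(0.0, 1.0, 400_001)
f = np.exp(beta(t)) * bprime          # Ito isometry integrand
var_Y = np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(t))   # trapezoid rule
assert np.isclose(var_Y, np.exp(beta(1.0)) - 1.0, rtol=1e-4)

var_X = np.exp(-beta(1.0)) * var_Y    # since X_t = e^{-beta(t)/2} Y_t here
assert np.isclose(var_X, 1.0 - np.exp(-beta(1.0)), rtol=1e-4)
```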
Therefore, to fully determine the distribution of $X_{t}$ , we only need to calculate the expectation and variance of $X_{t}$ . By the definition of the stochastic integral, we have + +$$ +\mathbb {E} [ X _ {t} ] = e ^ {(- \frac {1}{2} R ^ {- 1} - \omega) \beta (t)} X _ {0}. +$$ + +Applying Ito's isometry to (32), we get + +$$ +V a r \left[ Y _ {t} \right] = \int_ {0} ^ {t} e ^ {\left(\frac {1}{2} R ^ {- 1} + \omega\right) \beta (s)} \beta^ {\prime} (s) R ^ {- 1} e ^ {\left(\frac {1}{2} R ^ {- 1} + \omega\right) ^ {T} \beta (s)} d s. +$$ + +Suppose $\omega = 0$ ; then + +$$ +V a r \left[ X _ {t} \right] = \mathbf {I} - e ^ {- \beta (t) R ^ {- 1}}, +$$ + +where $\mathbf{I}$ denotes the identity matrix of $\mathbf{R}^d$ . Suppose instead $R^{-1} = \mathbf{I}$ ; since the Lie bracket $[\mathbf{I} + 2\omega, \mathbf{I} - 2\omega] = 0$ , we further obtain + +$$ +V a r [ X _ {t} ] = \mathbf {I} - e ^ {- \beta (t) \mathbf {I}}. +$$ + +In conclusion, we have proved Theorem 3.5. + +![](images/4a8ed37f2759782a4e62b50a2266f0e8d305309c6f081b55abf08adcad8933d2.jpg) +(a) VP + +![](images/6934c65fef8f797816cd18b4a9e457bfc6bbc06cfb6803f7eca03eb2615f8996.jpg) +(b) Reg. FP +Figure 4. Integral trajectories of two SDEs + +# A.4. How to Parameterize Symmetric and Anti-symmetric Matrices + +To implement the FP-Drift and FP-Noise models in practice, we need an efficient way to parameterize positive-definite symmetric and anti-symmetric matrices. + +Given a full-rank anti-symmetric matrix $B$ , there always exists an orthogonal matrix $P$ such that + +$$ +B = P \mathrm {d i a g} \left\{\left[ \begin{array}{c c} 0 & \lambda_ {1} \\ - \lambda_ {1} & 0 \end{array} \right], \dots , \left[ \begin{array}{c c} 0 & \lambda_ {n} \\ - \lambda_ {n} & 0 \end{array} \right] \right\} P ^ {T}, +$$ + +where $\{\lambda_1,\dots ,\lambda_n\}$ are nonzero numbers.
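Both parameterizations in this subsection can be sanity-checked numerically; the truncated Taylor series below is an ad hoc stand-in for a proper matrix exponential (e.g. `scipy.linalg.expm`), and all parameter values are drawn randomly for illustration:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 4
# anti-symmetric generator H built from strictly upper triangular parameters
U = np.triu(rng.standard_normal((n, n)), k=1)
H = U - U.T

# P = exp(H) via a truncated Taylor series (adequate for this small H)
P, term = np.eye(n), np.eye(n)
for k in range(1, 40):
    term = term @ H / k
    P = P + term

assert np.allclose(P @ P.T, np.eye(n), atol=1e-8)   # P is orthogonal
assert np.isclose(np.linalg.det(P), 1.0)            # P lies in SO(n)

# positive-definite symmetric R = P diag(lambda) P^T with positive eigenvalues
lam = np.exp(rng.standard_normal(n))
R = P @ np.diag(lam) @ P.T
assert np.allclose(R, R.T) and np.all(np.linalg.eigvalsh(R) > 0)
```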
Then, the inverse of $\mathbf{I} + B$ (which appeared in subsection A.4) is: + +$$ +(\mathbf {I} + B) ^ {- 1} = P \mathrm {d i a g} \left\{\left[ \begin{array}{c c} \frac {1}{1 + \lambda_ {1} ^ {2}} & \frac {- \lambda_ {1}}{1 + \lambda_ {1} ^ {2}} \\ \frac {\lambda_ {1}}{1 + \lambda_ {1} ^ {2}} & \frac {1}{1 + \lambda_ {1} ^ {2}} \end{array} \right], \dots , \left[ \begin{array}{c c} \frac {1}{1 + \lambda_ {n} ^ {2}} & \frac {- \lambda_ {n}}{1 + \lambda_ {n} ^ {2}} \\ \frac {\lambda_ {n}}{1 + \lambda_ {n} ^ {2}} & \frac {1}{1 + \lambda_ {n} ^ {2}} \end{array} \right] \right\} P ^ {T}. +$$ + +For positive-definite symmetric matrices, there always exists an orthogonal matrix $P$ such that + +$$ +R = P \operatorname {d i a g} \left\{\lambda_ {1}, \dots , \lambda_ {n} \right\} P ^ {T}, +$$ + +where $\{\lambda_1,\dots ,\lambda_n\}$ are positive numbers. + +To apply the above method, we only need to parameterize orthogonal matrices in an efficient and expressive way. Treating orthogonal matrices as elements of the special orthogonal group $SO(n)$ , we utilize the exponential map to parameterize an orthogonal matrix $P$ : + +$$ +P = \exp H. +$$ + +Note that $H$ is an element of the Lie algebra $\mathfrak{so}(n)$ of anti-symmetric matrices, which can be parameterized by its upper triangular entries. + +# B. Experiments + +# B.1. Learned FP SDEs from Synthetic 3D Examples + +Fig. 4 plots four 3D integral trajectories of the probability flows (with respect to the fixed VP and learned FP-Diffusion models) starting at random initial positions. The trajectories of our flexible model are visibly straighter than those of the fixed VP model, which demonstrates the benefit of selecting more regular generating paths in our FP-Diffusion model. + +# B.2. Image Generation + +Implementation Details. Following (Ho et al., 2020) and (Song et al., 2020c), we rescale the range of the images into $[-1, 1]$ before inputting them into the model.
In the FP-Diffusion model, $\beta(t)$ is a linearly increasing function of the time $t$ , i.e., $\beta(t) = \bar{\beta}_{min} + t(\bar{\beta}_{max} - \bar{\beta}_{min})$ for $t \in [0, 1]$ . It is worth mentioning that DDPM adopts a discretized form of this time schedule, with $\beta_{i} = \frac{\bar{\beta}_{min}}{N} + \frac{i-1}{N(N-1)} (\bar{\beta}_{max} - \bar{\beta}_{min})$ ; the two forms are equivalent when $N \to \infty$ . For all experiments, we set $\bar{\beta}_{max}$ to 20 and $\bar{\beta}_{min}$ to 0.1, which are also used in (Ho et al., 2020) and (Song et al., 2020c). As discussed in A.4, we only need to parameterize the upper triangular matrices $H$ and the diagonal elements $\Lambda = \text{diag}\{\lambda_1, \dots, \lambda_n\}$ in the FP-Drift and FP-Noise models. In particular, both $H$ and $\Lambda$ are initialized from a multivariate normal distribution, and we apply an exponential operation to $\Lambda$ to keep it a positive vector. As described in Section 4.2, we leverage a U-net style neural network to fit the score function of the reverse-time diffusion process. We keep the model architecture and the parameters of the score networks consistent with previous SOTA diffusion models (e.g., (Song et al., 2020c)) for a fair comparison. All models are trained with the Adam optimizer with a learning rate of $2 \times 10^{-4}$ and a batch size of 96. + +![](images/6ea19f88ec22ac930830a8d983533cb0d7f2e8644c719daa101b6bc60c050c8a.jpg) +Figure 5. CIFAR-10 samples from FP-Drift + +In the MNIST experiment, we first train the whole model for 50k iterations and then train the score model for another 250k iterations with our Mix training strategy. We report the NLL of the model based on the last checkpoint. In the CIFAR-10 experiment, the training runs of both stage 1 and stage 2 are 600k iterations. We also report the FID and NLL of the model based on the last checkpoint. + +Results.
We present additional random samples generated by our best FP-Drift model in Fig. 5. These samples demonstrate the diversity and quality of the generated data. + +Furthermore, we visualize the learned forward process of the FP-Noise model in Fig. 6. This visualization provides insights into the underlying dynamics captured by the model during the noising process. + +![](images/aa96ae6d3db4801c944ae9e119d728b45765df82ce25a72d563562b94dc56540.jpg) +Figure 6. The learned forward process of FP-Noise on CIFAR-10 \ No newline at end of file diff --git a/aflexiblediffusionmodel/images.zip b/aflexiblediffusionmodel/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..5cd413149871c07d5f5bfbf50c55154e3c482882 --- /dev/null +++ b/aflexiblediffusionmodel/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:7e4d5c3ccb321fc2d8f36600d2cb47fa20453c8358e35b5a3323c8643969f770 +size 1189427 diff --git a/aflexiblediffusionmodel/layout.json b/aflexiblediffusionmodel/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..f961efd6a9708ac01c8718934beaaa5178b47d82 --- /dev/null +++ b/aflexiblediffusionmodel/layout.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:f48fcfd0894e3c517707b5da333b0e30d2a0c7500632404e5d4854aea67fc6af +size 789652 diff --git a/aframeworkforadaptingofflinealgorithmstosolvecombinatorialmultiarmedbanditproblemswithbanditfeedback/fa1a4a3b-950c-4761-8d48-fac0c38c24f4_content_list.json b/aframeworkforadaptingofflinealgorithmstosolvecombinatorialmultiarmedbanditproblemswithbanditfeedback/fa1a4a3b-950c-4761-8d48-fac0c38c24f4_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..3b36a257e23108adf814564317a5e2144257e352 --- /dev/null +++ b/aframeworkforadaptingofflinealgorithmstosolvecombinatorialmultiarmedbanditproblemswithbanditfeedback/fa1a4a3b-950c-4761-8d48-fac0c38c24f4_content_list.json @@ -0,0 +1,3 @@ +version 
https://git-lfs.github.com/spec/v1 +oid sha256:ab80a7cad7760efb0407530bfa9991481d6f105b623909a6bb9337893c050939 +size 243307 diff --git a/aframeworkforadaptingofflinealgorithmstosolvecombinatorialmultiarmedbanditproblemswithbanditfeedback/fa1a4a3b-950c-4761-8d48-fac0c38c24f4_model.json b/aframeworkforadaptingofflinealgorithmstosolvecombinatorialmultiarmedbanditproblemswithbanditfeedback/fa1a4a3b-950c-4761-8d48-fac0c38c24f4_model.json new file mode 100644 index 0000000000000000000000000000000000000000..9fa14a9ba6ab0fd3cbe58e8f136f8552b227da90 --- /dev/null +++ b/aframeworkforadaptingofflinealgorithmstosolvecombinatorialmultiarmedbanditproblemswithbanditfeedback/fa1a4a3b-950c-4761-8d48-fac0c38c24f4_model.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:59998666a249728c9dc73bf378ce5c240660263a8ed6c024488e1d36ed40c57d +size 278078 diff --git a/aframeworkforadaptingofflinealgorithmstosolvecombinatorialmultiarmedbanditproblemswithbanditfeedback/fa1a4a3b-950c-4761-8d48-fac0c38c24f4_origin.pdf b/aframeworkforadaptingofflinealgorithmstosolvecombinatorialmultiarmedbanditproblemswithbanditfeedback/fa1a4a3b-950c-4761-8d48-fac0c38c24f4_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..3f5d15c9228933cb4a31c8ced37adc1ea3fc7116 --- /dev/null +++ b/aframeworkforadaptingofflinealgorithmstosolvecombinatorialmultiarmedbanditproblemswithbanditfeedback/fa1a4a3b-950c-4761-8d48-fac0c38c24f4_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:00652aae770b46106877a422f52a0bb3e6467dc52cbbace656359db9f69f0eea +size 2399036 diff --git a/aframeworkforadaptingofflinealgorithmstosolvecombinatorialmultiarmedbanditproblemswithbanditfeedback/full.md b/aframeworkforadaptingofflinealgorithmstosolvecombinatorialmultiarmedbanditproblemswithbanditfeedback/full.md new file mode 100644 index 0000000000000000000000000000000000000000..bd497f25baee5c2755a6dfb32144f00c236f1f85 --- /dev/null +++ 
b/aframeworkforadaptingofflinealgorithmstosolvecombinatorialmultiarmedbanditproblemswithbanditfeedback/full.md @@ -0,0 +1,1153 @@ +# A Framework for Adapting Offline Algorithms to Solve Combinatorial Multi-Armed Bandit Problems with Bandit Feedback + +Guanyu Nie1 Yididiya Y Nadew1 Yanhui Zhu1 Vaneet Aggarwal2,3 Christopher John Quinn1 + +# Abstract + +We investigate the problem of stochastic, combinatorial multi-armed bandits where the learner only has access to bandit feedback and the reward function can be non-linear. We provide a general framework for adapting discrete offline approximation algorithms into sublinear $\alpha$ -regret methods that only require bandit feedback, achieving $\mathcal{O}\left(T^{\frac{2}{3}}\log (T)^{\frac{1}{3}}\right)$ expected cumulative $\alpha$ -regret dependence on the horizon $T$ . The framework only requires the offline algorithms to be robust to small errors in function evaluation. The adaptation procedure does not even require explicit knowledge of the offline approximation algorithm — the offline algorithm can be used as a black-box subroutine. To demonstrate its utility, the proposed framework is applied to multiple problems in submodular maximization, adapting approximation algorithms for cardinality and for knapsack constraints. The new CMAB algorithms for knapsack constraints outperform a full-bandit method developed for the adversarial setting in experiments with real-world data. + +# 1. Introduction + +Many real-world sequential decision problems can be modeled using the framework of stochastic multi-armed bandits (MAB), such as scheduling, assignment problems, ad campaigns, and product recommendations, among others. The decision maker sequentially selects actions and receives stochastic rewards from an unknown distribution.
The goal of the decision maker is to maximize the expected cumulative reward over a (possibly unknown) time horizon. + +1Computer Science Department, Iowa State University, Ames, IA, USA 2School of IE and School of ECE, Purdue University, West Lafayette, IN, USA 3Computer Science Program, KAUST, Saudi Arabia. Correspondence to: Christopher John Quinn . + +Proceedings of the $40^{th}$ International Conference on Machine Learning, Honolulu, Hawaii, USA. PMLR 202, 2023. Copyright 2023 by the author(s). + +Actions result in both the immediate reward and, more importantly, information about that action's reward distribution. Such problems present a trade-off between trying actions the agent is uncertain about (exploring) and taking only the action that is empirically the best seen so far (exploiting). + +In the classic MAB setting, the number of possible actions is small relative to the time horizon, meaning each action can be taken at least once, and there is no assumed relationship between the reward distributions of different arms. The combinatorial multi-armed bandit (CMAB) setting involves a large but structured action space. For example, in product recommendation problems, the decision maker may select a subset of products (base arms) from among a large set. Several aspects can affect the difficulty of these problems. First, MAB methods are typically compared against a learner with access to a value oracle for the reward function (an offline problem). For some problems, it is NP-hard even for the baseline learner with value oracle access to optimize. An example is when the expected/averaged reward function is submodular and actions are subsets constrained by cardinality. At best, for these problems, approximation algorithms may exist.
Thus, unless the time horizon is large (exponentially long in the number of base arms, for instance), it is more reasonable to compare the CMAB agent against the performance of the approximation algorithm for the related offline problem. Likewise, one could apply state-of-the-art methods for (unstructured) MAB problems, treating each subset as a separate arm, and obtain $\tilde{\mathcal{O}}(T^{\frac{1}{2}})$ dependence on the horizon $T$ for the resulting regret bound. However, that dependence would only apply for exponentially large $T$ . + +Feedback plays an important role in how challenging the problem is. When the decision maker only observes a (numerical) reward for the action taken, that is known as bandit or full-bandit feedback. When the decision maker observes additional information, such as contributions of each base arm in the action, that is semi-bandit feedback. Semi-bandit feedback greatly facilitates learning. Suppose, for instance, that the reward function (on average) is monotone increasing over the inclusion lattice and there is a cardinality constraint of size $k$ . The agent would know from the start that no set of size smaller than $k$ could be optimal (or could even be the near-optimal solution the baseline learner using a value oracle would find). However, there would be $\binom{n}{k}$ sets of size $k$ . For $n = 100$ and $k = 10$ , the agent would need a horizon $T > 10^{12}$ to try each cardinality- $k$ set even just once. If the reward function belongs to a certain class, such as the class of submodular functions, then one approach would be to use a greedy procedure based on base arm values. With semi-bandit feedback, the agent could take only actions of cardinality $k$ (putatively optimal actions), gain the subsequent rewards, and yet also observe samples of the base arms' values to improve future actions.
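The count in the cardinality example can be checked directly:

```python
from math import comb

# number of size-k subsets of n base arms for n = 100, k = 10
assert comb(100, 10) == 17_310_309_456_440
assert comb(100, 10) > 10**12   # trying each size-10 set once needs T > 10^12
```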
In general, for non-linear reward functions, the individual values or marginal gains of base arms can only be loosely bounded if actions only consist of maximal subsets. Thus, to estimate values or marginal gains of base arms, the agent would need to deliberately spend time sampling actions (such as smaller sets) that are known to be sub-optimal in order to estimate their values to later better select actions of cardinality $k$ . Standard MAB methods like UCB or TS based methods by design do not take actions known to be sub-optimal. Thus, while such strategies could be used when semi-bandit feedback is available, it is less clear whether they can be effectively used when only bandit feedback is available. + +There are important applications where semi-bandit feedback may not be available, such as in influence maximization and recommender systems. Influence maximization models the problem of identifying a low-cost subset (seed set) of nodes in a (known) social network that can influence the maximum number of nodes in a network (Nguyen and Zheng, 2013; Leskovec et al., 2007; Bian et al., 2020). Recent research has generalized the problem to online settings where the knowledge of the network and diffusion model is not required (Wang et al., 2020; Perrault et al., 2020) but extra feedback is assumed. However, for many networks the user interactions and user accounts are private; only aggregate feedback (such as the count of individuals using a coupon code or going to a website) might be visible to the decision maker. + +In this work, we seek to address these challenges by proposing a general framework for adapting offline approximation algorithms into algorithms for stochastic CMAB problems when only bandit feedback is available. 
We identify a single condition, related to the robustness of the approximation algorithm to erroneous function evaluations, that is sufficient to guarantee that a simple explore-then-commit (ETC) procedure accessing the approximation algorithm as a black box results in a sublinear $\alpha$-regret CMAB algorithm despite having only bandit feedback available. The approximation algorithm does not need to have any special structure (such as an iterative greedy design). Importantly, no effort is needed on the part of the user in mapping steps of the offline method into steps of the CMAB method.

We demonstrate the utility of this framework by assessing the robustness of several approximation algorithms in the submodular optimization literature (three approximation algorithms designed for knapsack constraints and one designed for cardinality constraints), which immediately yields sublinear $\alpha$-regret CMAB algorithms that rely only on bandit feedback, the first such algorithms for CMAB problems with submodular rewards and knapsack constraints. We also show that, despite the simplicity and universal design of the adaptation, the resulting CMAB algorithms work well on budgeted influence maximization and song recommendation problems using real-world data.

The main contributions of this paper can be summarized as:

1. We provide a general framework for adapting discrete offline approximation algorithms into sublinear $\alpha$-regret methods for stochastic CMAB problems where only bandit feedback is available. The framework only requires the offline algorithms to be robust to small errors in function evaluation, a property important in its own right for offline problems. The algorithms are not required to have a special structure — instead they are used as black boxes. Our procedure has minimal storage and time-complexity overhead, and achieves a regret bound with $\tilde{\mathcal{O}}(T^{\frac{2}{3}})$ dependence on the horizon $T$.

2.
We illustrate the utility of the proposed framework by assessing the robustness of several approximation algorithms for (offline) constrained submodular optimization, a class of reward functions lacking the simplifying properties of linear or Lipschitz reward functions. Specifically, we prove the robustness of the approximation algorithms given in (Nemhauser et al., 1978; Badanidiyuru and Vondrak, 2014; Sviridenko, 2004; Khuller et al., 1999; Yaroslavtsev et al., 2020) with cardinality or knapsack constraints, and use the general framework to give regret bounds for the stochastic CMAB problem. In particular, we note that this paper gives the first regret bounds for stochastic submodular CMAB with knapsack constraints under bandit feedback.

3. We evaluate the performance of the proposed framework on the stochastic submodular CMAB problem with knapsack constraints for two applications: Budgeted Influence Maximization and Song Recommendation. The evaluation results demonstrate that the proposed approach significantly outperforms a full-bandit method for a related problem in the adversarial setting.

# 2. Related Work

We now briefly discuss only the most closely related works. See the supplementary material for further discussion.

**Adversarial CMAB** The closest related works are on adversarial CMAB. In (Niazadeh et al., 2021), the authors propose a framework for transforming greedy $\alpha$-approximation algorithms for offline problems into online methods in an adversarial bandit setting, for both semi-bandit feedback (achieving $\widetilde{O}(T^{1/2})$ $\alpha$-regret) and full-bandit feedback (achieving $\widetilde{O}(T^{2/3})$ $\alpha$-regret). Their framework requires the offline approximation algorithm to have an iterative greedy structure (unlike ours), satisfy a robustness property (like ours), and satisfy a property referred to as Blackwell reducibility (unlike ours).
In addition to these conditions, the adaptation depends on the number of subproblems (greedy iterations), which for some algorithms can be known ahead of time (such as with cardinality constraints) but for other algorithms can only be upper-bounded. (Our adaptation uses the offline algorithm as a black box.) The authors check those conditions and explicitly adapt several offline approximation algorithms. In this paper, we consider an approach for converting offline approximation algorithms to online algorithms for stochastic CMAB, while requiring fewer assumptions.

We also note that (Niazadeh et al., 2021) do not consider submodular CMAB with knapsack constraints, and thus do not verify whether any approximation algorithms for the offline problem satisfy the required properties (sub-problem structure, robustness, or Blackwell reducibility) to be transformed; this is an example we consider for our general framework. Consequently, in our experiments for submodular CMAB with knapsack constraints in Section 7, we use the algorithm in (Streeter and Golovin, 2008), designed for a knapsack constraint (in expectation), as representative of methods for the adversarial setting. Other related works for adversarial stochastic CMAB are described in Appendix H.

**Stochastic Submodular CMAB with Full Bandit Feedback** Recently, Nie et al. (2022) proposed an algorithm for stochastic MAB with submodular rewards under a cardinality constraint. Their algorithm is a specific adaptation of an offline greedy method. In our work, we propose a general framework that employs the offline algorithm as a black box (and this result becomes a special case of our approach). While there are multiple results for semi-bandit feedback (see Appendix H.4), this paper considers full bandit feedback.

# 3. Problem Statement

We consider sequential, combinatorial decision-making problems over a finite time horizon $T$. Let $\Omega$ denote the ground set of base elements (arms).
Let $n = |\Omega|$ denote the number of arms. Let $D \subseteq 2^{\Omega}$ denote the set of feasible actions (subsets), for which we presume membership can be efficiently evaluated. We will later consider applications with cardinality and knapsack constraints, though our methods are not limited to those. We will use the terms subset and action interchangeably throughout the paper.

At each time step $t$, the learner selects a feasible action $A_{t} \in D$. After the subset $A_{t}$ is selected, the learner receives reward $f_{t}(A_{t})$. We assume the reward $f_{t}$ is stochastic, bounded in $[0,1]$, and i.i.d. conditioned on a given subset. Define the expected reward function as $f(A) = \mathbb{E}[f_t(A)]$.

The goal of the learner is to maximize the cumulative reward $\sum_{t=1}^{T} f_t(A_t)$. One common way to measure the performance of the algorithm is to compare the learner to an agent with access to a value oracle for $f$. However, if optimizing $f$ over $D$ is NP-hard, such a comparison would not be meaningful unless the horizon is exponentially large in the problem parameters.

If there is a known approximation algorithm $\mathcal{A}$ with approximation ratio $\alpha \in (0,1]$ for optimizing $f$ over $D$, a more natural alternative is to evaluate the performance of a CMAB algorithm against what $\mathcal{A}$ could achieve.
Thus, we consider the expected cumulative $\alpha$-regret $\mathcal{R}_{\alpha,T}$, the difference between $\alpha$ times the cumulative expected reward of the optimal subset and the cumulative reward received by the learner (we write $\mathcal{R}_T$ when $\alpha$ is understood from context):

$$
\mathbb{E}[\mathcal{R}_{T}] = \alpha T f(\mathrm{OPT}) - \mathbb{E}\left[\sum_{t=1}^{T} f_{t}\left(A_{t}\right)\right], \tag{1}
$$

where OPT is the optimal solution, i.e., $\mathrm{OPT} \in \arg\max_{A \in D} f(A)$, and the expectations are over both the random rewards and the sequence of actions.

# 4. Robustness of Offline Algorithms

In this section, we introduce a criterion for an offline approximation algorithm's sensitivity to (bounded) additive perturbations of function evaluations. Investigating the robustness of approximation algorithms in offline settings is valuable in its own right. Importantly, we will show that this property alone is sufficient to guarantee that the offline algorithm can be adapted to solve analogous combinatorial multi-armed bandit (CMAB) problems with just bandit feedback and yet achieve sub-linear regret. Furthermore, the CMAB adaptation will not rely on any special structure of the algorithm design, instead employing it as a black box.

Definition 4.1 ($(\alpha, \delta)$-Robust Approximation). An algorithm $\mathcal{A}$ is an $(\alpha, \delta)$-robust approximation algorithm for the combinatorial optimization problem of maximizing a function $f: D \to \mathbb{R}$ over a finite domain $D \subseteq 2^{\Omega}$ if, whenever $|f(S) - \hat{f}(S)| < \epsilon$ for all $S \in D$ for some $\epsilon > 0$, its output $S^{*}$ using a value oracle for $\hat{f}$ satisfies the following relation with the optimal solution OPT under $f$:

$$
f(S^{*}) \geq \alpha f(\mathrm{OPT}) - \delta \epsilon .
$$

Note that the perturbed $\hat{f}$ is not required to be in the same class as $f$ (linear, quadratic, submodular, etc.). Thus, this definition is a stronger notion of robustness than one limited to functions $\hat{f}$ in the same class that have bounded $L_{\infty}$ distance from $f$.

For (unstructured) $k$-armed bandit problems, one can view the analogous offline algorithm with access to a value oracle for the elements as first evaluating each arm ($D = \{\{1\}, \{2\}, \dots, \{k\}\}$), for $N = k$ queries in total, and then taking the arg max over the $k$ values. That algorithm is trivially a $(1, 2)$-robust approximation algorithm.

Remark 4.2. In (Niazadeh et al., 2021), there is a related definition of robustness for offline approximation algorithms. That definition, and the subsequent offline-to-online adaptation procedure, is restricted to approximation algorithms with an iterative greedy structure. The criterion in Definition 4.1 does not require the approximation algorithm to have an iterative greedy structure.

To illustrate the utility of our proposed framework, in Section 6 we will show that several approximation algorithms from the constrained submodular maximization literature are $(\alpha, \delta)$-robust, leading to new sublinear $\alpha$-regret algorithms for related stochastic CMAB problems with submodular rewards.

# 5. C-ETC Algorithm: Offline to Stochastic

In this section, we present our proposed algorithm for adapting offline approximation algorithms to stochastic CMAB, Combinatorial Explore-Then-Commit (C-ETC). The pseudo-code is shown in Algorithm 1. The algorithm takes an offline $(\alpha, \delta)$-robust algorithm $\mathcal{A}$ and an upper bound $N$ on the number of oracle queries made by $\mathcal{A}$. In the exploration phase, when the offline algorithm queries the value oracle for an action $A$, C-ETC plays action $A$ $m$ times, where $m$ is a constant chosen to minimize regret.
C-ETC then computes the empirical mean $\bar{f}$ of the rewards for $A$ and feeds $\bar{f}$ back to the offline algorithm $\mathcal{A}$. In the exploitation phase, C-ETC keeps playing the solution $S$ output by algorithm $\mathcal{A}$. Thus, the CMAB procedure does not need $\mathcal{A}$ to have any special structure. No careful construction is needed for the CMAB procedure beyond running $\mathcal{A}$. All that is needed is checking robustness (Definition 4.1). Also, there is no overhead in terms of storage and per-round time complexities: C-ETC is as efficient as the offline algorithm $\mathcal{A}$ itself.

Now we analyze the $\alpha$-regret for C-ETC (Algorithm 1).

Theorem 5.1. For the sequential decision-making problem defined in Section 3 and $T \geq \frac{2\sqrt{2}N}{\delta}$, the expected cumulative $\alpha$-regret of C-ETC using an $(\alpha, \delta)$-robust approximation algorithm as a subroutine is at most $\mathcal{O}\left(\delta^{\frac{2}{3}}N^{\frac{1}{3}}T^{\frac{2}{3}}\log (T)^{\frac{1}{3}}\right)$, where $N$ upper-bounds the number of value oracle queries made by the offline algorithm $\mathcal{A}$.

# Algorithm 1 Combinatorial Explore-then-Commit

Input: horizon $T$, set of base elements $\Omega$, an offline $(\alpha, \delta)$-robust algorithm $\mathcal{A}$, and an upper bound $N$ on the number of $\mathcal{A}$'s queries to the value oracle

Initialize $m \leftarrow \left\lceil \frac{\delta^{2/3} T^{2/3} \log(T)^{1/3}}{2 N^{2/3}} \right\rceil$

// Exploration Phase //

while $\mathcal{A}$ queries the value of some $A\subseteq \Omega$ do

For $m$ times, play action $A$

Calculate the empirical mean $\bar{f}$

Return $\bar{f}$ to $\mathcal{A}$

end while

// Exploitation Phase //

for remaining time do

Play action $S$ output by algorithm $\mathcal{A}$

end for

The detailed proof is in the supplementary material. We highlight some key steps.
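Because C-ETC treats $\mathcal{A}$ purely as a black box, the whole procedure fits in a few lines. The following is a minimal illustrative sketch (not the authors' implementation): `offline_alg` is any black-box routine that interacts with the problem only through the value oracle passed to it, and `play` is an assumed environment interface returning one stochastic reward per round.

```python
import math

def c_etc(offline_alg, play, T, N, delta):
    """Sketch of Algorithm 1 (Combinatorial Explore-Then-Commit).

    offline_alg: black-box (alpha, delta)-robust offline algorithm; it must
        access the problem only through the value oracle we hand it.
    play: environment interface; play(A) returns one stochastic reward in [0, 1].
    T: horizon; N: upper bound on offline_alg's oracle queries;
    delta: robustness coefficient from Definition 4.1.
    """
    # Number of plays per queried action, as initialized in Algorithm 1.
    m = math.ceil(delta ** (2 / 3) * T ** (2 / 3) * math.log(T) ** (1 / 3)
                  / (2 * N ** (2 / 3)))
    state = {"t": 0, "total": 0.0}

    def empirical_oracle(A):
        # Exploration phase: play the queried action m times and
        # return the empirical mean reward to the offline algorithm.
        s = 0.0
        for _ in range(m):
            r = play(A)
            s += r
            state["total"] += r
            state["t"] += 1
        return s / m

    S = offline_alg(empirical_oracle)
    # Exploitation phase: commit to S for the remaining rounds.
    while state["t"] < T:
        state["total"] += play(S)
        state["t"] += 1
    return S, state["total"]
```

Note that the wrapper never inspects the internals of `offline_alg`; it only intercepts its oracle queries, which is exactly why no structural assumptions beyond robustness are needed.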
We show that, with high probability, the empirical means of all actions taken during the exploration phase are within $\mathrm{rad} = \sqrt{\frac{\log T}{2m}}$ of their corresponding statistical means. As is common in proofs for ETC methods, we refer to this occurrence as the clean event $\mathcal{E}$. Then, using an $(\alpha, \delta)$-robust approximation algorithm as a subroutine guarantees the quality of the set $S$ used in the exploitation phase of Algorithm 1:

$$
f(S) \geq \alpha f(\mathrm{OPT}) - \delta \cdot \mathrm{rad}. \tag{2}
$$

We then break up the expected cumulative $\alpha$-regret conditioned on the clean event $\mathcal{E}$ into the two phases, writing $A_i$ for the $i$-th action queried during exploration and $T_N = Nm$ for the length of the exploration phase:

$$
\mathbb{E}[\mathcal{R}(T) \mid \mathcal{E}] = \underbrace{\sum_{i=1}^{N} m\left(\alpha f(\mathrm{OPT}) - \mathbb{E}[f(A_i)]\right)}_{\text{exploration phase}} + \underbrace{\sum_{t=T_N+1}^{T} \left(\alpha f(\mathrm{OPT}) - \mathbb{E}[f(S)]\right)}_{\text{exploitation phase}}. \tag{3}
$$

Using the fact that the reward is bounded in $[0,1]$, we have

$$
\mathbb{E}[\mathcal{R}(T) \mid \mathcal{E}] \leq Nm + T\delta\,\mathrm{rad}.
$$

Optimizing over $m$ then results in

$$
\mathbb{E}[\mathcal{R}(T) \mid \mathcal{E}] = \mathcal{O}\left(\delta^{\frac{2}{3}} N^{\frac{1}{3}} T^{\frac{2}{3}} \log(T)^{\frac{1}{3}}\right).
$$

We then show that because the clean event $\mathcal{E}$ happens with high probability, the expected cumulative regret $\mathbb{E}[\mathcal{R}(T)]$ is dominated by $\mathbb{E}[\mathcal{R}(T)\mid\mathcal{E}]$, which concludes the proof.
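The optimization over $m$ in the last step is elementary; for completeness, setting the derivative of the upper bound $Nm + T\delta\sqrt{\log T/(2m)}$ to zero gives

$$
N - \frac{T\delta}{2}\sqrt{\frac{\log T}{2}}\, m^{-3/2} = 0
\quad\Longrightarrow\quad
m = \left(\frac{T\delta}{2N}\right)^{2/3}\!\left(\frac{\log T}{2}\right)^{1/3}
  = \frac{\delta^{2/3}\, T^{2/3}\, \log(T)^{1/3}}{2\, N^{2/3}},
$$

which matches the initialization of $m$ in Algorithm 1. Substituting this $m$ back (and ignoring the ceiling) gives $Nm + T\delta\,\mathrm{rad} = \frac{3}{2}\,\delta^{2/3} N^{1/3} T^{2/3} \log(T)^{1/3}$, of the stated order.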
**Lower bounds:** For the general setting we explore in this paper, with stochastic (or even adversarial) combinatorial MAB and only bandit feedback, it is unknown whether $\tilde{\mathcal{O}}(T^{1/2})$ expected cumulative $\alpha$-regret is possible (ignoring problem parameters like $n$). For special cases, such as linear reward functions, $\tilde{\mathcal{O}}(T^{1/2})$ is known to be achievable even with bandit feedback. Even for the special case of submodular reward functions and a cardinality constraint, it remains an open question. (Niazadeh et al., 2021) obtain $\tilde{\Omega}(T^{2/3})$ lower bounds for the harder setting where feedback is only available during "exploration" rounds chosen by the agent, who incurs an associated penalty.

Remark 5.2. C-ETC uses knowledge of the horizon $T$ to optimize the number $m$ of samples per action. When the time horizon $T$ is not known, we can use the geometric doubling trick to extend our result to an anytime algorithm. We refer to the general detailed procedure in (Besson and Kaufmann, 2018). From Theorem 4 of (Besson and Kaufmann, 2018), we can show that the regret bound preserves the original $T^{2/3} \log(T)^{1/3}$ dependence, with changes only in the constant factors.

# 6. Applications to Submodular Maximization

In this section, we apply our general framework to stochastic CMAB problems with monotone submodular rewards where only bandit feedback is available. This application yields the first sublinear $\alpha$-regret CMAB algorithms for knapsack constraints under bandit feedback. We begin with brief background, analyze the robustness of offline approximation algorithms, and then obtain problem-independent regret bounds.

# 6.1. Background and Definitions

Denote the marginal gain $f(e|A) = f(A \cup e) - f(A)$ and the marginal density $\rho(e|A) = \frac{f(A \cup e) - f(A)}{c(e)}$ for any subset $A \subseteq \Omega$ and element $e \in \Omega \setminus A$.
A set function $f: 2^{\Omega} \to \mathbb{R}$ defined on a finite ground set $\Omega$ is said to be submodular if it satisfies the diminishing-returns property: for all $A \subseteq B \subseteq \Omega$ and $e \in \Omega \setminus B$, it holds that $f(e|A) \geq f(e|B)$. A set function is said to be monotonically non-decreasing if $f(A) \leq f(B)$ for all $A \subseteq B \subseteq \Omega$. Our aim is to find a set $S$ such that $f(S)$ is maximized subject to some constraints.

For knapsack constraints, we assume that the cost function $c:\Omega \to \mathbb{R}_{>0}$ is known and linear, so the cost of a subset is the sum of the costs of its individual items: $c(A) = \sum_{v \in A} c(v)$. To simplify the presentation, we exclude trivially large budgets $B > \sum_{v \in \Omega} c(v)$ and assume all items have non-trivial costs $0 < c(v) \leq B$. A cardinality constraint is the special case with unit costs.

In the following, we consider both types of constraints: cardinality and knapsack. Maximizing a monotone submodular set function under a $k$-cardinality constraint is NP-hard even with a value oracle (Nemhauser et al., 1978). The best achievable approximation ratio with a polynomial-time algorithm is $1 - 1/e$ (Nemhauser et al., 1978), using $\mathcal{O}(nk)$ oracle calls. In Badanidiyuru and Vondrak (2014), $1 - 1/e - \epsilon'$ is achieved within $\mathcal{O}\left(\frac{n}{\epsilon'}\log \frac{n}{\epsilon'}\right)$ time, where $\epsilon'$ is a user-selected parameter balancing accuracy and time complexity.

Maximizing a monotone submodular set function under a knapsack constraint is consequently also NP-hard (Khuller et al., 1999). The best achievable approximation ratio with a polynomial-time algorithm is $1 - 1/e$ (Sviridenko, 2004; Khuller et al., 1999), but achieving it requires $\mathcal{O}(n^5)$ function evaluations, making it prohibitive for many applications.
There are other offline algorithms that achieve worse approximation ratios but are much more efficient. We adapt a $\frac{1}{2}$-approximation algorithm (Yaroslavtsev et al., 2020) and a $\frac{1}{2}(1 - 1/e)$-approximation algorithm (Khuller et al., 1999), both of which use $\mathcal{O}(n^2)$ function evaluations. Another algorithm was proposed recently in (Li et al., 2022), but since it queries infeasible sets (i.e., it evaluates some subsets whose cost exceeds the budget $B$), we do not consider it (see Appendix H for more details).

# 6.2. Offline Approximation Algorithms - Robustness

For an overview of offline approximation algorithms for submodular optimization, please refer to Appendix B. We next state our results on the $(\alpha, \delta)$-robustness of the offline algorithms considered. Complete, noiseless access to a value oracle is often a strong assumption for real-world applications, so even for offline applications it is worthwhile to know how robust an algorithm is; the following results are thus relevant even in the offline setting. For the CMAB setting we consider, robustness is also a sufficient property to guarantee a no-regret adaptation of the offline algorithm. Detailed proofs are included in Appendix C of the supplementary material.

Proposition 6.1 (Corollary 4.3 of Nie et al. (2022)). GREEDY (Nemhauser et al., 1978) is a $(1 - \frac{1}{e}, 2k)$-robust approximation algorithm for submodular maximization under a $k$-cardinality constraint.

Proposition 6.2. THRESHOLDGREEDY (Badanidiyuru and Vondrak, 2014) is a $(1 - \frac{1}{e} - \epsilon', 2(2 - \epsilon')k)$-robust approximation algorithm for submodular maximization under a $k$-cardinality constraint.

Proposition 6.3. PARTIALENUMERATION (Sviridenko, 2004; Khuller et al., 1999) is a $(1 - \frac{1}{e}, 4 + 2\tilde{K} + 2\beta)$-robust approximation algorithm for submodular maximization under a knapsack constraint.

Proposition 6.4.
GREEDY+MAX (Yaroslavtsev et al., 2020) is a $(\frac{1}{2}, \frac{1}{2} + \tilde{K} + 2\beta)$-robust approximation algorithm for submodular maximization under a knapsack constraint.

Proposition 6.5. GREEDY+ (Khuller et al., 1999) is a $(\frac{1}{2}(1 - \frac{1}{e}), 2 + \tilde{K} + \beta)$-robust approximation algorithm for submodular maximization under a knapsack constraint.

Remark 6.6. For the offline setting, GREEDY+MAX is superior to GREEDY+, as it achieves a better approximation ratio $\alpha$ with the same number of calls to the value oracle. However, their $(\alpha, \delta)$ pairs are incomparable: for $\beta > 1.5$ (with $\beta = 1$ corresponding to a cardinality constraint), GREEDY+ has a smaller $\delta$ (and is thus more robust), which affects the exploration time of the adaptations and in turn affects their regret.

To illustrate the robustness analysis, we highlight some key steps of the proof of Proposition 6.4 for GREEDY+MAX. Let $o_1 \in \arg \max_{e \in \mathrm{OPT}} c(e)$ denote the most expensive element of OPT. Inspired by the proof techniques in (Yaroslavtsev et al., 2020), we consider the last item added to the greedy solution (based on noisy evaluations) before the cost of this solution exceeds $B - c(o_1)$. Let $G_i$ denote the set selected by GREEDY that has cardinality $i$, with constituent elements $G_i = \{g_1, \dots, g_i\}$. Let $G_\ell$ be the largest greedy prefix that consumes at most $B - c(o_1)$ of the budget $B$, so that $c(G_\ell) \leq B - c(o_1) < c(G_{\ell + 1})$. Let $S_i$ denote the augmented set at the $i$-th iteration and $S$ the final output of the algorithm. Denote $\hat{f}(e|S) := \hat{f}(S \cup e) - \hat{f}(S)$ and $\hat{\rho}(e|S) := \frac{\hat{f}(S \cup e) - \hat{f}(S)}{c(e)}$. We prove the following lemma.

Lemma 6.7 (GREEDY+MAX inequality).
For $i \in \{0,1,\dots,\ell\}$, the following inequality holds:

$$
\begin{array}{l} \hat{f}\left(G_{i} \cup o_{1}\right) + \max\{0, \hat{\rho}\left(g_{i+1} \mid G_{i}\right)\}\left(B - c\left(o_{1}\right)\right) \\ \geq f(\mathrm{OPT}) - (2\tilde{K} - 1)\epsilon . \\ \end{array}
$$

For $i = \ell$, Lemma 6.7 tells us that at least one of the following two cases holds:

$$
\hat{f}\left(G_{\ell} \cup o_{1}\right) \geq \frac{1}{2} f(\mathrm{OPT}) - \left(\tilde{K} - \frac{1}{2} + \gamma\right)\epsilon , \quad \text{or}
$$

$$
\hat{\rho}\left(g_{\ell+1} \mid G_{\ell}\right)\left(B - c\left(o_{1}\right)\right) \geq \frac{1}{2} f(\mathrm{OPT}) - \left(\tilde{K} - \frac{1}{2} - \gamma\right)\epsilon ,
$$

where $\gamma$ will be selected later to minimize the coefficient $\delta$ of the additive error.

If $\hat{f}(G_{\ell} \cup o_1) \geq \frac{1}{2} f(\mathrm{OPT}) - \left(\tilde{K} - \frac{1}{2} + \gamma\right)\epsilon$, then denote $a_{\ell} = \arg \max_{e \in \Omega \setminus G_{\ell}} \hat{f}(e|G_{\ell})$, the element selected to augment $G_{\ell}$. We have

$$
\begin{array}{l} \hat{f}\left(G_{\ell} \cup a_{\ell}\right) \geq \hat{f}\left(G_{\ell} \cup o_{1}\right) \\ \geq \frac{1}{2} f(\mathrm{OPT}) - \left(\tilde{K} - \frac{1}{2} + \gamma\right)\epsilon . \tag{4} \\ \end{array}
$$

Then the final output $S$ of the algorithm satisfies

$$
\begin{array}{l} f(S) \geq \hat{f}(S) - \epsilon \\ \geq \hat{f}(G_{\ell} \cup a_{\ell}) - \epsilon \\ \geq \frac{1}{2} f(\mathrm{OPT}) - \left(\tilde{K} + \frac{1}{2} + \gamma\right)\epsilon .
\quad (\text{using (4)}) \\ \end{array}
$$

If $\hat{\rho}(g_{\ell+1}|G_{\ell})(B - c(o_1)) \geq \frac{1}{2} f(\mathrm{OPT}) - (\tilde{K} - \frac{1}{2} - \gamma)\epsilon$, rearranging gives

$$
\hat{\rho}\left(g_{\ell+1} \mid G_{\ell}\right) \geq \frac{f(\mathrm{OPT})}{2\left(B - c\left(o_{1}\right)\right)} - \frac{\left(\tilde{K} - \frac{1}{2} - \gamma\right)\epsilon}{B - c\left(o_{1}\right)}. \tag{5}
$$

Moreover,

$$
\begin{array}{l} \hat{f}(G_{\ell}) = \sum_{j=0}^{\ell-1} \hat{\rho}(g_{j+1} | G_{j}) c(g_{j+1}) \\ \geq \sum_{j=0}^{\ell-1} \hat{\rho}\left(g_{\ell+1} \mid G_{j}\right) c\left(g_{j+1}\right) \quad (6) \\ \geq \sum_{j=0}^{\ell-1} \left(\rho\left(g_{\ell+1} \mid G_{j}\right) - \frac{2\epsilon}{c\left(g_{\ell+1}\right)}\right) c\left(g_{j+1}\right) \\ \geq \sum_{j=0}^{\ell-1} \left(\rho\left(g_{\ell+1} \mid G_{\ell}\right) - \frac{2\epsilon}{c\left(g_{\ell+1}\right)}\right) c\left(g_{j+1}\right) \quad (7) \\ = \left(\rho\left(g_{\ell+1} \mid G_{\ell}\right) - \frac{2\epsilon}{c\left(g_{\ell+1}\right)}\right) c\left(G_{\ell}\right) \\ \geq \left(\hat{\rho}\left(g_{\ell+1} \mid G_{\ell}\right) - \frac{4\epsilon}{c\left(g_{\ell+1}\right)}\right) c\left(G_{\ell}\right) \\ \geq \hat{\rho}\left(g_{\ell+1} \mid G_{\ell}\right) c\left(G_{\ell}\right) - 4\beta\epsilon , \quad (8) \\ \end{array}
$$

where (6) follows from the greedy selection rule, (7) follows from the submodularity of $f$, and (8) follows from the definition of $\beta$.
We then have

$$
\begin{array}{l} \hat{f}(G_{\ell+1}) \\ = \hat{f}(G_{\ell}) + c(g_{\ell+1}) \hat{\rho}(g_{\ell+1} | G_{\ell}) \\ \geq \left(\hat{\rho}\left(g_{\ell+1} \mid G_{\ell}\right) c\left(G_{\ell}\right) - 4\beta\epsilon\right) + c\left(g_{\ell+1}\right) \hat{\rho}\left(g_{\ell+1} \mid G_{\ell}\right) \quad (9) \\ = \hat{\rho}\left(g_{\ell+1} \mid G_{\ell}\right) c\left(G_{\ell+1}\right) - 4\beta\epsilon \\ \geq \frac{\frac{1}{2} f(\mathrm{OPT}) - \left(\tilde{K} - \frac{1}{2} - \gamma\right)\epsilon}{B - c\left(o_{1}\right)} c\left(G_{\ell+1}\right) - 4\beta\epsilon \quad (10) \\ \geq \frac{1}{2} f(\mathrm{OPT}) - \left(\tilde{K} - \frac{1}{2} - \gamma\right)\epsilon - 4\beta\epsilon \quad (11) \\ \end{array}
$$

$$
= \frac{1}{2} f(\mathrm{OPT}) - \left(\tilde{K} - \frac{1}{2} - \gamma + 4\beta\right)\epsilon , \tag{12}
$$

where (9) follows from (8), (10) follows from (5), and (11) follows because the chosen $\ell$ satisfies $c(G_{\ell+1}) > B - c(o_1)$. Thus, the final output $S$ of the algorithm satisfies

$$
\begin{array}{l} f(S) \geq \hat{f}(S) - \epsilon \\ \geq \hat{f}(G_{\ell+1}) - \epsilon \\ \geq \frac{1}{2} f(\mathrm{OPT}) - \left(\tilde{K} + \frac{1}{2} - \gamma + 4\beta\right)\epsilon . \\ \end{array}
$$

Finally, combining the two cases and selecting $\gamma = 2\beta$ (which equalizes the two additive-error coefficients at $\tilde{K} + \frac{1}{2} + 2\beta$) completes the proof.

# 6.3. CMAB algorithms for Submodular Rewards with Knapsack Constraints

Now that we have analyzed the robustness of several offline algorithms, we can invoke Theorem 5.1 to bound the expected cumulative $\alpha$-regret of stochastic CMAB adaptations that rely only on bandit feedback.
We name the adapted algorithms C-ETC-N and C-ETC-B (for the cardinality constraint) and C-ETC-S, C-ETC-K, and C-ETC-Y (for the knapsack constraint), based on the last name of the first author of the offline algorithm each is adapted from: (Nemhauser et al., 1978; Badanidiyuru and Vondrak, 2014; Sviridenko, 2004; Khuller et al., 1999; Yaroslavtsev et al., 2020), in that order. PARTIALENUMERATION was first proposed and analyzed by Khuller et al. (1999) for maximum coverage problems and later analyzed by Sviridenko (2004) for monotone submodular functions. Since the CMAB adaptations of both GREEDY+ and PARTIALENUMERATION would otherwise be attributed to Khuller et al. (1999), we use C-ETC-S for the adaptation of PARTIALENUMERATION. The following corollaries follow immediately from Propositions 6.1 to 6.5.

Corollary 6.8. For online submodular maximization under a cardinality constraint, the expected cumulative $(1 - 1/e)$-regret of C-ETC-N is at most $\mathcal{O}\left(kn^{\frac{1}{3}}T^{\frac{2}{3}}\log (T)^{\frac{1}{3}}\right)$ for $T \geq \sqrt{2} n$.

Remark 6.9. This result improves upon the result of Nie et al. (2022) by a factor of $k^{\frac{1}{3}}$, despite our use of a generic framework.

Corollary 6.10. For online submodular maximization under a cardinality constraint, the expected cumulative $(1 - 1/e - \epsilon')$-regret of C-ETC-B is at most $\mathcal{O}\left(k^{\frac{2}{3}}n^{\frac{1}{3}}(1/\epsilon')^{\frac{1}{3}}(\log \frac{n}{\epsilon'})^{\frac{1}{3}}T^{\frac{2}{3}}\log (T)^{\frac{1}{3}}\right)$ for $T \geq \frac{\sqrt{2}n}{(2 - \epsilon')\epsilon'k}\log \frac{n}{\epsilon'}$.

Corollary 6.11. For online submodular maximization under a knapsack constraint, the expected cumulative $(1 - 1/e)$-regret of C-ETC-S is at most $\mathcal{O}\left(\beta^{\frac{2}{3}}\tilde{K}^{\frac{1}{3}}n^{\frac{4}{3}}T^{\frac{2}{3}}\log (T)^{\frac{1}{3}}\right)$ for $T\geq \frac{\sqrt{2}\tilde{K}n^{4}}{2 + \tilde{K} + \beta}$.

Corollary 6.12.
For online submodular maximization under a knapsack constraint, the expected cumulative $\frac{1}{2}$-regret of C-ETC-Y is at most $\mathcal{O}\left(\beta^{\frac{2}{3}}\tilde{K}^{\frac{1}{3}}n^{\frac{1}{3}}T^{\frac{2}{3}}\log (T)^{\frac{1}{3}}\right)$ for $T \geq \frac{2\sqrt{2}\tilde{K}n}{\frac{1}{2} + \tilde{K} + 2\beta}$.

Corollary 6.13. For online submodular maximization under a knapsack constraint, the expected cumulative $\frac{1}{2}(1 - \frac{1}{e})$-regret of C-ETC-K is at most $\mathcal{O}\left(\beta^{\frac{2}{3}}\tilde{K}^{\frac{1}{3}}n^{\frac{1}{3}}T^{\frac{2}{3}}\log (T)^{\frac{1}{3}}\right)$ for $T \geq \frac{2\sqrt{2}\tilde{K}n}{2 + \tilde{K} + \beta}$.

**Comparison with $\mathbf{OG^{o}}$:** Streeter and Golovin (2008) proposed and analyzed an algorithm for adversarial CMAB with submodular rewards, full-bandit feedback, and a knapsack constraint (only the expected cost of the (randomly selected) action is required to be under budget). We discuss this in more detail in Appendix H, here only highlighting a few key points. We also use this algorithm as a baseline in our experiments in Section 7. The authors adapted a simpler greedy algorithm than the one we adapt (Khuller et al., 1999), using an $\epsilon$-greedy exploration-type framework. We provide evidence in our experiments that their algorithm requires large horizons to learn. The offline algorithm they adapted achieves an approximation ratio of $(1 - 1/e)$ for budgets that exactly match the cost used up by the greedy solution, but otherwise does not achieve a constant approximation (Khuller et al., 1999).

**Storage and Per-Round Time Complexities:** C-ETC-Y and C-ETC-K have low storage complexity and per-round time complexity. During exploitation, only the indices of at most $\tilde{K}$ base arms are kept in memory, and no computation is needed.
During exploration, they only need to update the empirical mean for the current action at time $t$, which can be done in $\mathcal{O}(1)$ time. Each algorithm additionally stores the highest empirical density so far in the current iteration of the greedy routine and its associated base arm (C-ETC-K needs to store one more arm, and C-ETC-Y needs an additional $\mathcal{O}(\tilde{K})$ storage for the augmented set). Thus, C-ETC-Y and C-ETC-K have $\mathcal{O}(\tilde{K})$ storage complexity and $\mathcal{O}(1)$ per-round time complexity. For comparison, the algorithm proposed by (Streeter and Golovin, 2008) for an averaged knapsack constraint in the adversarial setting uses $\mathcal{O}(n\tilde{K})$ storage and $\mathcal{O}(n)$ per-round time. Some comments on lower bounds are given in Appendix E.

# 7. Experiments

In this section, we conduct experiments on real-world data with a Budgeted Influence Maximization (BIM) problem. We also conduct experiments on Song Recommendation (SR) in Appendix I. Both are applications of stochastic CMAB with submodular rewards under a knapsack constraint. There are three adaptations considered in Section 6 for the knapsack constraint. Since the time complexity of PARTIALENUMERATION is much larger than that of the other two offline algorithms we consider, C-ETC-S would need a horizon of at least $T \approx 10^{8}$ to finish exploration. For this reason, we do not consider C-ETC-S in the experiments. To our knowledge, our work is the first to consider these applications with only bandit feedback available.

**Baseline:** The only other algorithm designed for combinatorial MAB with general submodular rewards, under a knapsack constraint, and using full-bandit feedback is Online Greedy with the opaque feedback model $(\mathbf{OG}^{\mathrm{o}})$, proposed by Streeter and Golovin (2008) for the adversarial setting.
However, $\mathbf{OG}^{\mathrm{o}}$ only satisfies the knapsack constraint in expectation, while our algorithms C-ETC-K and C-ETC-Y satisfy a strict constraint (i.e., every action $A_{t}$ must be under budget). See Appendix D for more details about $\mathbf{OG}^{\mathrm{o}}$ and its implementation.

In Section 6, we used $N = \tilde{K} n$ as an upper bound on the number of function evaluations for both C-ETC-K and C-ETC-Y, where $n$ is the number of base arms and $\tilde{K}$ is an upper bound on the cardinality of any feasible set. When the time horizon $T$ is small, the exploration phase may not finish, because the formula optimizing $m$ (the number of plays for each action queried by $\mathcal{A}$) uses a loose bound on the exploitation time. When this is the case, we select the largest $m$ (closest to the formula) for which we can guarantee that exploration will finish. For details, see Appendix F.

We first conduct experiments for the application of budgeted influence maximization (BIM) on a portion of the Facebook network graph. BIM models the problem of identifying a low-cost subset (seed set) of nodes in a (known) social network that can influence the maximum number of nodes in the network. While there are prior works proposing algorithms for budgeted online influence maximization, the state of the art (e.g., Perrault et al. (2020)) presumes knowledge of the diffusion model (such as independent cascade) and, more importantly, extensive semi-bandit feedback on individual diffusions, such as which specific nodes became active or along which edges successful infections occurred, in order to estimate diffusion parameters. In social networks that protect user privacy, this information is not available.

Data Set Description and Experiment Details: The Facebook network dataset was introduced in (Leskovec and Mcauley, 2012).
To facilitate running multiple experiments for different horizons, we used the community detection method proposed by Blondel et al. (2008) to extract a community with 354 nodes and 2853 edges. We then made the network directed by replacing every undirected edge with two directed edges of opposite directions, yielding a directed network with 5706 edges. The diffusion process

![](images/bd711f53de25feda18bfd93b64e29970090041f6e00a607185fa75c8b8de4a8e.jpg)
(a)

![](images/799b6c18a5822eb48eeb14e97e7eea923511a09aeeb12d985aee0c83db8fa106.jpg)
(c)

![](images/36244660ac7df59d7c870bfa45d8a3eae1591c5c5aec7d454df219f7d7fb74ba.jpg)
(b)

![](images/b7f9acb89dee99eb1f8338bd519cd4a2080ca9a45abf8d9d5596543b5e509cd6.jpg)
(d)

Figure 1: Plots for the budgeted influence maximization (BIM) example. (a) and (b) compare cumulative regret as a function of the time horizon $T$. (c) and (d) show the moving average (window size 100) of instantaneous reward as a function of $t$. The gray dashed lines in (a) and (b) represent $y = aT^{2/3}$ for various values of $a$ for visual reference. The gray dashed lines in (c) and (d) represent the expected reward of the action chosen by an offline greedy algorithm.

is simulated using the independent cascade model (Kempe et al., 2003), where in each discrete step, an active node (that was inactive at the previous time step) independently attempts to infect each of its inactive neighbors. Following the existing work of Tang et al. (2015; 2018) and Bian et al. (2020), we set the probability of each edge $(u,v)$ to $1 / d_{\mathrm{in}}(v)$, where $d_{\mathrm{in}}(v)$ is the in-degree of node $v$. Moreover, we consider a user $u$ to be more influential if it has a higher out-degree $d_{\mathrm{out}}(u)$. To spend the budget more efficiently, our experiment only considers influential users: those with out-degrees above the 95th percentile (18 users).
Denote this set by $\mathcal{I}$; for a user $u \in \mathcal{I}$, the cost is defined as $c(u) = 0.01d_{\mathrm{out}}(u) + 1$, similar to (Wu et al., 2022). For each time horizon used, we ran each method ten times.

For this set of experiments, instead of cumulative $\frac{1}{2}$-regret, which requires knowing OPT, we compare the cumulative rewards achieved by C-ETC and $\mathrm{OG^o}$ against $Tf(S^{\mathrm{grad}})$, where $S^{\mathrm{grad}}$ is the solution returned by the offline $\frac{1}{2}$-approximation algorithm proposed by Yaroslavtsev et al. (2020). Since $Tf(S^{\mathrm{grad}}) \geq \frac{1}{2} Tf(\mathrm{OPT})$, $Tf(S^{\mathrm{grad}})$ is a more challenging reference value.

Results and Discussion: Figures 1a and 1b show average cumulative regret curves for C-ETC-K (in blue), C-ETC-Y (in orange), and $\mathrm{OG}^{\mathrm{o}}$ (in green) for different horizon $T$ values when the budget constraint $B$ is 6 and 8, respectively.

For $B = 8$, the turning point is $T = 21544$. Standard errors of the means are shown as error bars, but may be too small to see. Figures 1c and 1d show instantaneous rewards; there, standard errors of the means are shown as shaded areas. The peaks at the very beginning of the exploration phase correspond to the time steps at which the single most influential user is sampled.

C-ETC significantly outperforms $\mathrm{OG^o}$ for all time horizons and budgets considered. To evaluate the gap between the empirical performance and the theoretical guarantee, we estimated the slope for both methods on log-log scale plots. Over the horizons tested, $\mathrm{OG^o}$'s cumulative regret (averaged over ten runs) has a growth rate of 0.98. The growth rates of C-ETC-K for budgets 6 and 8 are 0.76 and 0.68, respectively. The growth rates of C-ETC-Y for budgets 6 and 8 are 0.75 and 0.69, respectively.
The slopes are close to the $2/3 \approx 0.67$ theoretical guarantee, and notably, the performance for larger $B$ is better.

# 8. Conclusions and Future Directions

In this paper, we provide a general framework for adapting discrete offline approximation algorithms for combinatorial optimization problems into sublinear $\alpha$-regret methods that only require bandit feedback. Through our proposed framework, we achieve $\mathcal{O}\left(T^{\frac{2}{3}}\log (T)^{\frac{1}{3}}\right)$ expected cumulative $\alpha$-regret dependence on the horizon $T$. Importantly, our approach only relies on the offline algorithms being robust to small errors in function evaluation. The offline algorithm can be treated as a black-box subroutine, making our framework easily applicable in practice. The results are demonstrated on multiple problems in constrained submodular optimization.

Our findings pave the way for further exploration and development of algorithms for similar problems, opening up new avenues for research in this area. Recently, Fourati et al. (2023) considered submodular maximization with bandit feedback and non-monotone rewards. Exploring this as a special case of our framework is left for future work. Further, finding regret guarantees for non-monotone submodular maximization subject to a knapsack constraint with bandit feedback remains open; the corresponding result with semi-bandit feedback has been studied in (Amanatidis et al., 2020). Finally, exploring product ranking optimization in online platforms and reserve price optimization in auctions (considered in (Niazadeh et al., 2021) for adversarial CMAB) as special cases of our framework will also be considered in future work.

# 9. Acknowledgements

This work was supported in part by the National Science Foundation under grants 2149588 and 2149617.

# References

Amanatidis, G., Fusco, F., Lazos, P., Leonardi, S., and Reiffenhäuser, R. (2020).
Fast adaptive non-monotone submodular maximization subject to a knapsack constraint. Advances in Neural Information Processing Systems, 33:16903-16915.
Arora, S., Hazan, E., and Kale, S. (2012). The multiplicative weights update method: a meta-algorithm and applications. Theory of Computing, 8(1):121-164.
Badanidiyuru, A. and Vondrak, J. (2014). Fast algorithms for maximizing submodular functions. In Proceedings of the Twenty-Fifth Annual ACM-SIAM Symposium on Discrete Algorithms, pages 1497-1514. SIAM.
Bertin-Mahieux, T., Ellis, D. P., Whitman, B., and Lamere, P. (2011). The million song dataset. In Proceedings of the 12th International Conference on Music Information Retrieval (ISMIR 2011).
Besson, L. and Kaufmann, E. (2018). What doubling tricks can and can't do for multi-armed bandits. ArXiv, abs/1803.06971.
Bian, S., Guo, Q., Wang, S., and Yu, J. X. (2020). Efficient algorithms for budgeted influence maximization on massive social networks. Proceedings of the VLDB Endowment, 13(9):1498-1510.
Blondel, V. D., Guillaume, J.-L., Lambiotte, R., and Lefebvre, E. (2008). Fast unfolding of communities in large networks. Journal of Statistical Mechanics: Theory and Experiment, 2008:10008.
Fourati, F., Aggarwal, V., Quinn, C., and Alouini, M.-S. (2023). Randomized greedy learning for non-monotone stochastic submodular maximization under full-bandit feedback. In International Conference on Artificial Intelligence and Statistics, pages 7455-7471. PMLR.
Golovin, D., Krause, A., and Streeter, M. (2014). Online submodular maximization under a matroid constraint with application to learning assignments. arXiv preprint arXiv:1407.1082.
Hiranandani, G., Singh, H., Gupta, P., Burhanuddin, I. A., Wen, Z., and Kveton, B. (2020). Cascading linear submodular bandits: Accounting for position bias and diversity in online learning to rank. In Adams, R. P.
and Gogate, V., editors, Proceedings of The 35th Uncertainty in Artificial Intelligence Conference, volume 115 of Proceedings of Machine Learning Research, pages 722-732. PMLR.
Kempe, D., Kleinberg, J., and Tardos, E. (2003). Maximizing the spread of influence through a social network. In Proceedings of the Ninth ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pages 137-146.
Khuller, S., Moss, A., and Naor, J. S. (1999). The budgeted maximum coverage problem. Information Processing Letters, 70(1):39-45.
Krause, A. and Guestrin, C. (2005). A note on the budgeted maximization on submodular functions. Technical Report CMU-CALD-05-103, Carnegie Mellon University.
Leskovec, J., Krause, A., Guestrin, C., Faloutsos, C., VanBriesen, J., and Glance, N. (2007). Cost-effective outbreak detection in networks. In Proceedings of the 13th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pages 420-429. Association for Computing Machinery.
Leskovec, J. and Mcauley, J. (2012). Learning to discover social circles in ego networks. In Advances in Neural Information Processing Systems, volume 25. Curran Associates, Inc.
Li, W., Feldman, M., Kazemi, E., and Karbasi, A. (2022). Submodular maximization in clean linear time. In Oh, A. H., Agarwal, A., Belgrave, D., and Cho, K., editors, Advances in Neural Information Processing Systems.
Lin, T., Li, J., and Chen, W. (2015). Stochastic online greedy learning with semi-bandit feedbacks. In Proceedings of the 29th International Conference on Neural Information Processing Systems, pages 352-360.
Nemhauser, G. L., Wolsey, L. A., and Fisher, M. L. (1978). An analysis of approximations for maximizing submodular set functions—I. Mathematical Programming, 14(1):265-294.
Nguyen, H. and Zheng, R. (2013). On budgeted influence maximization in social networks. IEEE Journal on Selected Areas in Communications, 31:1084-1094.
Niazadeh, R., Golrezaei, N., Wang, J.
R., Susan, F., and Badanidiyuru, A. (2021). Online learning via offline greedy algorithms: Applications in market design and optimization. In Proceedings of the 22nd ACM Conference on Economics and Computation, pages 737-738.
Nie, G., Agarwal, M., Umrawal, A. K., Aggarwal, V., and Quinn, C. J. (2022). An explore-then-commit algorithm for submodular maximization under full-bandit feedback. In Uncertainty in Artificial Intelligence, pages 1541-1551. PMLR.
Perrault, P., Healey, J., Wen, Z., and Valko, M. (2020). Budgeted online influence maximization. In III, H. D. and Singh, A., editors, Proceedings of the 37th International Conference on Machine Learning, volume 119 of Proceedings of Machine Learning Research, pages 7620-7631. PMLR.
Slivkins, A. (2019). Introduction to multi-armed bandits. Foundations and Trends® in Machine Learning, 12(1-2):1-286.
Streeter, M. and Golovin, D. (2008). An online algorithm for maximizing submodular functions. In Proceedings of the 21st International Conference on Neural Information Processing Systems, pages 1577-1584.
Sviridenko, M. (2004). A note on maximizing a submodular set function subject to a knapsack constraint. Operations Research Letters, 32:41-43.
Takemori, S., Sato, M., Sonoda, T., Singh, J., and Ohkuma, T. (2020a). Submodular bandit problem under multiple constraints. In Conference on Uncertainty in Artificial Intelligence, pages 191-200. PMLR.
Takemori, S., Sato, M., Sonoda, T., Singh, J., and Ohkuma, T. (2020b). Submodular bandit problem under multiple constraints. In Peters, J. and Sontag, D., editors, Proceedings of the 36th Conference on Uncertainty in Artificial Intelligence (UAI), volume 124 of Proceedings of Machine Learning Research, pages 191-200. PMLR.
Tang, J., Tang, X., Xiao, X., and Yuan, J. (2018). Online processing algorithms for influence maximization. In Proceedings of the 2018 International Conference on Management of Data, pages 991-1005.
Tang, Y., Shi, Y., and Xiao, X. (2015).
Influence maximization in near-linear time: A martingale approach. In Proceedings of the 2015 ACM SIGMOD International Conference on Management of Data, pages 1539-1554.
Wang, S., Yang, S., Xu, Z., and Truong, V. (2020). Fast Thompson sampling algorithm with cumulative oversampling: Application to budgeted influence maximization. CoRR, abs/2004.11963.
Wu, J., Gao, J., Zhu, H., and Zhang, Z. (2022). Budgeted influence maximization via boost simulated annealing in social networks. arXiv preprint arXiv:2203.11594.
Yaroslavtsev, G., Zhou, S., and Avdiukhin, D. (2020). "Bring your own greedy" + max: Near-optimal 1/2-approximations for submodular knapsack. In Chiappa, S. and Calandra, R., editors, Proceedings of the Twenty-Third International Conference on Artificial Intelligence and Statistics, volume 108 of Proceedings of Machine Learning Research, pages 3263-3274. PMLR.
Yu, B., Fang, M., and Tao, D. (2016). Linear submodular bandits with a knapsack constraint. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 30.
Yue, Y. and Guestrin, C. (2011). Linear submodular bandits and their application to diversified retrieval. Advances in Neural Information Processing Systems, 24.

# A. Proof for Regret of C-ETC

In this section, we prove Theorem 5.1 in Section 4 of the main paper. We restate the theorem: for the sequential decision making problem defined in Section 2 and $T \geq \frac{2\sqrt{2}N}{\delta}$, the expected cumulative $\alpha$-regret of C-ETC using an $(\alpha, \delta)$-robust approximation algorithm as a subroutine is at most $\mathcal{O}\left(\delta^{\frac{2}{3}}N^{\frac{1}{3}}T^{\frac{2}{3}}\log (T)^{\frac{1}{3}}\right)$, where $N$ upper-bounds the number of value oracle queries made by the offline algorithm $\mathcal{A}$.

# A.1. Overview and Notations

We separate the proof into two cases.
The first case is when the clean event $\mathcal{E}$ happens, which we show in Lemma A.3 occurs with high probability. Under the clean event, using the fact that the offline algorithm is an $(\alpha, \delta)$-robust approximation, the set $S$ chosen by C-ETC for the exploitation phase will nonetheless be near-optimal. The second case is when the complementary event happens, which occurs with low probability.

The proof structure analyzing a high-probability "clean event" where empirical estimates are sufficiently concentrated around their means is analogous to that for the unstructured non-combinatorial setting (see, for instance, Section 1.2 in (Slivkins, 2019)). However, unlike the ETC procedure for non-combinatorial MAB problems, C-ETC makes sequences of decisions during exploration. Furthermore, the combinatorial action space, the non-linearity of the reward function, and the lack of extra feedback (like marginal gains) make the problem challenging. Even in the special setting of deterministic rewards, the standard MAB problem becomes trivial (finding the largest of $n$ base arms), while the problems we consider are NP-hard.

Recall that for any (feasible) action $A$, $f_{t}(A)$ denotes the (random) reward at time $t$ for the agent taking that action, and $f(A)$ denotes the expected reward of action $A$. Let $\bar{f}_{t}(A)$ denote the empirical mean of rewards received from playing action $A$ up to and including time $t$. In the following, we drop the subscript $t$ from the empirical mean, writing $\bar{f}(A)$ when it is clear from context that action $A$ has been played $m$ times. Also, we use $A_{i}$, $i \in \{1, \dots, N\}$, to denote the $i$-th action the algorithm samples. We further denote by $T_{i}$, $i \in \{1, \dots, N\}$, the time step at which the sampling of the $i$-th action completes, i.e., when $A_{i}$ has been played $m$ times. For notational consistency, we also set $T_{0} = 0$ and $T_{N+1} = T$.

# A.2.
Probability of the Clean Event

Now we define events that are important in our analysis. Recall that for each action $A$ being explored, the $m$ rewards are i.i.d. with mean $f(A)$ and bounded in [0, 1]. Thus, we can bound the deviation of the (unbiased) empirical mean $\bar{f}(A_i)$ from the expected value $f(A_i)$ for each action played. Specifically, we can use a two-sided Hoeffding bound for bounded variables. Remark A.1. For convenience, we assume the reward function is bounded in [0, 1], but the result generalizes to the case where the deviation of the realized reward from the expected reward has a light-tailed distribution (e.g., sub-Gaussian).

Lemma A.2 (Hoeffding's inequality). Let $X_{1}, \dots, X_{n}$ be independent random variables bounded in the interval [0, 1], and let $\bar{X}$ denote their empirical mean. Then for any $\epsilon > 0$,

$$
\mathbb {P} \left(\left| \bar {X} - \mathbb {E} [ \bar {X} ] \right| \geq \epsilon\right) \leq 2 \exp \left(- 2 n \epsilon^ {2}\right). \tag {13}
$$

Under C-ETC, each sampled action is played the same number of times, denoted by $m$, so we consider equal-sized confidence radii $\mathrm{rad} \coloneqq \sqrt{\log(T) / 2m}$ for all the actions played during exploration.

We next analyze the probability of the event that the empirical means of all actions played during exploration are concentrated around their statistical means within radius $\mathrm{rad}$. Denote the event that the $i$-th action played has its empirical mean concentrated around its statistical mean by $\mathcal{E}_i$,

$$
\mathcal {E} _ {i} := \left\{\left| \bar {f} (A _ {i}) - f (A _ {i}) \right| < \operatorname {rad} \right\}, \quad i \in \{1, \dots , N \}.
\tag {14}
$$

Define the clean event $\mathcal{E}$ to be the event that the empirical means of all actions played in the exploration phase are within $\mathrm{rad}$ of their corresponding statistical means:

$$
\mathcal {E} := \mathcal {E} _ {1} \cap \dots \cap \mathcal {E} _ {N}. \tag {15}
$$

Lemma A.3. The probability of the clean event $\mathcal{E}$ (15) satisfies:

$$
\mathbb {P} (\mathcal {E}) \geq 1 - \frac {2 N}{T}.
$$

Proof. Applying the Hoeffding bound (Lemma A.2) to the empirical mean $\bar{f}(A_i)$ of the $m$ rewards for action $A_i$ and choosing $\epsilon = \mathrm{rad} = \sqrt{\log(T) / 2m}$ gives

$$
\begin{array}{l} \mathbb {P} \left(\bar {\mathcal {E}} _ {i}\right) = \mathbb {P} \left[ | \bar {f} (A _ {i}) - f (A _ {i}) | \geq \mathrm {rad} \right] \\ \leq 2 \exp (- 2 m \, \mathrm {rad} ^ {2}) \\ = 2 \exp (- 2 m (\log (T) / 2 m)) \\ = 2 \exp (- \log (T)) \\ = \frac {2}{T}. \tag {16} \\ \end{array}
$$

Then, we can bound the probability of the clean event:

$$
\begin{array}{l} \mathbb {P} (\mathcal {E}) = \mathbb {P} \left(\mathcal {E} _ {1} \cap \dots \cap \mathcal {E} _ {N}\right) \\ = 1 - \mathbb {P} \left(\bar {\mathcal {E}} _ {1} \cup \dots \cup \bar {\mathcal {E}} _ {N}\right) \quad (\text {De Morgan's law}) \\ \geq 1 - \sum_ {i = 1} ^ {N} \mathbb {P} \left(\bar {\mathcal {E}} _ {i}\right) \quad (\text {union bound}) \\ \geq 1 - \frac {2 N}{T}. \quad (\text {using (16)}) \\ \end{array}
$$

# A.3. Near Optimality of the Final $S$ (Exploitation Phase Action)

In Lemma A.3, we showed that the clean event $\mathcal{E}$ happens with high probability. When the clean event $\mathcal{E}$ happens, we have $|\bar{f}(A) - f(A)| \leq \mathrm{rad}$ for all evaluated actions $A$. For an online algorithm (with output $S$) using an $(\alpha, \delta)$-robust approximation as a subroutine, we have

$$
f (S) \geq \alpha f (\mathrm {O P T}) - \delta \cdot \mathrm {rad}. \tag {17}
$$

# A.4.
Final Regret

Now we are ready to bound the regret of C-ETC (Theorem 5.1 in Section 4 of the main paper).

# CASE 1: CLEAN EVENT $\mathcal{E}$ HAPPENS

In the first case, we analyze the expected regret conditioned on the clean event $\mathcal{E}$ happening. In this section, all expectations will be conditioned on $\mathcal{E}$, but to simplify notation we will write $\mathbb{E}[\cdot]$ instead of $\mathbb{E}[\cdot|\mathcal{E}]$ in some cases.

First, we break the expected $\alpha$-regret conditioned on $\mathcal{E}$ into two parts, one for the exploration iterations and one for the exploitation iteration. Although the number of actions taken per iteration and the number of iterations of the greedy routine are not known a priori, we can upper bound the duration. Also recall that $f_{t}(A_{t})$ is the random reward for taking action $A_{t}$, which is itself random, depending on empirical means of actions in earlier iterations.

$$
\begin{array}{l} \mathbb {E} [ \mathcal {R} (T) | \mathcal {E} ] = \alpha T f (\mathrm {O P T}) - \sum_ {t = 1} ^ {T} \mathbb {E} \left[ f _ {t} \left(A _ {t}\right) \right] \\ = \alpha T f (\mathrm {O P T}) - \sum_ {t = 1} ^ {T} \mathbb {E} [ \mathbb {E} [ f _ {t} (A _ {t}) | A _ {t} ] ] \quad \text {(law of total expectation)} \\ = \alpha T f (\mathrm {O P T}) - \sum_ {t = 1} ^ {T} \mathbb {E} [ f (A _ {t}) ] \quad (f (\cdot) \text { defined as expected reward}) \\ = \sum_ {t = 1} ^ {T} \left(\alpha f (\mathrm {O P T}) - \mathbb {E} [ f (A _ {t}) ]\right) \quad \text {(rearranging)} \\ = \underbrace {\sum_ {i = 1} ^ {N} m \left(\alpha f (\mathrm {O P T}) - \mathbb {E} [ f (A _ {i}) ]\right)} _ {\text {Exploration phase}} + \underbrace {\sum_ {t = T _ {N} + 1} ^ {T} \left(\alpha f (\mathrm {O P T}) - \mathbb {E} [ f (A _ {t}) ]\right)} _ {\text {Exploitation phase}} \\ = \sum_ {i = 1} ^ {N} m (\alpha f (\mathrm {O P T}) - \mathbb {E} [ f (A _ {i}) ]) + \sum_ {t = T _ {N} + 1} ^ {T} (\alpha f (\mathrm {O P T}) - \mathbb {E} [ f (S) ]). \tag {18} \\ \end{array}
$$

Case 1 (clean event): Bounding exploration regret: We separately bound the regret incurred from exploration and exploitation. We begin with the regret from exploration:

$$
\begin{array}{l} \sum_ {i = 1} ^ {N} m \left(\alpha f (\mathrm {O P T}) - \mathbb {E} [ f (A _ {i}) ]\right) \\ \leq \sum_ {i = 1} ^ {N} m (\alpha - 0) \quad \text {(rewards are bounded in [0, 1])} \\ \leq N m. \tag {19} \\ \end{array}
$$

Case 1 (clean event): Bounding exploitation regret: We next bound the regret incurred during the exploitation iteration. Since the set $S$ used during exploitation is a random variable, we can take the expectation of (17) (conditioned on event $\mathcal{E}$) to bound the expected instantaneous regret for each time step of the exploitation iteration:

$$
\alpha f (\mathrm {O P T}) - \mathbb {E} [ f (S) ] \leq \delta \, \mathrm {rad}. \tag {20}
$$

Using a loose bound for the duration of the exploitation iteration, $T - T_{N} \leq T$,

$$
\begin{array}{l} \sum_ {t = T _ {N} + 1} ^ {T} (\alpha f (\mathrm {O P T}) - \mathbb {E} [ f (S) ]) \leq \sum_ {t = T _ {N} + 1} ^ {T} \delta \, \mathrm {rad} \quad \text {(using (20))} \\ \leq T \delta \, \mathrm {rad}. \tag {21} \\ \end{array}
$$

Case 1 (clean event): Bounding total regret: The expected cumulative regret (18) can then be bounded as

$$
\begin{array}{l} \mathbb {E} [ \mathcal {R} (T) | \mathcal {E} ] = \sum_ {i = 1} ^ {N} m (\alpha f (\mathrm {O P T}) - \mathbb {E} [ f (A _ {i}) ]) + \sum_ {t = T _ {N} + 1} ^ {T} (\alpha f (\mathrm {O P T}) - \mathbb {E} [ f (S) ]) \quad \text {(copying (18))} \\ \leq N m + T \delta \, \mathrm {rad}. \quad (\text {using (19), (21)}) \\ \end{array}
$$

Plugging in the formula for the confidence radius $\mathrm{rad} = \sqrt{\log(T) / 2m}$, we have

$$
\mathbb {E} [ \mathcal {R} (T) | \mathcal {E} ] \leq N m + T
\delta \sqrt {\log (T) / 2 m} .
$$

We want to optimize $m$, the number of times each action is played. Denoting the regret bound as a function of $m$,

$$
g (m) = N m + T \delta \sqrt {\log (T) / 2 m}, \tag {22}
$$

we have

$$
g ^ {\prime} (m) = N - \frac {1}{2} T \delta \sqrt {\log (T) / 2} \, m ^ {- 3 / 2}. \tag {23}
$$

Setting $g'(m) = 0$ and solving for $m$,

$$
m ^ {*} = \frac {\delta^ {2 / 3} T ^ {2 / 3} \log (T) ^ {1 / 3}}{2 N ^ {2 / 3}}. \tag {24}
$$

We next check the second derivative,

$$
g ^ {\prime \prime} (m) = \frac {3}{4} \delta T \sqrt {\log (T) / 2} \, m ^ {- 5 / 2}. \tag {25}
$$

For positive values of $m$, $g''(m) > 0$, so $g(m)$ reaches its minimum at (24).

Since $m$ is the number of times each action is played, we (trivially) need $m \geq 1$ and $m$ to be an integer. We choose

$$
m ^ {\dagger} = \left\lceil \frac {\delta^ {2 / 3} T ^ {2 / 3} \log (T) ^ {1 / 3}}{2 N ^ {2 / 3}} \right\rceil . \tag {26}
$$

Since from (25) we have $g''(m) > 0$ for positive $m$, $g(m^*) \leq g(m^\dagger)$. For $T \geq \frac{2\sqrt{2}N}{\delta}$, we have $m^* \geq 1$.
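To make the choice of $m$ concrete, here is a minimal sketch (our own illustrative code, with hypothetical function names, not the authors' implementation) that computes $m^\dagger$ from (26) and evaluates the bound $g(m)$ from (22):

```python
import math

def explore_count(delta, N, T):
    """m-dagger from (26): the ceiling of the minimizer m* of
    g(m) = N*m + T*delta*sqrt(log(T)/(2*m)), and at least 1."""
    m_star = delta ** (2 / 3) * T ** (2 / 3) * math.log(T) ** (1 / 3) / (2 * N ** (2 / 3))
    return max(1, math.ceil(m_star))

def g(m, delta, N, T):
    """The regret bound (22) as a function of the per-action play count m."""
    return N * m + T * delta * math.sqrt(math.log(T) / (2 * m))
```

Because $g$ is convex in $m$ (its second derivative (25) is positive), the integer $m^\dagger$ returned above sits just past the unconstrained minimizer, and for $T \geq 2\sqrt{2}N/\delta$ the resulting value of $g$ stays within the $3\delta^{2/3}N^{1/3}T^{2/3}\log(T)^{1/3}$ bound of (27).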
Plugging (26) back into (22),

$$
\begin{array}{l} \mathbb {E} [ \mathcal {R} (T) | \mathcal {E} ] \leq m ^ {\dagger} N + T \delta \sqrt {\log (T) / 2 m ^ {\dagger}} \quad \text {(using (22))} \\ = \lceil m ^ {*} \rceil N + T \delta \sqrt {\log (T) / 2 \lceil m ^ {*} \rceil} \\ \leq \lceil m ^ {*} \rceil N + T \delta \sqrt {\log (T) / 2 m ^ {*}} \quad (\text {since } \lceil m ^ {*} \rceil \geq m ^ {*}) \\ \leq 2 m ^ {*} N + T \delta \sqrt {\log (T) / 2 m ^ {*}} \quad (\text {since } m ^ {*} \geq 1, \lceil m ^ {*} \rceil \leq 2 m ^ {*}) \\ = 2 \frac {\delta^ {2 / 3} T ^ {2 / 3} \log (T) ^ {1 / 3}}{2 N ^ {2 / 3}} N + T \delta \sqrt {\log (T) / 2} \left(\frac {\delta^ {2 / 3} T ^ {2 / 3} \log (T) ^ {1 / 3}}{2 N ^ {2 / 3}}\right) ^ {- 1 / 2} \quad (\text {using (24)}) \\ \leq 3 \delta^ {2 / 3} N ^ {1 / 3} T ^ {2 / 3} \log (T) ^ {1 / 3} \tag {27} \\ = \mathcal {O} \left(\delta^ {\frac {2}{3}} N ^ {\frac {1}{3}} T ^ {\frac {2}{3}} \log (T) ^ {\frac {1}{3}}\right). \\ \end{array}
$$

In conclusion, the expected $\alpha$-regret of C-ETC using an $(\alpha, \delta)$-robust approximation as a subroutine is upper bounded by (27) when the clean event $\mathcal{E}$ happens.

# CASE 2: CLEAN EVENT $\mathcal{E}$ DOES NOT HAPPEN

We next derive an upper bound on the expected $\alpha$-regret for the case that the event $\mathcal{E}$ does not happen. By Lemma A.3,

$$
\mathbb {P} (\bar {\mathcal {E}}) = 1 - \mathbb {P} (\mathcal {E}) \leq \frac {2 N}{T}.
$$

Since the reward function $f_{t}(\cdot)$ is upper bounded by 1, the expected $\alpha$-regret incurred under $\bar{\mathcal{E}}$ for a horizon of $T$ is at most $T$,

$$
\mathbb {E} [ \mathcal {R} (T) | \bar {\mathcal {E}} ] \leq T.
\tag {28}
$$

# PUTTING IT ALL TOGETHER

Combining Cases 1 and 2, we have

$$
\begin{array}{l} \mathbb {E} [ \mathcal {R} (T) ] = \mathbb {E} [ \mathcal {R} (T) | \mathcal {E} ] \cdot \mathbb {P} (\mathcal {E}) + \mathbb {E} [ \mathcal {R} (T) | \bar {\mathcal {E}} ] \cdot \mathbb {P} (\bar {\mathcal {E}}) \quad \text {(law of total expectation)} \\ \leq 3 \delta^ {2 / 3} N ^ {1 / 3} T ^ {2 / 3} \log (T) ^ {1 / 3} \cdot 1 + T \cdot \frac {2 N}{T} \quad (\text {using (27), Lemma A.3, and (28)}) \\ = \mathcal {O} \left(\delta^ {\frac {2}{3}} N ^ {\frac {1}{3}} T ^ {\frac {2}{3}} \log (T) ^ {\frac {1}{3}}\right). \\ \end{array}
$$

This concludes the proof.

# B. Offline Approximation Algorithms - Overview

We give a brief overview of the offline approximation algorithms whose $(\alpha, \delta)$-robustness we will analyze.

For a $k$-cardinality constraint, the greedy algorithm GREEDY proposed in Nemhauser et al. (1978) starts from an empty set $G \gets \emptyset$. It then repeatedly adds the element with the highest marginal gain $f(e|G)$ until the cardinality $|G|$ reaches $k$. THRESHOLDGREEDY, proposed in Badanidiyuru and Vondrak (2014), considers a sequence of decreasing thresholds $\{\tau = d; \tau \geq \frac{\epsilon'}{n} d; \tau \gets (1 - \epsilon')\tau\}$, where $d = \max_{e \in \Omega} f(e)$. Starting from the empty set $G = \emptyset$, the algorithm includes any element $e \notin G$ such that $f(e|G) \geq \tau$ as long as the cardinality is smaller than $k$, then repeats with the next lower threshold. Badanidiyuru and Vondrak (2014) showed that THRESHOLDGREEDY achieves a $(1 - 1/e - \epsilon')$-approximation.

For a knapsack constraint, several algorithms run the following greedy subroutine, which we also refer to as GREEDY (cardinality is a special case of this routine with budget $k$ and unit costs, so we keep the same name without risk of confusion). Start with the empty set $G \gets \emptyset$.
Repeatedly add the element $e$ with the highest marginal density $\rho(e|G)$ that fits into the budget. Let $G_i$ denote the set selected by GREEDY that has cardinality $i$, with constituent elements $G_i = \{g_1, \dots, g_i\}$. Let $L$ denote the cardinality of the final greedy set (i.e., when no more elements remain that fit within the budget), so $G_L$ is the output of GREEDY. Note that $L$ can only be bounded ahead of time—there could be maximal subsets (to which no other elements can be added without violating the budget) of different cardinalities.

GREEDY can have an unbounded approximation ratio under a knapsack constraint (Khuller et al., 1999). Khuller et al. (1999) proposed GREEDY+, which outputs the better of the best individual element $a^* \in \arg \max_{e \in \Omega} f(e)$ and the output of GREEDY, i.e., $\arg \max_{S \in \{G_L, a^*\}} f(S)$. Khuller et al. (1999) proved that GREEDY+ achieves a $\frac{1}{2} (1 - \frac{1}{e})$ approximation ratio. Sviridenko (2004) and Khuller et al. (1999) proposed PARTIALENUMERATION. It first enumerates all sets of cardinality up to three. For each enumerated set, it builds the rest of the solution greedily, then outputs the set with the largest value among all evaluated sets. They showed that PARTIALENUMERATION achieves a $(1 - 1/e)$ approximation ratio.

GREEDY+MAX generalizes GREEDY+ by augmenting each set in the nested sequence $\{G_i\}_{i=0}^{L}$ produced by GREEDY with one more element. For $0 \leq i \leq L-1$, define $G_i' \gets G_i \cup \arg \max_{e \in \Omega: c(G_i) + c(e) \leq B} f(G_i \cup e)$. By construction, $G_0' = \{a^*\}$, the best individual element. For $i = L$, $G_L' \gets G_L$. GREEDY+MAX then outputs the best set in the augmented sequence, $\arg \max_{S \in \{G_0', \dots, G_L'\}} f(S)$. Yaroslavtsev et al. (2020) proposed GREEDY+MAX and proved it achieves an approximation ratio of $\frac{1}{2}$.
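As a concrete illustration of the routine just described, here is a minimal sketch of GREEDY+MAX with an exact value oracle (our own illustrative code, not the implementation of Yaroslavtsev et al. (2020); the coverage instance used to exercise it is made up):

```python
def greedy_plus_max(f, items, cost, budget):
    """GREEDY+MAX sketch: run the density greedy to get nested prefixes
    G_0, ..., G_L, augment each prefix with its best feasible single
    addition, and return the best augmented set."""
    G, remaining = [], list(items)
    prefixes = [tuple(G)]  # G_0 = empty set
    while True:
        used = sum(cost(g) for g in G)
        feasible = [e for e in remaining if used + cost(e) <= budget]
        if not feasible:
            break
        # element with the highest marginal density rho(e | G)
        e = max(feasible, key=lambda e: (f(G + [e]) - f(G)) / cost(e))
        G.append(e)
        remaining.remove(e)
        prefixes.append(tuple(G))
    best = None
    for P in prefixes:
        used = sum(cost(g) for g in P)
        adds = [e for e in items if e not in P and used + cost(e) <= budget]
        # G_i' = G_i plus its best single feasible addition (G_L' = G_L)
        aug = list(P) + ([max(adds, key=lambda e: f(list(P) + [e]))] if adds else [])
        if best is None or f(aug) > f(best):
            best = aug
    return best
```

On small instances one can check by brute force that the returned set is feasible and, with an exact oracle on a monotone submodular function, attains at least $\frac{1}{2} f(\mathrm{OPT})$, matching the guarantee.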
A bound on the number of value oracle calls will be important in adapting offline methods. Denote $\beta \coloneqq B / c_{\mathrm{min}}$ and $\tilde{K} \coloneqq \min \{n, \beta\}$, an upper bound on the number of items in any feasible set. We note that while PARTIALENUMERATION uses $\mathcal{O}(\tilde{K} n^4)$ function evaluations, both GREEDY+MAX and GREEDY+ use $\mathcal{O}(\tilde{K} n)$ oracle calls, the same as GREEDY. We use $N = \tilde{K} n$ in the analysis for GREEDY+MAX and GREEDY+.

# C. Proof for Robustness of Offline Algorithms

In this section, we prove the $(\alpha, \delta)$-robustness of the algorithms considered in Section 6 of the main paper.

# C.1. Notation

We first review the notation used in the analysis. Recall that we are only able to evaluate a surrogate function $\hat{f}$ such that $|\hat{f}(S) - f(S)| \leq \epsilon$ for any feasible set $S$ and some $\epsilon > 0$. We further denote $\hat{f}(e|S) = \hat{f}(S \cup e) - \hat{f}(S)$ and $\hat{\rho}(e|S) = \frac{\hat{f}(S \cup e) - \hat{f}(S)}{c(e)}$. Let $G_i$ denote the set selected by the basic GREEDY (based on the surrogate function $\hat{f}$) described in Section 3 after the $i$-th item, with $G_{i} = \{g_{1},\dots ,g_{i}\}$ listed in the order in which the items were selected. Without loss of generality, define $G_0 = \emptyset$ and $f(G_0) = \hat{f} (G_0) = 0$. Denote by $c_{\mathrm{min}} = \min_{e\in \Omega}c(e)$ the lowest individual item cost. Let $\beta = B / c_{\mathrm{min}}$ and $\tilde{K} = \min \{n,\beta \}$, an upper bound on the number of items in any feasible set. Since all selected actions should be feasible, for ease of notation we omit that condition throughout the proof; for example, we write $\arg \max_{e\in \Omega \backslash A}f(e|A)$ as shorthand for $\arg \max_{e:\, e\in \Omega \backslash A,\, A\cup e\in D}f(e|A)$. Let $S$ be the set returned by the modified algorithm in the corresponding context.

# C.2.
Robustness of Offline Methods for Submodular Maximization under Cardinality Constraint

# C.2.1. GREEDY

We consider the original greedy algorithm GREEDY proposed in Nemhauser et al. (1978), which gives a $(1 - \frac{1}{e})$-approximation guarantee for submodular maximization under a $k$-cardinality constraint. To restate Proposition 6.1 in the main paper, GREEDY is a $(1 - \frac{1}{e}, 2k)$-robust approximation algorithm for submodular maximization under a $k$-cardinality constraint. The result follows from Corollary 4.3 of Nie et al. (2022), part of the regret analysis for a CMAB adaptation of GREEDY.

# C.2.2. THRESHOLDGREEDY

We next consider the threshold greedy algorithm THRESHOLDGREEDY proposed in Badanidiyuru and Vondrak (2014), which gives a $(1 - \frac{1}{e} - \epsilon')$-approximation guarantee for submodular maximization under a $k$-cardinality constraint, where $\epsilon'$ is a user-specified parameter that balances accuracy and run time. Restating Proposition 6.2 in the main paper, THRESHOLDGREEDY is a $(1 - \frac{1}{e} - \epsilon', 2(2 - \epsilon')k)$-robust approximation algorithm for submodular maximization under a $k$-cardinality constraint.

Proof. From the assumption on the surrogate function $\hat{f}$, we know

$$
f(e|S) - 2\epsilon \leq \hat{f}(e|S) \leq f(e|S) + 2\epsilon
$$

for any $e \in \Omega \setminus S$ and $S \subseteq \Omega$. Now assume the next chosen element is $a$ and the current partial solution is $S$. On the one hand, we have

$$
\hat{f}(a|S) \geq w \Longrightarrow f(a|S) \geq w - 2\epsilon, \tag{29}
$$

and on the other hand, for every $e \in \mathrm{OPT} \setminus S$,

$$
\hat{f}(e|S) \leq \frac{w}{1 - \epsilon'} \Longrightarrow f(e|S) \leq \frac{w}{1 - \epsilon'} + 2\epsilon.
\tag{30}
$$

Combining (29) and (30), we have for any $e \in \mathrm{OPT} \setminus S$:

$$
f(a|S) + 2\epsilon \geq \left( f(e|S) - 2\epsilon \right) \left( 1 - \epsilon' \right) \Longrightarrow f(a|S) \geq \left( 1 - \epsilon' \right) f(e|S) - 2\left( 2 - \epsilon' \right)\epsilon. \tag{31}
$$

Taking an average over all $e \in \mathrm{OPT} \setminus S$,

$$
\begin{array}{l}
f(a|S) \geq \frac{1 - \epsilon'}{|\mathrm{OPT} \setminus S|} \sum_{e \in \mathrm{OPT} \setminus S} f(e|S) - 2(2 - \epsilon')\epsilon \\
\geq \frac{1 - \epsilon'}{k} \sum_{e \in \mathrm{OPT} \setminus S} f(e|S) - 2\left( 2 - \epsilon' \right)\epsilon. \tag{32}
\end{array}
$$

Now suppose that after $i \in [k-1]$ steps we have a partial solution $S_i = \{a_1, \dots, a_i\}$. By (32), we have

$$
\begin{array}{l}
f(a_{i+1}|S_i) \geq \frac{1 - \epsilon'}{k} \sum_{e \in \mathrm{OPT} \setminus S_i} f(e|S_i) - 2(2 - \epsilon')\epsilon \\
\geq \frac{1 - \epsilon'}{k} f(\mathrm{OPT} \mid S_i) - 2(2 - \epsilon')\epsilon \quad \text{(submodularity)} \\
\geq \frac{1 - \epsilon'}{k} \left( f(\mathrm{OPT}) - f(S_i) \right) - 2(2 - \epsilon')\epsilon, \quad \text{(monotonicity)}
\end{array}
$$

and hence for $i \in [k-1]$,

$$
f(S_{i+1}) - f(S_i) = f(a_{i+1} \mid S_i) \geq \frac{1 - \epsilon'}{k} \left( f(\mathrm{OPT}) - f(S_i) \right) - 2\left( 2 - \epsilon' \right)\epsilon.
\tag{33}
$$

Using (33) as the induction step (the base case follows from (32) with $S_0 = \emptyset$), a straightforward induction, whose details we omit, shows that for $i \in [k-1]$,

$$
f(S_{i+1}) \geq \left[ 1 - \left( 1 - \frac{1 - \epsilon'}{k} \right)^{i+1} \right] f(\mathrm{OPT}) - 2(i+1)(2 - \epsilon')\epsilon,
$$

and plugging in $i = k - 1$ we get

$$
\begin{array}{l}
f(S_k) \geq \left[ 1 - \left( 1 - \frac{1 - \epsilon'}{k} \right)^k \right] f(\mathrm{OPT}) - 2k(2 - \epsilon')\epsilon \\
\geq \left( 1 - e^{-(1 - \epsilon')} \right) f(\mathrm{OPT}) - 2k(2 - \epsilon')\epsilon \\
\geq \left( 1 - 1/e - \epsilon' \right) f(\mathrm{OPT}) - 2k(2 - \epsilon')\epsilon.
\end{array}
$$

We finish the proof by observing that $S_k$ is the output.

# C.3. Proof for Robustness of GREEDY+MAX

In this section, we give a detailed proof of Proposition 6.4 in Section 6 of the main paper. Recall the statement: GREEDY+MAX is a $(\frac{1}{2}, \frac{1}{2} + \tilde{K} + 2\beta)$-robust approximation algorithm for the submodular maximization problem under a knapsack constraint.

Let $o_1 \in \arg \max_{e: e \in \mathrm{OPT}} c(e)$ denote the most expensive element in OPT. During the $i$th iteration of the greedy process, having previously selected the set $G_{i-1}$ with $i - 1$ elements, the algorithm selects the element $g_i$ with the highest marginal density (based on the surrogate function $\hat{f}$) among feasible elements,

$$
g_i = \underset{e: e \in \Omega \backslash G_{i-1}}{\arg \max} \hat{\rho}(e | G_{i-1}). \tag{34}
$$

Inspired by the proof techniques in Yaroslavtsev et al. (2020), we consider the last item added by the greedy process (based on the surrogate function $\hat{f}$) before the cost of the greedy solution exceeds $B - c(o_1)$.
Let $G_\ell$ denote the largest greedy set whose cost is within $B - c(o_1)$, that is, $c(G_\ell) \leq B - c(o_1) < c(G_{\ell+1})$. Let $a_i$ denote the element selected to augment the greedy solution $G_i$, i.e., $a_i = \arg \max_{e \in \Omega \setminus G_i} \hat{f}(e|G_i)$, and let $S_i$ denote the augmented set at the $i$th iteration. Before proving the theorem, we show Lemma 6.7 in Section 6 of the main paper: for $i \in \{0, 1, \dots, \ell\}$, the following inequality holds:

$$
\hat{f}\left( G_i \cup o_1 \right) + \max \{0, \hat{\rho}\left( g_{i+1} \mid G_i \right)\} (B - c(o_1)) \geq f(\mathrm{OPT}) - (2\tilde{K} - 1)\epsilon.
$$

Proof. Recall that from the definition of $\hat{f}$, we have $|\hat{f}(S) - f(S)| \leq \epsilon$ for any evaluated set $S$ and some $\epsilon > 0$. Consequently, for any $i \in \{0, 1, \dots, \ell\}$,

$$
\left| \hat{f}(G_i) - f(G_i) \right| \leq \epsilon. \tag{35}
$$

Now we evaluate the set $G_i \cup o_1$.

- Case 1: If $o_1$ has already been added, $o_1 \in G_i$, then

$$
\left| \hat{f}(G_i \cup o_1) - f(G_i \cup o_1) \right| = \left| \hat{f}(G_i) - f(G_i) \right| \leq \epsilon.
$$

- Case 2: If $o_1 \notin G_i$, then $\hat{f}(G_i \cup o_1)$ is evaluated in iteration $i+1$. Iteration $i+1$ exists because for any $i \in \{0, 1, \dots, \ell\}$, at most $B - c(o_1)$ of the budget has been spent, so at least $o_1$ still fits within the remaining budget, and $G_i \cup o_1$ will be evaluated in iteration $i+1$. In this case, we still have

$$
\left| \hat{f}(G_i \cup o_1) - f(G_i \cup o_1) \right| \leq \epsilon.
$$

Combining these two cases, we have

$$
\left| \hat{f}(G_i \cup o_1) - f(G_i \cup o_1) \right| \leq \epsilon. \tag{36}
$$

Also, for any action evaluated in iteration $i+1$, namely the actions $\{G_i \cup e \mid e \in \Omega \setminus G_i \text{ and } c(e) + c(G_i) \leq B\}$, we have

$$
\begin{array}{l}
\rho(e|G_i) = \frac{f(G_i \cup e) - f(G_i)}{c(e)} \\
\leq \frac{\hat{f}(G_i \cup e) - \hat{f}(G_i)}{c(e)} + \frac{2\epsilon}{c(e)} \\
= \hat{\rho}(e|G_i) + \frac{2\epsilon}{c(e)}. \tag{37}
\end{array}
$$

Then we have

$$
\begin{array}{l}
f(\mathrm{OPT}) \leq f(G_i \cup \mathrm{OPT}) \quad (\text{monotonicity of } f) \\
\leq f(G_i \cup o_1) + f(\mathrm{OPT} \backslash (G_i \cup o_1) \mid G_i \cup o_1) \\
\leq f(G_i \cup o_1) + \sum_{e \in \mathrm{OPT} \backslash (G_i \cup o_1)} f(e \mid G_i \cup o_1) \quad (\text{submodularity of } f) \\
\leq \hat{f}(G_i \cup o_1) + \epsilon + \sum_{e \in \mathrm{OPT} \backslash (G_i \cup o_1)} c(e)\, \rho(e \mid G_i \cup o_1), \tag{38}
\end{array}
$$

where (38) uses (36).

Since we picked iteration $i$ such that $c(G_i) \leq B - c(o_1)$, all items in $\mathrm{OPT} \backslash (G_i \cup o_1)$ still fit, as $o_1$ is the largest item in OPT. Since the greedy algorithm always selects the item with the largest marginal density with respect to the surrogate function $\hat{f}$, $g_{i+1} = \arg \max_{e \in \Omega \backslash G_i} \hat{\rho}(e|G_i)$, we thus have

$$
\hat{\rho}\left( g_{i+1} \mid G_i \right) = \max_{e \in \Omega \backslash G_i} \hat{\rho}(e \mid G_i) \geq \max_{e \in \Omega \backslash (G_i \cup o_1)} \hat{\rho}(e \mid G_i).
\tag{39}
$$

Hence, continuing from (38),

$$
\begin{array}{l}
f(\mathrm{OPT}) \leq \hat{f}(G_i \cup o_1) + \epsilon + \sum_{e \in \mathrm{OPT} \backslash (G_i \cup o_1)} c(e)\, \rho(e \mid G_i \cup o_1) \\
\leq \hat{f}(G_i \cup o_1) + \epsilon + \sum_{e \in \mathrm{OPT} \backslash (G_i \cup o_1)} c(e)\, \rho(e \mid G_i) \quad \text{(submodularity)} \\
\leq \hat{f}(G_i \cup o_1) + \epsilon + \sum_{e \in \mathrm{OPT} \backslash (G_i \cup o_1)} c(e) \left( \hat{\rho}(e \mid G_i) + \frac{2\epsilon}{c(e)} \right) \quad \text{(using (37))} \\
= \hat{f}(G_i \cup o_1) + \epsilon + \sum_{e \in \mathrm{OPT} \backslash (G_i \cup o_1)} c(e)\, \hat{\rho}(e \mid G_i) + 2\epsilon\, |\mathrm{OPT} \backslash (G_i \cup o_1)| \\
\leq \hat{f}(G_i \cup o_1) + \epsilon + \hat{\rho}(g_{i+1} \mid G_i) \sum_{e \in \mathrm{OPT} \backslash (G_i \cup o_1)} c(e) + 2\epsilon\, |\mathrm{OPT} \backslash (G_i \cup o_1)| \quad \text{(using (39))} \\
= \hat{f}(G_i \cup o_1) + \epsilon + \hat{\rho}(g_{i+1} \mid G_i)\, c(\mathrm{OPT} \setminus (G_i \cup o_1)) + 2\epsilon\, |\mathrm{OPT} \setminus (G_i \cup o_1)| \\
\leq \hat{f}(G_i \cup o_1) + \epsilon + \max \{0, \hat{\rho}(g_{i+1} \mid G_i)\}\, c(\mathrm{OPT} \backslash (G_i \cup o_1)) + 2\epsilon\, |\mathrm{OPT} \backslash (G_i \cup o_1)| \\
\leq \hat{f}(G_i \cup o_1) + \epsilon + \max \{0, \hat{\rho}(g_{i+1} \mid G_i)\} (B - c(o_1)) + 2\epsilon\, |\mathrm{OPT} \backslash (G_i \cup o_1)| \\
\leq \hat{f}(G_i \cup o_1) + \max \{0, \hat{\rho}(g_{i+1} \mid G_i)\} (B - c(o_1)) + (2\tilde{K} - 1)\epsilon,
\end{array}
$$

where the second-to-last inequality uses $c(\mathrm{OPT} \backslash (G_i \cup o_1)) \leq c(\mathrm{OPT}) - c(o_1) \leq B - c(o_1)$, and the last uses $|\mathrm{OPT} \backslash (G_i \cup o_1)| \leq \tilde{K} - 1$ (since $o_1 \in \mathrm{OPT}$ and $|\mathrm{OPT}| \leq \tilde{K}$).

Rearranging terms gives the desired result.

Now we are ready to prove Proposition 6.4 (robustness of the GREEDY+MAX algorithm). Applying Lemma 6.7 (the GREEDY+MAX inequality) with $i = \ell$, and recalling that $\ell$ is chosen as the index of the last greedy set such that $c(G_\ell) \leq B - c(o_1) < c(G_{\ell+1})$,

$$
\hat{f}\left( G_\ell \cup o_1 \right) + \max \left\{ 0, \hat{\rho}\left( g_{\ell+1} \mid G_\ell \right) \right\} (B - c(o_1)) \geq f(\mathrm{OPT}) - (2\tilde{K} - 1)\epsilon. \tag{40}
$$

From (40), at least one of the two terms on the left-hand side must be large. To minimize the worst-case additive error across the cases, we split according to whether $\hat{f}(G_\ell \cup o_1) \geq \frac{1}{2} f(\mathrm{OPT}) - (\tilde{K} - \frac{1}{2} + \gamma)\epsilon$ or $\max \{0, \hat{\rho}(g_{\ell+1}|G_\ell)\}(B - c(o_1)) \geq \frac{1}{2} f(\mathrm{OPT}) - (\tilde{K} - \frac{1}{2} - \gamma)\epsilon$; by (40), at least one of the two holds. Here $\gamma$ will be selected later to minimize the additive error coefficient $\delta$.

Case 1: If $\hat{f}(G_\ell \cup o_1) \geq \frac{1}{2} f(\mathrm{OPT}) - (\tilde{K} - \frac{1}{2} + \gamma)\epsilon$, recall that $a_\ell$ is the element selected to augment the greedy solution $G_\ell$, $a_\ell = \arg \max_{e \in \Omega \setminus G_\ell} \hat{f}(e|G_\ell)$; then

$$
\begin{array}{l}
\hat{f}\left( G_\ell \cup a_\ell \right) \geq \hat{f}\left( G_\ell \cup o_1 \right) \\
\geq \frac{1}{2} f(\mathrm{OPT}) - \left( \tilde{K} - \frac{1}{2} + \gamma \right) \epsilon.
\tag{41} \\
\end{array}
$$

The set $S$ that the algorithm selects in the end is the set with the highest surrogate value $\hat{f}$ among all those evaluated (both the sets in the greedy process and their augmentations). Moreover, its surrogate value $\hat{f}(S)$ is at most $\epsilon$ above $f(S)$. Thus

$$
\begin{array}{l}
f(S) \geq \hat{f}(S) - \epsilon \\
\geq \hat{f}(G_\ell \cup a_\ell) - \epsilon \\
\geq \frac{1}{2} f(\mathrm{OPT}) - \left( \tilde{K} + \frac{1}{2} + \gamma \right) \epsilon. \quad \text{(using (41))}
\end{array}
$$

Case 2(a): If $\max \{0, \hat{\rho}(g_{\ell+1}|G_\ell)\}(B - c(o_1)) \geq \frac{1}{2} f(\mathrm{OPT}) - (\tilde{K} - \frac{1}{2} - \gamma)\epsilon$ and $\hat{\rho}(g_{\ell+1}|G_\ell) > 0$, rearranging we have

$$
\hat{\rho}\left( g_{\ell+1} \mid G_\ell \right) \geq \frac{f(\mathrm{OPT})}{2\left( B - c(o_1) \right)} - \frac{\left( \tilde{K} - \frac{1}{2} - \gamma \right) \epsilon}{B - c(o_1)}.
\tag{42}
$$

Then,

$$
\begin{array}{l}
\hat{f}(G_\ell) = \hat{f}(G_\ell) - \hat{f}(G_{\ell-1}) + \hat{f}(G_{\ell-1}) + \dots - \hat{f}(G_1) + \hat{f}(G_1) - \hat{f}(G_0) \quad (\text{telescoping; } G_0 = \emptyset, \hat{f}(G_0) := 0) \\
= \sum_{j=0}^{\ell-1} \hat{f}\left( g_{j+1} \mid G_j \right) \quad (\text{definition of } \hat{f}(\cdot|\cdot)) \\
= \sum_{j=0}^{\ell-1} \hat{\rho}(g_{j+1} \mid G_j)\, c(g_{j+1}) \quad (\text{definition of } \hat{\rho}(\cdot|\cdot)) \\
\geq \sum_{j=0}^{\ell-1} \hat{\rho}\left( g_{\ell+1} \mid G_j \right) c\left( g_{j+1} \right) \quad (\text{greedy choice of } g_{j+1}) \\
\geq \sum_{j=0}^{\ell-1} \left( \rho\left( g_{\ell+1} \mid G_j \right) - \frac{2\epsilon}{c\left( g_{\ell+1} \right)} \right) c\left( g_{j+1} \right) \quad (\text{using (37)}) \\
\geq \sum_{j=0}^{\ell-1} \left( \rho\left( g_{\ell+1} \mid G_\ell \right) - \frac{2\epsilon}{c\left( g_{\ell+1} \right)} \right) c\left( g_{j+1} \right) \quad (\text{submodularity of } f) \\
= \left( \rho\left( g_{\ell+1} \mid G_\ell \right) - \frac{2\epsilon}{c\left( g_{\ell+1} \right)} \right) c\left( G_\ell \right) \quad (\text{simplifying}) \\
\geq \left( \hat{\rho}(g_{\ell+1} \mid G_\ell) - \frac{4\epsilon}{c(g_{\ell+1})} \right) c(G_\ell) \quad (\text{using (37) again}) \\
\geq \hat{\rho}\left( g_{\ell+1} \mid G_\ell \right) c\left( G_\ell \right) - 4\beta\epsilon.
\tag{43} \\
\end{array}
$$

Recalling that $\ell$ is chosen as the index of the last greedy set that leaves a remaining budget at least as large as the cost of the heaviest element in OPT, $c(G_\ell) \leq B - c(o_1) < c(G_{\ell+1})$,

$$
\begin{array}{l}
\hat{f}\left( G_{\ell+1} \right) = \hat{f}\left( G_\ell \cup g_{\ell+1} \right) \\
= \hat{f}(G_\ell) + c(g_{\ell+1})\, \hat{\rho}(g_{\ell+1} \mid G_\ell) \\
\geq \left( \hat{\rho}\left( g_{\ell+1} \mid G_\ell \right) c\left( G_\ell \right) - 4\beta\epsilon \right) + c\left( g_{\ell+1} \right) \hat{\rho}\left( g_{\ell+1} \mid G_\ell \right) \quad (\text{from (43)}) \\
= \hat{\rho}\left( g_{\ell+1} \mid G_\ell \right) c\left( G_{\ell+1} \right) - 4\beta\epsilon \quad (\text{simplifying}) \\
\geq \frac{\frac{1}{2} f(\mathrm{OPT}) - \left( \tilde{K} - \frac{1}{2} - \gamma \right) \epsilon}{B - c\left( o_1 \right)}\, c\left( G_{\ell+1} \right) - 4\beta\epsilon \quad (\text{using (42), the Case 2 condition}) \\
\geq \frac{1}{2} f(\mathrm{OPT}) - \left( \tilde{K} - \frac{1}{2} - \gamma \right) \epsilon - 4\beta\epsilon \quad (\ell \text{ chosen so that } c(G_{\ell+1}) > B - c(o_1)) \\
= \frac{1}{2} f(\mathrm{OPT}) - \left( \tilde{K} - \frac{1}{2} - \gamma + 4\beta \right) \epsilon. \tag{44}
\end{array}
$$

The set $S$ that the algorithm selects in the end is the set with the highest surrogate value $\hat{f}$ among all those evaluated (both the sets in the greedy process and the augmented sets). Thus $\hat{f}(S)$ is at most $\epsilon$ above $f(S)$, and

$$
\begin{array}{l}
f(S) \geq \hat{f}(S) - \epsilon \\
\geq \hat{f}(G_{\ell+1}) - \epsilon \\
\geq \frac{1}{2} f(\mathrm{OPT}) - \left( \tilde{K} + \frac{1}{2} - \gamma + 4\beta \right) \epsilon.
\tag{using (44)} \\
\end{array}
$$

Case 2(b): If $\max \{0, \hat{\rho}(g_{\ell+1}|G_\ell)\} (B - c(o_1)) \geq \frac{1}{2} f(\mathrm{OPT}) - (\tilde{K} - \frac{1}{2} - \gamma)\epsilon$ and $\hat{\rho}(g_{\ell+1}|G_\ell) \leq 0$, then the set $S$ that the algorithm selects at the end satisfies

$$
\begin{array}{l}
f(S) \geq 0 \\
\geq \frac{1}{2} f(\mathrm{OPT}) - \left( \tilde{K} - \frac{1}{2} - \gamma \right) \epsilon \quad (\text{Case 2(b) condition}) \\
\geq \frac{1}{2} f(\mathrm{OPT}) - \left( \tilde{K} - \frac{1}{2} - \gamma + 4\beta \right) \epsilon.
\end{array}
$$

Thus, combining Cases 1 and 2 and selecting $\gamma = 2\beta$, the additive error of the $\frac{1}{2}$-approximation achieved by GREEDY+MAX is at most $(\frac{1}{2} + \tilde{K} + 2\beta)\epsilon$, which concludes the proof.

# C.4. Proof for Robustness of GREEDY+

In this section, we prove Proposition 6.5 in Section 6 of the main paper. The following statements, Lemmas C.1, C.2, and C.4, and their proofs are adapted from the proof of the $\frac{1}{2} (1 - \frac{1}{e})$ approximation ratio in the offline setting (Khuller et al., 1999) using a value oracle. Krause and Guestrin (2005) adapted the proof of Khuller et al. (1999) to an offline setting where the greedy process relies on an exact oracle to evaluate individual element values and to compare the best individual element to the set output by the greedy process, but uses an inexact value oracle (within $\epsilon$ of the correct value) to evaluate marginal densities.
The main differences arise from the following: (i) the algorithms of Khuller et al. (1999) and Krause and Guestrin (2005) evaluate densities before checking for feasibility, $^{2}$ leading to different definitions of the augmented greedy sequence and requiring more care to show the analogous properties; (ii) exact value oracles for the best individual elements and for selecting OPT are used in Khuller et al. (1999) and Krause and Guestrin (2005), simplifying the work needed to conclude the final bound for the approximation ratio $\alpha = \frac{1}{2} \left( 1 - \frac{1}{e} \right)$ and leading to a different $\delta$.

Recall that Proposition 6.5 in Section 6 of the main paper states that GREEDY+ is a $(\frac{1}{2}(1 - \frac{1}{e}), 2 + \tilde{K} + \beta)$-robust approximation algorithm for the submodular maximization problem under a knapsack constraint.

We define $G_i$ and $g_i$ as in the previous section. Recall that the greedy process (using the surrogate $\hat{f}$) produces a nested sequence of subsets $\varnothing = G_0 \subset G_1 \subset \dots \subset G_L$, where $L$ denotes the cardinality of the final output of the greedy process. For the proof, we describe the greedy process as running for $L + 1$ iterations, though on the final iteration no elements are added.
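As a concrete reference for the greedy process analyzed in this section, here is a minimal Python sketch of GREEDY+ with an exact value oracle (the noise-free offline setting); the names `items`, `f`, `c`, and `B` are our own illustrative choices, and costs are assumed strictly positive.

```python
# Hedged sketch of GREEDY+ (Khuller et al., 1999): the better of the density
# greedy's output G_L and the best single feasible element a*. Here f is a
# value oracle on frozensets and c maps elements to positive costs.
def greedy_plus(items, f, c, B):
    G, cost = frozenset(), 0.0
    while True:
        feasible = [e for e in items if e not in G and cost + c[e] <= B]
        if not feasible:
            break  # G_L reached: no remaining element fits the budget
        # Pick the feasible element of highest marginal density under f.
        g = max(feasible, key=lambda e: (f(G | {e}) - f(G)) / c[e])
        G, cost = G | {g}, cost + c[g]
    # Best individual feasible element a* (assumes at least one item fits).
    a_star = max((e for e in items if c[e] <= B),
                 key=lambda e: f(frozenset({e})))
    best_single = frozenset({a_star})
    return G if f(G) >= f(best_single) else best_single


# Modular (hence submodular) example where plain greedy alone is poor:
vals = {'x': 2.0, 'y': 10.0}
costs = {'x': 1.0, 'y': 10.0}
def f(S):
    return sum(vals[e] for e in S)
solution = greedy_plus(['x', 'y'], f, costs, 10.0)
```

On this instance, plain greedy picks `x` first (density 2 vs. 1) and then cannot afford `y`, ending with value 2; the singleton comparison recovers `{'y'}` with value 10, illustrating why the $a^*$ fallback is needed for a bounded approximation ratio.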
For any action $G_{i-1} \cup a$ evaluated in iteration $i$ of the greedy process, its marginal density (up to the $2\epsilon$ surrogate error) is upper bounded by that of the element chosen by the greedy rule under the surrogate function $\hat{f}$,

$$
\begin{array}{l}
\frac{f(G_{i-1} \cup a) - f(G_{i-1}) - 2\epsilon}{c(a)} \leq \frac{\hat{f}(G_{i-1} \cup a) - \hat{f}(G_{i-1})}{c(a)} \\
\leq \frac{\hat{f}(G_{i-1} \cup g_i) - \hat{f}(G_{i-1})}{c(g_i)} \quad (g_i \text{ selected by the greedy rule based on } \hat{f}) \\
\leq \frac{f(G_{i-1} \cup g_i) - f(G_{i-1}) + 2\epsilon}{c(g_i)} \\
= \frac{f\left( G_i \right) - f\left( G_{i-1} \right) + 2\epsilon}{c\left( g_i \right)}, \tag{45}
\end{array}
$$

where (45) just uses the definition $G_i \gets G_{i-1} \cup g_i$. We will use (45) to lower bound the true marginal gains (i.e., in terms of $f$) achieved in each iteration of the greedy process.

Let $\ell \in \{1, \dots, L+1\}$ denote the first iteration at which there was an element $a' \in \Omega \backslash G_{\ell-1}$ whose cost exceeds the remaining budget ($c(a') + c(G_{\ell-1}) > B$), so that the subset $G_{\ell-1} \cup a'$ was not evaluated, yet whose marginal density was higher than that of the chosen element $g_\ell$ up to $\pm 2\epsilon$ normalized by the cost; specifically, for $\ell \leq L$,

$$
\frac{f\left( G_{\ell-1} \cup a' \right) - f\left( G_{\ell-1} \right) - 2\epsilon}{c\left( a' \right)} > \frac{f\left( G_{\ell-1} \cup g_\ell \right) - f\left( G_{\ell-1} \right) + 2\epsilon}{c\left( g_\ell \right)}.
\tag{46}
$$

If there is no such iteration $\ell < L + 1$, then for $\ell = L + 1$ we take the element $a'$ maximizing the left-hand side of (46),

$$
a' = \underset{a \in \Omega \backslash G_{\ell-1}}{\arg \max} \frac{f\left( G_{\ell-1} \cup a \right) - f\left( G_{\ell-1} \right) - 2\epsilon}{c(a)}. \tag{47}
$$

Likewise, if more than one element satisfies (46) at the (earliest) iteration $\ell$, we also take the maximizer (47).

We define an "augmented" greedy sequence of length $\ell$ which matches the greedy sequence up to the set of cardinality $\ell - 1$, after which the element $a'$ is selected despite violating the budget,

$$
\left\{ \widetilde{G}_0 = G_0 = \emptyset,\ \widetilde{G}_1 = G_1,\ \dots,\ \widetilde{G}_{\ell-1} = G_{\ell-1},\ \widetilde{G}_\ell = G_{\ell-1} \cup \left\{ a' \right\} \right\} \tag{48}
$$

and correspondingly enumerate the elements of $\widetilde{G}_\ell$ in the order they were selected,

$$
\{ \widetilde{g}_1 = g_1,\ \dots,\ \widetilde{g}_{\ell-1} = g_{\ell-1},\ \widetilde{g}_\ell = a' \}. \tag{49}
$$

We first prove the following lemma, bounding the marginal gains of the augmented greedy sequence $\{\widetilde{G}_0, \dots, \widetilde{G}_\ell\}$.

Lemma C.1. For all $i \in \{1, 2, \dots, \ell\}$, the following inequality holds:

$$
f(\widetilde{G}_i) - f(\widetilde{G}_{i-1}) \geq \frac{c(\widetilde{g}_i)}{B} \left[ f(\mathrm{OPT}) - f(\widetilde{G}_{i-1}) \right] - 2 \left( 1 + \frac{\tilde{K} c(\widetilde{g}_i)}{B} \right) \epsilon.
$$

Proof. Fix any $i \in \{1, 2, \dots, \ell\}$. Let $\{v_1, v_2, \dots, v_k\} = \mathrm{OPT} \setminus \widetilde{G}_{i-1}$. Note that by construction (48), we have $\widetilde{G}_{i-1} = G_{i-1}$.
The difference $f(\mathrm{OPT}) - f(\widetilde{G}_{i-1})$ can be upper bounded by the marginal gains of the elements in the set difference,

$$
\begin{array}{l}
f(\mathrm{OPT}) - f\left( \widetilde{G}_{i-1} \right) \leq \sum_{j=1}^{k} \left[ f\left( \widetilde{G}_{i-1} \cup v_j \right) - f\left( \widetilde{G}_{i-1} \right) \right] \quad (\text{Fact 1}) \\
= \sum_{j=1}^{k} \left[ f\left( \widetilde{G}_{i-1} \cup v_j \right) - f\left( \widetilde{G}_{i-1} \right) - 2\epsilon + 2\epsilon \right] \\
= \sum_{j=1}^{k} c(v_j)\, \frac{f(\widetilde{G}_{i-1} \cup v_j) - f(\widetilde{G}_{i-1}) - 2\epsilon}{c(v_j)} + 2k\epsilon \\
\leq \sum_{j=1}^{k} c\left( v_j \right) \frac{f\left( \widetilde{G}_{i-1} \cup \widetilde{g}_i \right) - f\left( \widetilde{G}_{i-1} \right) + 2\epsilon}{c\left( \widetilde{g}_i \right)} + 2k\epsilon \quad (50) \\
= \sum_{j=1}^{k} c\left( v_j \right) \frac{f\left( \widetilde{G}_i \right) - f\left( \widetilde{G}_{i-1} \right) + 2\epsilon}{c\left( \widetilde{g}_i \right)} + 2k\epsilon, \quad (51)
\end{array}
$$

where (50) holds by the following case analysis, depending on whether or not $\hat{f}(G_{i-1} \cup v_j)$ was evaluated during iteration $i$.

- Case 1 ($\hat{f}(G_{i-1} \cup v_j)$ was evaluated and $i < \ell$): At iteration $i$ (necessarily $i \leq L$, since no subsets are evaluated in iteration $L+1$) with current greedy set $G_{i-1}$, adding the element $v_j$ to the current greedy set was feasible, $c(v_j) \leq B - c(G_{i-1})$. Then GREEDY+ would have evaluated $\hat{f}(G_{i-1} \cup v_j)$. Since $v_j$ was not selected, the chosen element $g_i = G_i \backslash G_{i-1}$ must have had a surrogate marginal density at least as high, $\hat{\rho}(g_i|G_{i-1}) \geq \hat{\rho}(v_j|G_{i-1})$; so for $i < \ell$, for which $\widetilde{g}_i = g_i$ by construction (49), (45) implies (50).
- Case 2 ($\hat{f}(G_{i-1} \cup v_j)$ was evaluated and $i = \ell$): By the reasoning in the previous case, for the item $g_\ell$ chosen at iteration $\ell$ by the greedy process (due to feasibility and having the highest surrogate density), we still have the bound (45) on true values, which coupled with our specific construction (46) of $\widetilde{g}_\ell$ means

$$
\begin{array}{l}
\frac{f\left( \widetilde{G}_{\ell-1} \cup v_j \right) - f\left( \widetilde{G}_{\ell-1} \right) - 2\epsilon}{c\left( v_j \right)} \leq \frac{f\left( \widetilde{G}_{\ell-1} \cup g_\ell \right) - f\left( \widetilde{G}_{\ell-1} \right) + 2\epsilon}{c\left( g_\ell \right)} \quad (\text{by (45)}) \\
< \frac{f\left( \widetilde{G}_{\ell-1} \cup \widetilde{g}_\ell \right) - f\left( \widetilde{G}_{\ell-1} \right) - 2\epsilon}{c\left( \widetilde{g}_\ell \right)} \quad (\text{by (46)}) \\
< \frac{f(\widetilde{G}_{\ell-1} \cup \widetilde{g}_\ell) - f(\widetilde{G}_{\ell-1}) + 2\epsilon}{c(\widetilde{g}_\ell)}.
\end{array}
$$

- Case 3 ($\hat{f}(G_{i-1} \cup v_j)$ was not evaluated and $i < \ell$): At iteration $i < \ell \leq L+1$ with current greedy set $G_{i-1}$, adding the element $v_j$ to the current greedy set was not feasible, $c(v_j) > B - c(G_{i-1})$. By construction of the augmented greedy sequence, only at iteration $\ell$ was there an infeasible element whose marginal density satisfied the inequality (46); thus for iterations $i < \ell$ no infeasible element satisfies (46), and since $G_{i-1} = \widetilde{G}_{i-1}$ and $G_i = \widetilde{G}_i$, (50) holds.
- Case 4 ($\hat{f}(G_{i-1} \cup v_j)$ was not evaluated and $i = \ell$): For iteration $i = \ell$, with current greedy set $G_{i-1}$, the construction of the augmented greedy sequence implies (50).
Namely, with $i = \ell$ , + +$$ +\begin{array}{l} \frac {f \left(\widetilde {G} _ {\ell - 1} \cup v _ {j}\right) - f \left(\widetilde {G} _ {\ell - 1}\right) - 2 \epsilon}{c \left(v _ {j}\right)} < \frac {f \left(\widetilde {G} _ {\ell - 1} \cup \widetilde {g} _ {r}\right) - f \left(\widetilde {G} _ {\ell - 1}\right) - 2 \epsilon}{c \left(\widetilde {g} _ {r}\right)} \tag {by(47)} \\ < \frac {f (\widetilde {G} _ {\ell - 1} \cup \widetilde {g} _ {r}) - f (\widetilde {G} _ {\ell - 1}) + 2 \epsilon}{c (\widetilde {g} _ {r})}. \\ \end{array} +$$ + +menaing (50) holds. + +We now continue lower bounding $f(\mathrm{OPT}) - f(\widetilde{G}_{i - 1})$ + +$$ +\begin{array}{l} f (\mathrm {O P T}) - f (\widetilde {G} _ {i - 1}) \leq \left[ \sum_ {j = 1} ^ {k} c (v _ {j}) \frac {f (\widetilde {G} _ {i}) - f (\widetilde {G} _ {i - 1}) + 2 \epsilon}{c (\widetilde {g} _ {i})} \right] + 2 k \epsilon \tag {copying(51)} \\ = \left[ \sum_ {j = 1} ^ {k} c (v _ {j}) \right] \frac {f (\widetilde {G} _ {i}) - f (\widetilde {G} _ {i - 1}) + 2 \epsilon}{c (\widetilde {g} _ {i})} + 2 k \epsilon \\ \leq B \frac {f (\widetilde {G} _ {i}) - f (\widetilde {G} _ {i - 1}) + 2 \epsilon}{c (\widetilde {g} _ {i})} + 2 k \epsilon \quad \text {(O P T i s f e a s i b l e , s o} \sum_ {j = 1} ^ {k} c (v _ {j}) \leq B) \\ \leq \frac {B}{c (\widetilde {g} _ {i})} \left[ f (\widetilde {G} _ {i}) - f (\widetilde {G} _ {i - 1}) \right] + 2 \left[ \frac {B}{c (\widetilde {g} _ {i})} + \tilde {K} \right] \epsilon . \quad \text {(r e a r r a n g i n g ;} k \leq \tilde {K}) \\ \end{array} +$$ + +Multiplying both sides by $\frac{c(\widetilde{g}_i)}{B}$ and rearranging finishes the proof. + +We unravel the recurrence in Lemma C.1 to lower bound $f(\widetilde{G}_i)$ . + +Lemma C.2. For all $i\in \{1,2,\dots ,\ell \}$ + +$$ +f (\widetilde {G} _ {i}) \geq \left[ 1 - \prod_ {j = 1} ^ {i} (1 - \frac {c (\widetilde {g} _ {j})}{B}) \right] f (\mathrm {O P T}) - 2 (\beta + \tilde {K}) \epsilon . +$$ + +Remark C.3. 
The steps to unravel the recurrence and obtain the first term (the coefficient of $f(\mathrm{OPT})$) are the same as in the proof of the analogous result in the offline setting (Khuller et al., 1999). The second term (with $\epsilon$) comes from working with marginal densities of a surrogate function $\hat{f}$. The basic steps for handling that second term are the same as in Krause and Guestrin (2005), though we use a looser bound $\beta$; we believe there may be a mistake in the induction step of Krause and Guestrin (2005) (with $c(X_i)$ treated as fixed across different $i$ in the proof), though those terms were loosely bounded by $\beta$ later on.

Proof. The proof follows by induction. We first show the base case $i = 1$ using Lemma C.1:

$$
\begin{array}{l}
f(\widetilde{G}_1) = f(\widetilde{G}_1) - f(\widetilde{G}_0) \quad (f \text{ is normalized; } \widetilde{G}_0 = \emptyset) \\
\geq \frac{c(\widetilde{g}_1)}{B} \left[ f(\mathrm{OPT}) - f(\widetilde{G}_0) \right] - 2 \left( 1 + \frac{\tilde{K} c(\widetilde{g}_1)}{B} \right) \epsilon \quad (\text{using Lemma C.1}) \\
= \left[ 1 - \left( 1 - \frac{c\left( \widetilde{g}_1 \right)}{B} \right) \right] f(\mathrm{OPT}) - 2 \left( 1 + \frac{\tilde{K} c\left( \widetilde{g}_1 \right)}{B} \right) \epsilon, \tag{52}
\end{array}
$$

where (52) follows from rearranging.
For the second term in (52), using that

$$
\begin{array}{l}
1 + \frac{\tilde{K} c(\widetilde{g}_1)}{B} \leq \frac{B}{c(\widetilde{g}_1)} \left( 1 + \frac{\tilde{K} c(\widetilde{g}_1)}{B} \right) \quad \left( \text{since } \frac{B}{c(\widetilde{g}_1)} \geq 1 \right) \\
= \frac{B}{c(\widetilde{g}_1)} + \tilde{K} \\
\leq \frac{B}{c_{\min}} + \tilde{K} \\
= \beta + \tilde{K}, \tag{53}
\end{array}
$$

then

$$
\begin{array}{l}
f\left( \widetilde{G}_1 \right) \geq \left[ 1 - \left( 1 - \frac{c\left( \widetilde{g}_1 \right)}{B} \right) \right] f(\mathrm{OPT}) - 2 \left( 1 + \frac{\tilde{K} c\left( \widetilde{g}_1 \right)}{B} \right) \epsilon \quad (\text{copying (52)}) \\
\geq \left[ 1 - \left( 1 - \frac{c\left( \widetilde{g}_1 \right)}{B} \right) \right] f(\mathrm{OPT}) - 2(\beta + \tilde{K})\epsilon. \quad (\text{using (53)})
\end{array}
$$

This completes the base case $i = 1$.

We next consider $i > 1$. Unraveling the recurrence shown in Lemma C.1,

$$
\begin{array}{l}
f(\widetilde{G}_i) = f(\widetilde{G}_i) - f(\widetilde{G}_{i-1}) + f(\widetilde{G}_{i-1}) \\
\geq \left[ \frac{c(\widetilde{g}_i)}{B} \left( f(\mathrm{OPT}) - f(\widetilde{G}_{i-1}) \right) - 2 \left( 1 + \frac{\tilde{K} c(\widetilde{g}_i)}{B} \right) \epsilon \right] + f(\widetilde{G}_{i-1}) \quad (\text{using Lemma C.1}) \\
= \left[ \frac{c(\widetilde{g}_i)}{B} \right] f(\mathrm{OPT}) - 2 \left( 1 + \frac{\tilde{K} c(\widetilde{g}_i)}{B} \right) \epsilon + \left[ 1 - \frac{c(\widetilde{g}_i)}{B} \right] f(\widetilde{G}_{i-1}) \quad (\text{rearranging}) \\
= \left[ 1 - \left( 1 - \frac{c(\widetilde{g}_i)}{B} \right) \right] f(\mathrm{OPT}) - 2 \left( 1 + \frac{\tilde{K} c(\widetilde{g}_i)}{B} \right) \epsilon + \left[ 1 - \frac{c(\widetilde{g}_i)}{B} \right] f(\widetilde{G}_{i-1}) \quad (\text{rearranging}) \\
\geq \left[ 1 - \left( 1 - \frac{c(\widetilde{g}_i)}{B} \right) \right] f(\mathrm{OPT}) - 2 \left( 1 + \frac{\tilde{K} c(\widetilde{g}_i)}{B} \right) \epsilon + \left( 1 - \frac{c(\widetilde{g}_i)}{B} \right) \left[ \left( 1 - \prod_{j=1}^{i-1} \left( 1 - \frac{c(\widetilde{g}_j)}{B} \right) \right) f(\mathrm{OPT}) - 2(\beta + \tilde{K})\epsilon \right] \quad (\text{induction step}) \\
= \left[ 1 - \left( 1 - \frac{c(\widetilde{g}_i)}{B} \right) + \left( 1 - \frac{c(\widetilde{g}_i)}{B} \right) \left( 1 - \prod_{j=1}^{i-1} \left( 1 - \frac{c(\widetilde{g}_j)}{B} \right) \right) \right] f(\mathrm{OPT}) - 2 \left( 1 + \frac{\tilde{K} c(\widetilde{g}_i)}{B} + \left( 1 - \frac{c(\widetilde{g}_i)}{B} \right)(\beta + \tilde{K}) \right) \epsilon \quad (\text{rearranging}) \\
= \left[ 1 - \prod_{j=1}^{i} \left( 1 - \frac{c\left( \widetilde{g}_j \right)}{B} \right) \right] f(\mathrm{OPT}) - 2 \left( 1 + \beta - \beta \frac{c\left( \widetilde{g}_i \right)}{B} + \tilde{K} \right) \epsilon. \tag{54}
\end{array}
$$

For the second term in (54), using that

$$
\begin{array}{l}
\beta \frac{c\left( \widetilde{g}_i \right)}{B} = \frac{B}{c_{\min}} \frac{c\left( \widetilde{g}_i \right)}{B} \quad (\text{def. of } \beta) \\
= \frac{c(\widetilde{g}_i)}{c_{\min}} \\
\geq 1, \tag{55}
\end{array}
$$

then

$$
\begin{array}{l}
-2 \left( 1 + \beta - \beta \frac{c(\widetilde{g}_i)}{B} + \tilde{K} \right) \epsilon = -2(\beta + \tilde{K})\epsilon + 2 \left( \beta \frac{c(\widetilde{g}_i)}{B} - 1 \right) \epsilon \quad (\text{rearranging}) \\
\geq -2(\beta + \tilde{K})\epsilon. \quad (\text{using (55)})
\end{array}
$$

Applying this to (54) completes the proof.

The inequality in Lemma C.2 for the augmented greedy set of cardinality $\ell$ can be further simplified. We will use the following observations.
Lemma C.4. The following inequality holds:

$$
f(\widetilde{G}_{\ell}) \geq \left(1 - \frac{1}{e}\right) f(\mathrm{OPT}) - 2 (\beta + \tilde{K}) \epsilon.
$$

Proof. Applying $i = \ell$ to Lemma C.2 and bounding the coefficient of $f(\mathrm{OPT})$,

$$
\begin{array}{l} f(\widetilde{G}_{\ell}) \geq \left[ 1 - \prod_{j=1}^{\ell} \left(1 - \frac{c(\widetilde{g}_j)}{B}\right) \right] f(\mathrm{OPT}) - 2 (\beta + \tilde{K}) \epsilon \\ \geq \left[ 1 - \prod_{j=1}^{\ell} \left(1 - \frac{c(\widetilde{g}_j)}{c(\widetilde{G}_{\ell})}\right) \right] f(\mathrm{OPT}) - 2 (\beta + \tilde{K}) \epsilon \quad (\text{by construction, } c(\widetilde{G}_{\ell}) > B) \\ \geq \left[ 1 - \prod_{j=1}^{\ell} \left(1 - \frac{c(\widetilde{G}_{\ell}) / \ell}{c(\widetilde{G}_{\ell})}\right) \right] f(\mathrm{OPT}) - 2 (\beta + \tilde{K}) \epsilon \quad \text{(using Fact 2)} \\ = \left[ 1 - \left(1 - \frac{1}{\ell}\right)^{\ell} \right] f(\mathrm{OPT}) - 2 (\beta + \tilde{K}) \epsilon \quad \text{(simplifying)} \\ \geq \left(1 - \frac{1}{e}\right) f(\mathrm{OPT}) - 2 (\beta + \tilde{K}) \epsilon. \quad \text{(using Fact 3)} \\ \end{array}
$$

Using the aforementioned lemmas, we are now ready to complete the proof of Theorem 3 (robustness of the GREEDY+ algorithm). We will bound the value of the set $G_L$ using the results on the augmented greedy set (48) of cardinality $\ell$, and in turn bound the value of the set $S$, the final output of GREEDY+.

Recall that GREEDY+ chooses the set $S$ to be either the best individual element (based on $\hat{f}$), $a^* \gets \arg\max_{e \in \Omega} \hat{f}(e)$, or the output of the greedy process, $G_L$. Let $a^{\mathrm{OPT}} = \arg\max_{e \in \Omega} f(e)$ denote the element with the highest value under $f$.
Then

$$
\begin{array}{l} f(a^*) \geq \hat{f}(a^*) - \epsilon \\ \geq \hat{f}(a^{\mathrm{OPT}}) - \epsilon \quad (\text{by definition of } a^*) \\ \geq f(a^{\mathrm{OPT}}) - 2\epsilon. \tag{56} \\ \end{array}
$$

By construction (48), $\widetilde{G}_{\ell}$ includes one more element $a'$ than $\widetilde{G}_{\ell-1}$ (and $a'$ maximizes (47)). By submodularity, the marginal gain of $a'$ is bounded by $f(a')$, and in turn by the value of the best individual element:

$$
\begin{array}{l} f(\widetilde{G}_{\ell-1}) + f(a^{\mathrm{OPT}}) \geq f(\widetilde{G}_{\ell-1}) + f(a') \quad (\text{by definition of } a^{\mathrm{OPT}}) \\ \geq f(\widetilde{G}_{\ell-1}) + \left[ f(\widetilde{G}_{\ell-1} \cup a') - f(\widetilde{G}_{\ell-1}) \right] \quad (\text{by submodularity}) \\ = f(\widetilde{G}_{\ell-1} \cup a') \\ = f(\widetilde{G}_{\ell}) \quad (\text{by construction (48)}) \\ \geq \left(1 - \frac{1}{e}\right) f(\mathrm{OPT}) - 2 (\beta + \tilde{K}) \epsilon, \tag{57} \\ \end{array}
$$

where (57) follows from Lemma C.4.

Also by construction (48), the greedy and augmented greedy processes match up to and including the set of cardinality $\ell - 1$, so

$$
\begin{array}{l} f(G_L) \geq f(G_{\ell-1}) \quad \text{(monotonicity)} \\ = f(\widetilde{G}_{\ell-1}). \quad \text{(by construction (48))} \\ \end{array}
$$

Thus,

$$
\begin{array}{l} f(G_L) + f(a^{\mathrm{OPT}}) \geq f(\widetilde{G}_{\ell-1}) + f(a^{\mathrm{OPT}}) \\ \geq \left(1 - \frac{1}{e}\right) f(\mathrm{OPT}) - 2 (\beta + \tilde{K}) \epsilon. \quad \text{(using (57))} \\ \end{array}
$$

At least one of $f(G_L)$ and $f(a^{\mathrm{OPT}})$ is at least half of the right-hand side,

$$
\max \left\{ f(G_L), f(a^{\mathrm{OPT}}) \right\} \geq \frac{1}{2} \left(1 - \frac{1}{e}\right) f(\mathrm{OPT}) - (\beta + \tilde{K}) \epsilon. \tag{58}
$$

Thus, for the chosen set $S$,

$$
\begin{array}{l} f(S) \geq \hat{f}(S) - \epsilon \\ = \max \left\{ \hat{f}(G_L), \hat{f}(a^*) \right\} - \epsilon \\ \geq \max \left\{ \hat{f}(G_L), \hat{f}(a^{\mathrm{OPT}}) \right\} - \epsilon \quad (a^* \text{ is the element with the largest } \hat{f} \text{ value}) \\ \geq \max \left\{ f(G_L) - \epsilon, f(a^{\mathrm{OPT}}) - \epsilon \right\} - \epsilon \quad \text{(element-wise dominance)} \\ = \max \left\{ f(G_L), f(a^{\mathrm{OPT}}) \right\} - 2\epsilon \\ \geq \frac{1}{2} \left(1 - \frac{1}{e}\right) f(\mathrm{OPT}) - (\beta + \tilde{K}) \epsilon - 2\epsilon \quad \text{(from (58))} \\ = \frac{1}{2} \left(1 - \frac{1}{e}\right) f(\mathrm{OPT}) - (2 + \beta + \tilde{K}) \epsilon, \\ \end{array}
$$

which completes the proof.

# C.5. Proof for Robustness of PARTIALENUMERATION

Now we analyze the PARTIALENUMERATION algorithm for submodular maximization under a knapsack constraint, proposed in Sviridenko (2004); Khuller et al. (1999). Recall that Proposition 6.3 in Section 6 of the main paper states that PARTIALENUMERATION is a $(1 - \frac{1}{e}, 4 + 2\tilde{K} + 2\beta)$-robust approximation algorithm for submodular maximization under a knapsack constraint.

Proof.
Assume $|\mathrm{OPT}| > 3$; otherwise the algorithm finds a $(1, 2)$-robust approximation, so it is also a $(1 - \frac{1}{e}, 2(\tilde{K} + \beta))$-robust approximation for non-trivial cases where $\tilde{K} \geq 1$ and $\beta \geq 1$. Enumerate the elements of the optimal solution as $\mathrm{OPT} = \{Y_1, \dots, Y_m\}$, in the order they would be selected by the simple greedy algorithm (iteratively selecting the element with the largest marginal gain, not the largest marginal density),

$$
Y_{i+1} = \underset{Y \in \mathrm{OPT}}{\arg\max} \; f\left(\{Y_1, \dots, Y_i, Y\}\right) - f\left(\{Y_1, \dots, Y_i\}\right), \tag{59}
$$

and let $R = \{Y_1, Y_2, Y_3\}$. Consider the iteration in which the algorithm considers $R$. Define the function

$$
f'(A) = f(A \cup R) - f(R). \tag{60}
$$

$f'$ is a non-decreasing submodular set function with $f'(\emptyset) = 0$, and the optimal solution (with budget $B - c(R)$) is $\mathrm{OPT} \setminus R$, since for any set $S$ with cost $c(S) \leq B - c(R)$,

$$
\begin{array}{l} f'(\mathrm{OPT} \setminus R) = f(\mathrm{OPT} \cup R) - f(R) \quad (\text{def. of } f') \\ = f(\mathrm{OPT}) - f(R) \quad (R \subseteq \mathrm{OPT} \text{ by construction}) \\ \geq f(S \cup R) - f(R) \quad (\text{optimality of } \mathrm{OPT}, \text{ since } c(S \cup R) \leq B) \\ = f'(S). \\ \end{array}
$$

Hence we can apply the GREEDY+ algorithm to $f'$ (based on noisy evaluations). Let $g_{\ell}$ be the first element from $\mathrm{OPT} \setminus R$ that could not be added due to the budget constraint, and let $A = \{g_1, \dots, g_{\ell-1}\}$ be the first $\ell - 1$ elements selected by the GREEDY+ algorithm. Let $G = A \cup R$.
Using Lemma C.4, we get

$$
f'(A \cup g_{\ell}) \geq \left(1 - \frac{1}{e}\right) f'(\mathrm{OPT} \setminus R) - 2 (\beta' + \tilde{K}') \epsilon,
$$

where $\beta' = \frac{B - c(R)}{c'_{\min}}$, $\tilde{K}' = \min\{n - 3, \beta'\}$, and $c'_{\min} = \min_{e \in \Omega \setminus R} c(e)$. A simple calculation shows that $\beta' \leq \beta$ and $\tilde{K}' \leq \tilde{K}$. Thus,

$$
f'(A \cup g_{\ell}) \geq \left(1 - \frac{1}{e}\right) f'(\mathrm{OPT} \setminus R) - 2 (\beta + \tilde{K}) \epsilon.
$$

From the definition of $f'$, we have $f(G) = f'(A) + f(R)$. Let $\Delta = f'(A \cup g_{\ell}) - f'(A)$. We have

$$
f'(A) + \Delta \geq \left(1 - \frac{1}{e}\right) f'(\mathrm{OPT} \setminus R) - 2 (\beta + \tilde{K}) \epsilon. \tag{61}
$$

Further observe that the elements in $\mathrm{OPT}$ are ordered such that for all $1 \leq i \leq 3$,

$$
\begin{array}{l} f\left(\{Y_1, \dots, Y_i\}\right) - f\left(\{Y_1, \dots, Y_{i-1}\}\right) \\ \geq f\left(\{Y_1, \dots, Y_{i-1}, g_{\ell}\}\right) - f\left(\{Y_1, \dots, Y_{i-1}\}\right) \quad \text{(ordering rule)} \\ \geq f(R \cup A \cup g_{\ell}) - f(R \cup A) \quad (\{Y_1, \dots, Y_{i-1}\} \subseteq R \text{ when } 1 \leq i \leq 3, \text{ and submodularity}) \\ = f(R \cup A \cup g_{\ell}) - f(R) - (f(R \cup A) - f(R)) \\ = f'(A \cup g_{\ell}) - f'(A) \\ = \Delta. \\ \end{array}
$$

By telescoping sum, $f(R) \geq 3\Delta$.
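Summing this inequality over $i = 1, 2, 3$ and using the non-negativity of $f$ makes the telescoping step explicit:

$$
f(R) \;\geq\; f(R) - f(\emptyset) \;=\; \sum_{i=1}^{3} \left[ f\left(\{Y_1, \dots, Y_i\}\right) - f\left(\{Y_1, \dots, Y_{i-1}\}\right) \right] \;\geq\; 3\Delta.
$$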
Now we get

$$
\begin{array}{l} f(G) = f(R) + f'(A) \\ \geq f(R) + \left(1 - \frac{1}{e}\right) f'(\mathrm{OPT} \setminus R) - 2 (\beta + \tilde{K}) \epsilon - \Delta \\ \geq f(R) + \left(1 - \frac{1}{e}\right) f'(\mathrm{OPT} \setminus R) - 2 (\beta + \tilde{K}) \epsilon - f(R)/3 \\ = \left(1 - \frac{1}{3}\right) f(R) + \left(1 - \frac{1}{e}\right) f'(\mathrm{OPT} \setminus R) - 2 (\beta + \tilde{K}) \epsilon \\ \geq \left(1 - \frac{1}{e}\right) \left[ f'(\mathrm{OPT} \setminus R) + f(R) \right] - 2 (\beta + \tilde{K}) \epsilon \quad (\text{since } e \leq 3) \\ = \left(1 - \frac{1}{e}\right) f(\mathrm{OPT}) - 2 (\beta + \tilde{K}) \epsilon. \quad (\text{definition of } f') \\ \end{array}
$$

The output of the algorithm is not necessarily $G$, because the values of the evaluated triplets are based on the surrogate function $\hat{f}$. Denote by $\mathcal{O}$ the output of the algorithm and by $G'$ the best evaluated set (with respect to $\hat{f}$) of size $\ell + 2$ (the same size as $G$). We must have $\hat{f}(G') \geq \hat{f}(G)$. Also denote by $G''$ the final set (until violating the budget) obtained by continuing $G'$. We have

$$
\begin{array}{l} f(\mathcal{O}) \geq \hat{f}(\mathcal{O}) - \epsilon \\ \geq \hat{f}(G'') - \epsilon \quad \text{(selection rule of the algorithm)} \\ \geq f(G'') - 2\epsilon \\ \geq f(G') - 2\epsilon \quad (G' \subseteq G'' \text{ and monotonicity of } f) \\ \geq \hat{f}(G') - 3\epsilon \\ \geq \hat{f}(G) - 3\epsilon \\ \geq f(G) - 4\epsilon \\ \geq \left(1 - \frac{1}{e}\right) f(\mathrm{OPT}) - (4 + 2\beta + 2\tilde{K}) \epsilon, \\ \end{array}
$$

finishing the proof.

# D.
Implementation of Algorithm $\mathbf{OG}^{\circ}$

In this section we describe implementation details and parameter selection for the $\mathrm{OG^o}$ algorithm (Streeter and Golovin, 2008). The choice of exploration probability is given by the original paper: $\gamma = n^{1/3} \beta \left(\frac{\log(n)}{T}\right)^{1/3}$, where $\beta = B / c_{\min}$.

Algorithm 2 Online Greedy for Opaque Feedback Model $(\mathrm{OG}^{\circ})$
Input: set of base arms $\Omega$, horizon $T$, cost for each arm $c(a)$, budget $B$
Initialize $n \gets |\Omega|$, $c_{\min} \gets \min_{a \in \Omega} c(a)$, $\beta \gets \frac{B}{c_{\min}}$, $\gamma \gets n^{1/3} \beta \left(\frac{\log(n)}{T}\right)^{1/3}$, $\epsilon \gets \sqrt{\frac{\beta \log(n)}{\gamma T}}$
Initialize $\omega_1 \gets \text{ones}(\beta, n)$
for $t \in [1, \dots, T]$ do
  $S_t \gets \emptyset$, $l \gets \text{zeros}(\beta, n)$ // loss
  Randomly sample a value $\xi \sim \text{Uniform}([0, 1])$
  if $\xi \leq \gamma$ then
    $e \sim \text{Uniform}(\{1, \dots, \beta\})$
    for $i \in [1, \dots, e-1]$ do // For experts before $e$, exploit
      Select an arm $a$ with probability $\frac{\omega_t[i, a]}{\sum_{a'} \omega_t[i, a']}$, re-sampling if $a \in S_t$
      $S_t \gets S_t \cup \{a\}$ with probability $\frac{c_{\min}}{c(a)}$; otherwise leave $S_t$ unchanged
    end for
    $a \sim \text{Uniform}(\{1, \dots, n\} \setminus S_t)$ // For expert $e$, explore
    $S_t \gets S_t \cup \{a\}$
    Play action $S_t$, observe $f_t(S_t)$
    Update $l[i, j] \gets \frac{c_{\min} f_t(S_t)}{c(a)}$ for $i = e$ and all $j \neq a$ // Feed $\frac{c_{\min} f_t(S_t)}{c(a)}$ back to expert $e$, associated with action $a$
    Update $\omega_{t+1}[i, j] \gets \omega_t[i, j] \exp(-\epsilon\, l[i, j])$ for all pairs of $i$ and $j$
  else // Exploitation with probability $1 - \gamma$
    for $i \in [1, \dots, \beta]$ do
      Select an arm $a$ with probability $\frac{\omega_t[i, a]}{\sum_{a'} \omega_t[i, a']}$, re-sampling if $a \in S_t$
      $S_t \gets S_t \cup \{a\}$ with probability $\frac{c_{\min}}{c(a)}$; otherwise leave $S_t$ unchanged
    end for
    Play action $S_t$, observe $f_t(S_t)$
    $\omega_{t+1}[i, j] \gets \omega_t[i, j]$ // Feeding back 0 to all expert-action payoffs: the loss is 0, so there is no update
  end if
end for

Note that in the original paper, $B$ is used instead of $\beta$, because they assume the minimum cost is 1; here we generalize to arbitrary non-negative costs. $\epsilon$ is the learning rate for the Randomized Weighted Majority (WMR) expert algorithm (Arora et al., 2012). It is chosen by setting the derivative of the regret upper bound to zero, which gives $\epsilon = \sqrt{\frac{\log(n)}{T_e}}$, where $T_e$ is the time spent updating expert $e$. Since the algorithm explores with probability $\gamma$ and there are $\beta$ expert algorithms, we have $T_e \approx \frac{\gamma T}{\beta}$. Thus we pick $\epsilon = \sqrt{\frac{\beta \log(n)}{\gamma T}}$. In our experiments, there are many cases where the chosen $\gamma$ is large, even larger than 1, so we cap the exploration probability $\gamma$ at $1/2$ to avoid exploring too much. Note that unlike the hard budget in our setting, $\mathrm{OG^o}$ only requires the budget to be satisfied in expectation, so in general it may choose sets over budget. Algorithm 2 gives the pseudocode with the implementation details of $\mathrm{OG^o}$.

# E. Comments on Lower Bounds of Submodular CMAB

For the setting we explore in this paper, stochastic (or even adversarial) knapsack-constrained combinatorial MAB with submodular expected rewards and only bandit feedback, it remains an open question whether $\tilde{\mathcal{O}}(T^{1/2})$ expected cumulative $\alpha$-regret is possible (ignoring $n$ and $\beta$). Both (Streeter and Golovin, 2008) and (Niazadeh et al., 2021) analyze lower bounds for the adversarial setting.
However, (Streeter and Golovin, 2008) obtain bounds for 1-regret (it is NP-hard in the offline setting to obtain an approximation ratio better than $1 - 1/e$). (Niazadeh et al., 2021) obtain $\tilde{\Omega}(T^{2/3})$ lower bounds for the harder setting where feedback is only available during "exploration" rounds chosen by the agent, who incurs an associated penalty.

# F. Dealing with Small Time Horizons in Experiments

In Section 6, we used $N = \tilde{K} n$ as an upper bound on the number of function evaluations for both C-ETC-K and C-ETC-Y, where $n$ is the number of base arms and $\tilde{K}$ is an upper bound on the cardinality of any feasible set. When the time horizon $T$ is small, it is possible that the exploration phase will not finish, because the formula optimizing $m$ (the number of plays for each action queried by $\mathcal{A}$) uses a loose bound on the exploitation time. When this is the case, we select the largest $m$ (closest to the formula) for which we can guarantee that exploration will finish. Recall that for C-ETC-Y and C-ETC-K, the number of oracle calls can only be upper-bounded in advance.

We first calculate $m^{\dagger}$ using (26):

$$
m^{\dagger} = \left\lceil \frac{\delta^{2/3} T^{2/3} \log(T)^{1/3}}{2 \tilde{K}^{2/3} n^{2/3}} \right\rceil.
$$

Note that a (slightly tighter) upper bound on the number of subsets evaluated during the exploration phase (with $\tilde{K}$ bounding the number of iterations of the greedy process) is

$$
\begin{array}{l} N \leq n + (n - 1) + \dots + (n - \tilde{K} + 1) \\ = \left(n - \frac{\tilde{K}}{2} + \frac{1}{2}\right) \tilde{K}. \\ \end{array}
$$

We compare $\left(n - \frac{\tilde{K}}{2} + \frac{1}{2}\right) \tilde{K} m^{\dagger}$ with $T$:

- Case 1. If $\left(n - \frac{\tilde{K}}{2} + \frac{1}{2}\right) \tilde{K} m^{\dagger} < T$, C-ETC can finish exploring. We select $m = m^{\dagger}$.
- Case 2.
If $\left(n - \frac{\tilde{K}}{2} + \frac{1}{2}\right) \tilde{K} m^{\dagger} \geq T$, it is possible that the algorithm cannot finish exploring. In this case, we find a new $m$ so that exploration is guaranteed to finish. We select the largest $m$ (closest to $m^{\dagger}$) so that the exploration time is upper bounded by $T$,

$$
m = \left\lfloor \frac{T}{\left(n - \frac{\tilde{K}}{2} + \frac{1}{2}\right) \tilde{K}} \right\rfloor.
$$

# G. Basic Facts

Fact 1. For a monotonically non-decreasing submodular set function $f$ defined over subsets of $\Omega$, we have for arbitrary subsets $A, B \subseteq \Omega$

$$
f(B) - f(A) \leq \sum_{j \in B \setminus A} \left[ f(A \cup \{j\}) - f(A) \right].
$$

Fact 2. (Khuller et al., 1999) For $x_1, \dots, x_n \in \mathbb{R}^+$ such that $\sum x_i = A$, the function $1 - \prod_{i=1}^{n} \left(1 - \frac{x_i}{A}\right)$ achieves its minimum at $x_1 = x_2 = \dots = x_n = A / n$.

Fact 3. For $k \geq 1$,

$$
1 - \left(1 - \frac{1}{k}\right)^k \geq 1 - \frac{1}{e}.
$$

# H. Expanded Discussions of Other Related Works

# H.1. Relation to (Streeter and Golovin, 2008)

In (Streeter and Golovin, 2008), an offline iterative greedy algorithm was adapted for the knapsack constraint. In addition to the differences between adversarial and stochastic CMAB problem formulations and regret definitions discussed in Section 5, there are two key differences between the regret bounds of $\mathrm{OG^o}$ in (Streeter and Golovin, 2008) and the regret bounds for the proposed adaptations C-ETC-B, C-ETC-Y, C-ETC-K, and C-ETC-S, making them incomparable.

The first key difference is that Streeter and Golovin (2008) only adapted an offline iterative greedy algorithm that in general does not achieve a constant approximation. The algorithm is a natural extension of the offline algorithm proposed by Nemhauser et al. (1978) for cardinality constraints.
It iteratively adds elements based on density (marginal gain divided by cost). However, Khuller et al. (1999) implicitly showed that unless the greedy procedure happens to use up the whole budget exactly, which in general does not happen, the procedure will not achieve a constant approximation ratio. The offline algorithms we adapted (Sviridenko, 2004; Yaroslavtsev et al., 2020) all use that iterative greedy procedure as a sub-routine, but then augment the output with elements chosen based on values (instead of densities). While the iterative greedy sub-routine is straightforward to adapt the way $\mathrm{OG^o}$ did (with a caveat described below as the second difference), it is unclear how the additional augmentation should be implemented. One possibility would be to add additional expert algorithms, but they would not be sequential (the augmentations are distinct and independent of each other).

The second key difference is that while we considered "hard" knapsack constraints (i.e., every action/subset must be within budget), $\mathrm{OG^o}$ was only designed to handle knapsack constraints in expectation, where the expectation is over the algorithm's randomness. More specifically, in each round, the algorithm constructs the action by sampling base arms one by one with certain probabilities, so that the expected budget used overall is $B$. That means $\mathrm{OG^o}$ is allowed to select actions whose cost is larger than the budget $B$.

In (Golovin et al., 2014), the authors propose an algorithm for the adversarial setting with submodular rewards under a matroid constraint (neither knapsack nor matroid constraints are a special case of the other).

# H.2. Relation to (Niazadeh et al., 2021)

For the particular problem of submodular CMAB with knapsack constraints, we do not believe Niazadeh et al. (2021)'s results hold directly, because of the required sub-problem structure:

- First, Niazadeh et al.
(2021)'s framework requires a known number of sub-problems.
- Second, efficient offline approximation algorithms for knapsack-constrained submodular maximization do not have the "purely" iterative-greedy sub-problem structure required by Niazadeh et al. (2021)'s framework.

In the following, we explain these two points in detail. Before we discuss those points, we would also like to point out that Niazadeh et al. (2021)'s framework is for adversarial problems while ours is for stochastic problems; the two settings (and consequently the frameworks) are different, and neither can be specialized to the other. Thus, our novelty is not just in proposing a framework through which example offline algorithms can be adapted that were not adaptable under theirs, but in proposing a framework for stochastic problems where only bandit feedback (the reward) is available. Finally, we also note that the general regret guarantees of our framework are different from those of Niazadeh et al. (2021); this may in part be due to the different problem setups.

1. Niazadeh et al. (2021)'s framework requires the offline algorithm to have a known number of sub-problems (iterations). For offline approximation algorithms for submodular maximization with knapsack constraints, the number of iterations (corresponding to the number of elements added to the greedy set) is not known ahead of time. It varies with different problem instances. It can be upper-bounded (by $B / c_{\min}$), but for certain problem instances the greedy set chosen could be a single element, while for other instances it could be a set of large cardinality. A potential, albeit partial, "fix" that we believe could work for iterative greedy offline approximation algorithms whose number of iterations is upper-bounded but not known a priori is to extend Niazadeh et al. (2021)'s framework to weakened constraints, namely constraints that are only required to be met in expectation.
We believe this could be done since Streeter and Golovin (2008)'s adaptation of the standard greedy algorithm for cardinality-constrained submodular maximization was similar to (Niazadeh et al., 2021)'s adaptation of the same algorithm. For knapsack problems, Streeter and Golovin (2008) adapted an offline iterative greedy algorithm but only considered satisfying the knapsack constraint in expectation. They took an upper bound on the number of iterations and converted each (potential) iteration into an experts algorithm. For each experts algorithm, the adaptation included an element with some probability so that the expected cost equaled the budget. We anticipate a similar modification could be made to Niazadeh et al. (2021)'s framework as well.

2. Niazadeh et al. (2021)'s framework requires the offline algorithm to have an iterative greedy structure, where the output of the $i$-th subproblem is a feasible solution that is fed into the $(i + 1)$-th subproblem. However, unlike the case of the cardinality constraint, the structures of the offline algorithms for knapsack-constrained submodular maximization are not "purely" iterative greedy. They do all employ an iterative greedy sub-routine that adds elements based on density (marginal value divided by cost), but that iterative greedy sub-routine alone does not achieve a constant approximation (Khuller et al., 1999). Offline approximation algorithms for this problem that achieve constant approximations all employ additional steps.

- For example, Yaroslavtsev et al. (2020)'s procedure takes the output of the iterative greedy procedure and then augments it with additional elements. It may be possible to implement this second sub-routine as separate sub-problems (each adapted using expert algorithms), but these new subproblems would not be part of a chain of subproblems with the output of one feeding into the next.
And these extra sub-problems would have different characteristics than those in the main iterative-greedy chain.
- For Sviridenko (2004)'s partial enumeration procedure, which is known to achieve the best approximation ratio of $1 - 1/e$ but with high complexity, the solution is selected by re-running an iterative greedy sub-routine on every feasible subset of size at most three. This algorithm might be recast as a purely iterative greedy procedure, for instance with the first sub-problem selecting the subset of size at most three (so over $\binom{n}{3}$ possible choices), but this would lead to extremely slow updates for the first sub-problem because of its dimension and the bandit feedback, rendering it impractical for the online setting.

# H.3. Relation to (Li et al., 2022)

The offline algorithm proposed in (Li et al., 2022) outputs a feasible solution, but to select that solution, it queries the value oracle for some subsets whose cost is above the budget. Specifically, in (Li et al., 2022), the first subroutine of the algorithm (used to bound $f(\mathrm{OPT})$) is an iterative greedy approximation algorithm whose selected set $S'$ yields an upper and lower bound on the optimal value $f(\mathrm{OPT})$, namely $\frac{1}{4} f(S') \leq f(\mathrm{OPT}) \leq 2 f(S')$. Those bounds on the optimal value are then used in later sub-routines.

That first sub-routine iterates over all elements once (in an arbitrary order) and adds element $u$ if $\frac{f(u \mid S')}{c(u)} \geq \frac{f(S')}{B}$, where $S'$ is the currently selected set. There is no enforcement that the $S'$ constructed in this sub-routine remains feasible (i.e., that its cost is under budget, $c(S') \leq B$). For example, consider $f$ being linear (thus submodular), $B = 2$, $f(1) = 1$, $f(2) = 2$, $f(3) = 3$, and $c(u) = 1$ for all $u \in \{1, 2, 3\}$, where the value $u$ corresponds to the order in which the elements would be evaluated.
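This one-pass threshold rule can be checked directly on the counter-example; a minimal sketch (function and variable names are ours, not Li et al. (2022)'s actual code):

```python
def threshold_greedy(elements, f, cost, budget):
    """One pass in arbitrary order: add u if f(u|S')/c(u) >= f(S')/B.

    Feasibility (c(S') <= B) is never checked, which is the point of
    the counter-example.
    """
    selected = []
    for u in elements:
        current = f(selected)
        marginal = f(selected + [u]) - current
        if marginal / cost(u) >= current / budget:
            selected.append(u)
    return selected

# Linear (hence submodular) f with f(u) = u, unit costs, budget B = 2.
f = lambda S: sum(S)
selected = threshold_greedy([1, 2, 3], f, cost=lambda u: 1, budget=2)
print(selected)  # [1, 2, 3] -- total cost 3 exceeds B = 2
```

Every element clears the threshold, so the pass ends with an infeasible set of cost 3.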
That subroutine will select $\{1, 2, 3\}$ as the final set, with a total cost of $3 > B$. Even if that sub-routine were modified to make two passes, one pass to evaluate marginal values and a second pass over the elements in order of decreasing marginal value, the counter-example could be extended by making all marginal values equal, with linear conditional values constructed similarly to the example above.

As mentioned, the sub-routine in question is only used to upper and lower bound $f(\mathrm{OPT})$. Looser upper and lower bounds could be used so that only feasible sets are evaluated, namely $\max_u f(u) \leq f(\mathrm{OPT}) \leq \frac{B}{c_{\min}} \max_u f(u)$, but these would not approximate $f(\mathrm{OPT})$ to within a constant factor, which we expect would make the computational complexity depend on the budget $B$, so the algorithm would no longer run in "clean linear time".

# H.4. Related Work on Stochastic Submodular CMAB with Semi-Bandit Feedback

There are also a number of works that require additional "semi-bandit" feedback. For combinatorial MAB with submodular rewards, a common type of semi-bandit feedback is the marginal gain (Lin et al., 2015; Yue and Guestrin, 2011; Yu et al., 2016; Takemori et al., 2020a), which enables the learner to take actions of maximal cardinality or budget, receive a corresponding reward, and gain information not just on the set but on individual elements. In the full-bandit setting we consider, to greedily build a solution we need to spend time taking small-cardinality actions to estimate their quality, incurring regret.

# I. Experiments with Song Recommendation

We test our methods on the application of song recommendation using the Million Song Dataset (Bertin-Mahieux et al., 2011). In this problem, the agent aims to recommend a bundle of songs such that they are liked by as many users as possible.
![](images/6751c4aeb006e2cae413514a8c51eb21efcf7d8de658720771ebfa95d98923c3.jpg)
(a)

![](images/894e7f52edd9277347f6c7c83f2cb3753e446e915a08eb010c96a81379a5e76a.jpg)
(b)

![](images/3371419637ace41fd8c417196b1cf59bab6653deb23216077afe29c98e2521cc.jpg)
(c)

![](images/5737b693d4b79478bf30005dd6b6b99a32b563612ed84ed346e4d8d6a586af9e.jpg)
(d)
Figure 2: Plots for the song recommendation example. (a) and (b) are comparison results for cumulative regret as a function of time horizon $T$. (c) and (d) are moving-average plots (window size 100) of instantaneous reward as a function of $t$. The gray dashed lines in (a) and (b) represent $y = aT^{2/3}$ for various values of $a$ for visual reference. The gray dashed lines in (c) and (d) represent expected rewards for the action chosen by an offline greedy algorithm.

# Data Set Description and Experiment Details

From the Million Song Dataset, we extract the 20 most popular songs and the 100 most active users. As in Yue and Guestrin (2011), we model the system as having a set of topics (or genres) $\mathcal{G}$ with $|\mathcal{G}| = d$, and for each item $e \in \Omega$ there is a feature vector $x(e) \coloneqq (P_g(e))_{g \in \mathcal{G}} \in \mathbb{R}^d$ that represents the information coverage on different genres. For each genre $g$, we define the probabilistic coverage function $f_g(S) = 1 - \prod_{e \in S} (1 - P_g(e))$ and define the reward function $f(S) = \sum_i w_i f_i(S)$ with linear coefficients $w_i$. The vector $w \coloneqq [w_1, \dots, w_d]$ represents user preferences over genres. In calculating $P_g(e)$ and $w$, we use the same formulas as for $\bar{w}(e, g)$ and $\theta^*$ in Hiranandani et al. (2020). Like Takemori et al. (2020b), we define the cost of a song to be its length (in seconds). For each user, the stochastic reward of a set $S$ is sampled from a Bernoulli distribution with parameter $f(S)$. For the total reward, we take the average over all users.
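The probabilistic coverage reward described above can be sketched as follows (the arrays and toy numbers are illustrative, not taken from the dataset; weights are assumed to sum to at most 1 so the mean is a valid Bernoulli parameter):

```python
import numpy as np

def coverage_reward(S, P, w):
    """f(S) = sum_g w[g] * (1 - prod_{e in S} (1 - P[e, g]))."""
    miss = np.prod(1.0 - P[S, :], axis=0)  # per-genre probability of no coverage
    return float(w @ (1.0 - miss))

def sampled_reward(S, P, w, rng):
    """One user's Bernoulli reward with mean f(S)."""
    return int(rng.binomial(1, coverage_reward(S, P, w)))

# Toy instance: two songs, two genres.
P = np.array([[0.5, 0.0],
              [0.5, 0.0]])  # P[e, g]: song e's coverage of genre g
w = np.array([1.0, 0.0])    # user preference over genres
print(coverage_reward([0, 1], P, w))  # 0.75
```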
When making the plots, we use statistics taken from 10 runs. + +# Results and Discussion + +Figures 2a and 2b show average cumulative regret curves for C-ETC-K (in blue), C-ETC-Y (in orange), and $\mathrm{OG^o}$ (in green) for different horizon $T$ values when the budget constraint $B$ is 500 and 800, respectively. Figures 2c and 2d are the instantaneous reward plots over a single horizon $T = 215,443$ . Again, C-ETC significantly outperforms $\mathrm{OG^o}$ for all time horizons and budgets considered. We again estimated the slopes for both methods on log-log scale plots. Over the horizons tested, $\mathrm{OG^o}$ 's cumulative regret (averaged over ten runs) has a growth rate above 0.85. The growth rates of C-ETC-K for budgets 500 and 800 are 0.70 and 0.73, respectively. The growth rates of C-ETC-Y for budgets 500 and 800 are 0.70 and 0.71, respectively. \ No newline at end of file diff --git a/aframeworkforadaptingofflinealgorithmstosolvecombinatorialmultiarmedbanditproblemswithbanditfeedback/images.zip b/aframeworkforadaptingofflinealgorithmstosolvecombinatorialmultiarmedbanditproblemswithbanditfeedback/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..4109c0914b088ee88306a52100f6ebab50b4efe9 --- /dev/null +++ b/aframeworkforadaptingofflinealgorithmstosolvecombinatorialmultiarmedbanditproblemswithbanditfeedback/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:7c11e3f4b432dae3bfbe50d9e3f962abd04a7f0aea0a41231f0ba77a2c5e6e12 +size 1587147 diff --git a/aframeworkforadaptingofflinealgorithmstosolvecombinatorialmultiarmedbanditproblemswithbanditfeedback/layout.json b/aframeworkforadaptingofflinealgorithmstosolvecombinatorialmultiarmedbanditproblemswithbanditfeedback/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..bb08117f24eddacd3a75d7d9706f39494ac16862 --- /dev/null +++ 
b/aframeworkforadaptingofflinealgorithmstosolvecombinatorialmultiarmedbanditproblemswithbanditfeedback/layout.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:b0f77def8d593a3ea7bf517510142914c8f57d2373113411171d67742ad3405b +size 1573064 diff --git a/afullyfirstordermethodforstochasticbileveloptimization/2cc651f2-f6b4-4e49-a59e-4ad91f32f9e7_content_list.json b/afullyfirstordermethodforstochasticbileveloptimization/2cc651f2-f6b4-4e49-a59e-4ad91f32f9e7_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..c5e6d699c6877bccfa090df6cc2d3a6d106f9762 --- /dev/null +++ b/afullyfirstordermethodforstochasticbileveloptimization/2cc651f2-f6b4-4e49-a59e-4ad91f32f9e7_content_list.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:b912f9e3a41254f993313ba0fc53da5063df66909e562fe86e0a16b42d7b1c5b +size 285500 diff --git a/afullyfirstordermethodforstochasticbileveloptimization/2cc651f2-f6b4-4e49-a59e-4ad91f32f9e7_model.json b/afullyfirstordermethodforstochasticbileveloptimization/2cc651f2-f6b4-4e49-a59e-4ad91f32f9e7_model.json new file mode 100644 index 0000000000000000000000000000000000000000..0f90e7ac4b6765c020fe3120f3972987d3e80ffe --- /dev/null +++ b/afullyfirstordermethodforstochasticbileveloptimization/2cc651f2-f6b4-4e49-a59e-4ad91f32f9e7_model.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:dbfc307009bf425ab6f649b12874d4507cf29ab5753729fe2047e2170aec11d1 +size 327750 diff --git a/afullyfirstordermethodforstochasticbileveloptimization/2cc651f2-f6b4-4e49-a59e-4ad91f32f9e7_origin.pdf b/afullyfirstordermethodforstochasticbileveloptimization/2cc651f2-f6b4-4e49-a59e-4ad91f32f9e7_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..6e2d5571f018601dd34fae75bd427ef9b7e7c540 --- /dev/null +++ b/afullyfirstordermethodforstochasticbileveloptimization/2cc651f2-f6b4-4e49-a59e-4ad91f32f9e7_origin.pdf @@ -0,0 +1,3 @@ +version 
https://git-lfs.github.com/spec/v1 +oid sha256:db8226dffe6bcdb6aa00fb9fd7c666f50078ff0170c50ee5e4b5bb977ced7987 +size 555814 diff --git a/afullyfirstordermethodforstochasticbileveloptimization/full.md b/afullyfirstordermethodforstochasticbileveloptimization/full.md new file mode 100644 index 0000000000000000000000000000000000000000..4a60da3501cd4dc225f4e1b84183e47042c5593a --- /dev/null +++ b/afullyfirstordermethodforstochasticbileveloptimization/full.md @@ -0,0 +1,1677 @@ +# A Fully First-Order Method for Stochastic Bilevel Optimization + +Jeongyeol Kwon$^1$ Dohyun Kwon$^2$ Stephen Wright$^1$ Robert Nowak$^1$ + +# Abstract + +We consider stochastic unconstrained bilevel optimization problems when only first-order gradient oracles are available. While numerous optimization methods have been proposed for tackling bilevel problems, existing methods either tend to require possibly expensive calculations involving Hessians of lower-level objectives, or lack rigorous finite-time performance guarantees. In this work, we propose a Fully First-order Stochastic Approximation ($\mathsf{F}^2\mathsf{SA}$) method, and study its non-asymptotic convergence properties. Specifically, we show that $\mathsf{F}^2\mathsf{SA}$ converges to an $\epsilon$ -stationary solution of the bilevel problem after $\epsilon^{-7/2}$ , $\epsilon^{-5/2}$ , and $\epsilon^{-3/2}$ iterations (each iteration using $O(1)$ samples) when stochastic noise is present in both objectives, only in the upper-level objective, or in neither (the deterministic setting), respectively. We further show that if we employ momentum-assisted gradient estimators, the iteration complexities can be improved to $\epsilon^{-5/2}$ , $\epsilon^{-2}$ , and $\epsilon^{-3/2}$ , respectively. We demonstrate the superior practical performance of the proposed method over existing second-order based approaches on MNIST data hyper-cleaning experiments. + +# 1.
Introduction + +Bilevel optimization (Colson et al., 2007) arises in many important applications that have two-level hierarchical structures, including meta-learning (Rajeswaran et al., 2019), hyper-parameter optimization (Franceschi et al., 2018; Bao et al., 2021), model selection (Kunapuli et al., 2008; Giovannelli et al., 2021), adversarial networks (Goodfellow et al., 2020; Gidel et al., 2018), game theory (Stackelberg et al., 1952), and reinforcement learning (Konda & Tsitsiklis, 1999; Sutton & Barto, 2018). + +Proceedings of the $40^{th}$ International Conference on Machine Learning, Honolulu, Hawaii, USA. PMLR 202, 2023. Copyright 2023 by the author(s). + +Bilevel optimization can be generally formulated as the following minimization problem: + +$$ +\min _ {x \in X} F (x) := f (x, y ^ {*} (x)) +$$ + +$$ +\text {s . t .} \quad y ^ {*} (x) \in \arg \min _ {y \in \mathbb {R} ^ {d _ {y}}} g (x, y), \tag {P} +$$ + +where $f$ and $g$ are continuously differentiable functions and $X \subseteq \mathbb{R}^{d_x}$ is a convex set. The outer objective $F$ depends on $x$ both directly and indirectly via $y^*(x)$ , which is a solution of the lower-level problem of minimizing another function $g$ that is parametrized by $x$ . Throughout the paper, we assume that $X = \mathbb{R}^{d_x}$ (that is, there are no explicit constraints on $x$ ) and that $g(x, y)$ is strongly convex in $y$ , so that $y^*(x)$ is uniquely well-defined for all $x \in X$ . + +Among various approaches to (P), iterative procedures have been predominant due to their simplicity and potential scalability in large-scale applications. Initiated by (Ghadimi & Wang, 2018), a flurry of recent works study efficient iterative procedures and their finite-time performance for solving (P); see, e.g., (Chen et al., 2021; Hong et al., 2020; Khanduri et al., 2021; Chen et al., 2022; Dagréou et al., 2022; Guo et al., 2021; Sow et al., 2022; Ji et al., 2021; Yang et al., 2021).
The underlying idea is based on an algorithm of (stochastic) gradient descent type, applied to $F$ , that is, + +$$ +x _ {k + 1} = x _ {k} - \alpha_ {k} \nabla F (x _ {k}), +$$ + +with some appropriate step-sizes $\{\alpha_k\}$ . Direct application of this approach requires us to compute or estimate the so-called hyper-gradient of $F$ at $x$ , which is + +$$ +\nabla F (x) = \nabla_ {x} f (x, y ^ {*} (x)) - \nabla_ {x y} ^ {2} g (x, y ^ {*} (x)) \nabla_ {y y} ^ {2} g (x, y ^ {*} (x)) ^ {- 1} \nabla_ {y} f (x, y ^ {*} (x)). \tag {1} +$$ + +There are two major obstacles in computing (1). The first obstacle is that for every given $x$ , we need to search for the optimal solution $y^{*}(x)$ of the lower-level problem, which results in updating the lower variable $y$ multiple times before updating $x$ . To tackle this issue, several ideas have been proposed in (Ghadimi & Wang, 2018; Hong et al., 2020; Chen et al., 2021) to effectively track $y^{*}(x)$ without waiting for too many inner iterations before updating $x$ (we discuss this further in Section 1.2). Following in the spirit of this approach, we show that a single-loop style algorithm can still be implemented using only first-order gradient estimators. + +The second obstacle, which is the main focus of this work, centers around the presence of second-order derivatives of $g$ in (1). Existing approaches mostly require an explicit extraction of second-order information from $g$ , with a major focus on estimating the Jacobian and inverse Hessian efficiently under stochastic noise (Ji et al., 2021; Chen et al., 2022; Dagréou et al., 2022). We are particularly interested in regimes in which such operations are costly and prohibitive (Mehra & Hamm, 2021; Giovannelli et al., 2021). Some existing works avoid the second-order computation and use only the first-order information of both upper and lower objectives; see (Giovannelli et al., 2021; Sow et al., 2022; Liu et al., 2021b; Ye et al., 2022).
These works either lack a complete finite-time analysis (Giovannelli et al., 2021; Liu et al., 2021b) or are applicable only to deterministic functions (Ye et al., 2022; Sow et al., 2022). + +Our goal in this paper is to study a fully first-order approach for stochastic bilevel optimization. We propose a gradient-based approach that avoids the estimation of the Jacobian and Hessian of $g$ , and finds an $\epsilon$ -stationary solution of $F$ using only first-order gradients of $f$ and $g$ . Further, the number of inner iterations remains constant throughout all outer iterations of our algorithm. We provide a finite-time analysis of our method with explicit convergence rates. To our best knowledge, this work is the first to establish non-asymptotic convergence guarantees for stochastic bilevel optimization using only first-order gradient oracles. + +# 1.1. Overview of Main Results + +The starting point of our approach is to convert $(\mathbf{P})$ to an equivalent constrained single-level version: + +$$ +\min _ {x \in X, y \in \mathbb {R} ^ {d _ {y}}} \quad f (x, y) \quad \text {s . t .} \quad g (x, y) - g ^ {*} (x) \leq 0, \tag {P'} +$$ + +where $g^{*}(x)\coloneqq g(x,y^{*}(x))$ . The Lagrangian $\mathcal{L}_{\lambda}$ for $(\mathbf{P}^{\prime})$ with multiplier $\lambda >0$ is + +$$ +\mathcal {L} _ {\lambda} (x, y) := f (x, y) + \lambda \left(g (x, y) - g ^ {*} (x)\right). +$$ + +We can minimize $\mathcal{L}_{\lambda}$ for a given $\lambda$ by, for example, running (stochastic) gradient descent. As noted in (Ye et al., 2022), the gradient of $\mathcal{L}_{\lambda}$ can be computed using only gradients of $f$ and $g$ , and thus the entire procedure can be implemented with first-order derivatives alone. In fact, such a reformulation has been attempted in several recent works (e.g., Liu et al., 2021a; Sow et al., 2022; Ye et al., 2022).
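To make the first-order property concrete, the sketch below instantiates $\mathcal{L}_\lambda$ on a toy quadratic instance of our own choosing (it is not from the paper: $f(x,y) = \frac{1}{2}\|x\|^2 + \frac{1}{2}\|y-b\|^2$ and $g(x,y) = \frac{1}{2}\|y - Cx\|^2$, so $y^*(x) = Cx$) and checks by finite differences that $\nabla_x \mathcal{L}_\lambda$ is assembled from gradients of $f$ and $g$ alone:

```python
import numpy as np

rng = np.random.default_rng(2)
dx, dy, lam = 3, 4, 10.0
C = rng.standard_normal((dy, dx))
b = rng.standard_normal(dy)

f = lambda x, y: 0.5 * x @ x + 0.5 * (y - b) @ (y - b)
g = lambda x, y: 0.5 * (y - C @ x) @ (y - C @ x)
y_star = lambda x: C @ x                                  # lower-level solution
L = lambda x, y: f(x, y) + lam * (g(x, y) - g(x, y_star(x)))

def grad_x_L(x, y):
    """grad_x f + lam*(grad_x g(x,y) - grad_x g(x,y*(x))).
    The last term is grad g*(x), which reduces to grad_x g(x, y*(x))
    because grad_y g vanishes at y*(x)."""
    return x + lam * (C.T @ (C @ x - y) - C.T @ (C @ x - y_star(x)))

# Central finite-difference check of grad_x L at a random point.
x, y = rng.standard_normal(dx), rng.standard_normal(dy)
eps, num = 1e-6, np.zeros(dx)
for i in range(dx):
    e = np.zeros(dx); e[i] = eps
    num[i] = (L(x + e, y) - L(x - e, y)) / (2 * eps)
assert np.allclose(grad_x_L(x, y), num, atol=1e-4)
```

No Jacobian or Hessian of $g$ appears anywhere; the only lower-level information used is $y^*(x)$ itself, which the algorithms below track with an auxiliary sequence.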
However, the challenge in handling the constrained version $(\mathbf{P}^{\prime})$ is to find an appropriate value of the multiplier $\lambda$ . Unfortunately, the desired solution $x^{*} = \arg \min_{x} F(x)$ can only be obtained at $\lambda = \infty$ (this is a consequence of the fact that the so-called constraint qualifications (Wright et al., 1999) are not satisfied for $(\mathbf{P}^{\prime})$ ). Yet with $\lambda = \infty$ , $\mathcal{L}_{\lambda}(x,y)$ has unbounded smoothness, which prevents us from employing gradient-descent style approaches. For these reasons, + +none of the previously proposed algorithms can obtain a consistent estimator for the original problem $\min_x F(x)$ without access to second derivatives of $g$ . + +Nonetheless, we find that $(\mathbf{P}^{\prime})$ is the key to deriving a consistent estimator that converges to an $\epsilon$ -stationary point of $F$ in finite time without access to second derivatives. The main idea is to start with an initial value $\lambda = \lambda_0 > 0$ and gradually increase it on subsequent iterations: at iteration $k$ , $\lambda_{k} = O(k^{b})$ for some $b \in (0,1]$ . The success of this approach depends crucially on the growth rate captured by the parameter $b$ . On one hand, fast growth of $\lambda_{k}$ removes the bias quickly. On the other hand, fast growth of $\lambda_{k}$ forces a fast decay of step-sizes due to the growing nonsmoothness of $\mathcal{L}_{\lambda_k}$ , which slows down the overall convergence. + +Our main technical contribution is to characterize an explicit growth rate of $\lambda_{k}$ that optimizes the trade-off between bias and step-sizes, and to provide a non-asymptotic convergence guarantee with explicit rates for the proposed algorithm. + +- We propose a fully first-order method, $\mathsf{F}^2\mathsf{SA}$ , for stochastic bilevel optimization.
$\mathsf{F}^2\mathsf{SA}$ is a single-loop style algorithm: for every outer-variable update, we update the inner variables only a constant number of times. +- We characterize explicit convergence rates of $\mathsf{F}^2\mathsf{SA}$ in different stochastic regimes. It converges to an $\epsilon$ -stationary point of $(\mathbf{P})$ after $\tilde{O}(\epsilon^{-3.5})$ , $\tilde{O}(\epsilon^{-2.5})$ , or $\tilde{O}(\epsilon^{-1.5})$ iterations if both $\nabla f$ and $\nabla g$ contain stochastic noise, if only access to $\nabla f$ is noisy, or if we are in the deterministic setting, respectively. These complexities can be improved to $\tilde{O}(\epsilon^{-2.5})$ , $\tilde{O}(\epsilon^{-2})$ , or $\tilde{O}(\epsilon^{-1.5})$ , respectively, if momentum or variance-reduction techniques are employed. The crux of the analysis is to understand the effect of the value of the multipliers $\lambda_k$ on step-sizes, noise variances, and bias. +- We demonstrate the proposed algorithm on a data hyper-cleaning task for MNIST. Even though our theoretical guarantees are not better than those of existing methods that use second-order information, we illustrate that $\mathsf{F}^2\mathsf{SA}$ can even outperform such methods in practice. + +# 1.2. Related Work + +Bilevel optimization has a long and rich history since its first introduction in (Bracken & McGill, 1973). A number of algorithms have been proposed for bilevel optimization. Classical results include the approximation descent method (Vicente et al., 1994) and penalty function methods (Ishizuka & Aiyoshi, 1992; Anandalingam & White, 1990; White & Anandalingam, 1993), for instance; see (Colson et al., 2007) for a comprehensive overview. These results often deal with several special cases of bilevel optimization and only provide asymptotic guarantees.
Note that the penalty function methods in (Ishizuka & Aiyoshi, 1992; Anandalingam & White, 1990; White & Anandalingam, 1993) discuss the landscape within the infinitesimal neighborhood of local minimizers, and their results do not imply practical approaches for finding a stationary point of non-convex objectives $F$ . + +Recently, several papers study gradient-based optimization methods for bilevel optimization and their non-asymptotic analysis. The first non-asymptotic analysis of a double-loop algorithm was given in (Ghadimi & Wang, 2018), where an inner problem finds an approximate solution of $y^{*}(x)$ given $x$ , which is used to evaluate an approximation of $\nabla F(x)$ . Furthermore, (Ghadimi & Wang, 2018) uses the Neumann series approximation to estimate the Hessian inverse when we only have access to stochastic oracles of second-order derivatives. + +The paper (Ghadimi & Wang, 2018) was followed by a flurry of work that improved their result in numerous ways. For instance, (Hong et al., 2020; Chen et al., 2021; 2022; Ji et al., 2021) develop single-loop style updates by properly choosing two step-sizes for the inner and outer iterations, along with improved sample complexity, i.e., the total number of accesses to first- and second-order stochastic oracles. The overall convergence rate is further optimized by using variance-reduction and momentum techniques (Khanduri et al., 2021; Dagréou et al., 2022; Guo et al., 2021; Yang et al., 2021; Huang & Huang, 2021). We do not aim to compete with the convergence rates obtained from this line of work, since all of these methods have access to second-order derivatives, even though some computational cost might be saved if good automatic differentiation packages (Margossian, 2019) are available. Rather, we avoid the need for second-order information altogether, allowing a simple algorithm with low per-iteration complexity for large-scale applications.
+ +The results most closely related to ours can be found in (Ye et al., 2022; Sow et al., 2022). (Sow et al., 2022) considers a primal-dual approach for $(\mathbf{P}^{\prime})$ , but their main focus is to get a biased solution when $g$ is only convex (not strongly convex), so the lower-level problem may have multiple solutions. Their analysis is restricted to the case in which the overall Lagrangian is strongly convex in $x$ (which is not usually guaranteed), and they do not provide any guarantees in terms of the true objective $F$ . More recent work in (Ye et al., 2022) is the closest to ours, but they only consider deterministic gradient oracles, and do not provide convergence guarantees in terms of $F$ . Moreover, they prove a convergence guarantee of $O(k^{-1/4})$ , whereas we show an improved guarantee of $\tilde{O}(k^{-2/3})$ in the deterministic case. + +There are also other lines of work that study a simpler version of the bilevel problem which has no coupling between the two variables $x$ and $y$ (e.g., see (Ferris & Mangasarian, 1991; Solodov, 2007; Jiang et al., 2022)). In (Amini & Yousefian, 2019a;b), the Lagrangian formulation is exploited with an iteratively increasing multiplier. Note that the nature of the single-variable bilevel formulation is different from (P), as the former is only interesting when the lower-level problem admits a multiple (convex) solution set. To our best knowledge, the idea of iteratively increasing $\lambda_{k}$ with a non-asymptotic guarantee is new in the context of solving (P), and has the merit of avoiding (possibly) expensive second-order computation. + +# 2. Preliminaries + +We state several assumptions on $(\mathbf{P})$ to specify the problem class of interest. We consider $(\mathbf{P})$ with the following assumptions on the objective functions: + +Assumption 1. The functions $f$ and $g$ satisfy the following conditions. + +1. $f$ is continuously differentiable and $l_{f,1}$ -smooth. +2.
$g$ is continuously differentiable and $l_{g,1}$ -smooth. +3. For every $\bar{x}\in X$ , $\| \nabla_yf(\bar{x},y)\| \leq l_{f,0}$ for all $y$ . + +We focus on well-conditioned bilevel optimization problems, i.e., when $F(x)$ is well-defined, continuous, and smooth. The following assumption has been standard for well-conditioned bilevel problems (Ghadimi & Wang, 2018): + +Assumption 2. The following holds for $g$ : + +1. There exists $\mu_g > 0$ such that for all $\bar{x} \in X$ , $g(\bar{x}, y)$ is $\mu_g$ -strongly convex in $y$ . +2. $g$ is twice continuously differentiable, and $\nabla^2 g$ is $l_{g,2}$ -Lipschitz jointly in $(x,y)$ . + +We assume that we can access first-order information of the objective functions only through stochastic gradient oracles: + +Assumption 3. We access the gradients of the objective functions via unbiased estimators $\nabla f(x,y;\zeta),\nabla g(x,y;\phi)$ depending on random variables $\zeta$ and $\phi$ , respectively, where $\mathbb{E}[\nabla f(x,y;\zeta)] = \nabla f(x,y)$ and $\mathbb{E}[\nabla g(x,y;\phi)] = \nabla g(x,y)$ . The variances of the stochastic gradient estimators are bounded: + +$$ +\begin{array}{l} \mathbb {E} [ \| \nabla f (x, y; \zeta) - \nabla f (x, y) \| ^ {2} ] \leq \sigma_ {f} ^ {2}, \\ \mathbb {E} [ \| \nabla g (x, y; \phi) - \nabla g (x, y) \| ^ {2} ] \leq \sigma_ {g} ^ {2}. \\ \end{array} +$$ + +Throughout the paper, we assume that Assumptions 1-3 hold unless specified otherwise. We use the following definition as the optimality criterion for solving (P). + +Definition 2.1 ($\epsilon$ -stationary point). A point $x$ is called $\epsilon$ -stationary if $\| \nabla F(x) \|^{2} \leq \epsilon$ , where $\nabla F$ is defined in (1). + +Notation. We say $a_k \asymp b_k$ if $a_k$ and $b_k$ decrease (or increase) at the same rate as $k \to \infty$ , i.e., $\lim_{k \to \infty} a_k / b_k = \Theta(1)$ . Throughout the paper, $\| \cdot \|$ denotes the Euclidean norm on finite-dimensional space. + +# 3.
Algorithm + +In this section, we develop an algorithm that converges to a stationary point of the bilevel problem (i.e., a stationary point of $F(x) = f(x,y^{*}(x))$ ) and makes use only of gradients of $f$ and $g$ . Recall the equivalent formulation (P'). To see how we can avoid second-order derivatives, we examine the gradient $\nabla \mathcal{L}_{\lambda}$ : + +$$ +\begin{array}{l} \nabla_ {x} \mathcal {L} _ {\lambda} (x, y) = \nabla_ {x} f (x, y) + \lambda (\nabla_ {x} g (x, y) - \nabla g ^ {*} (x)), \\ \nabla_ {y} \mathcal {L} _ {\lambda} (x, y) = \nabla_ {y} f (x, y) + \lambda \nabla_ {y} g (x, y). \\ \end{array} +$$ + +Note that + +$$ +\begin{array}{l} \nabla g ^ {*} (x) = \nabla_ {x} g (x, y ^ {*} (x)) + \nabla_ {x} y ^ {*} (x) \nabla_ {y} g (x, y ^ {*} (x)) \\ = \nabla_ {x} g (x, y ^ {*} (x)), \\ \end{array} +$$ + +due to the optimality condition for $g$ at $y^{*}(x)$ . Thus, we could consider optimizing $\mathcal{L}_{\lambda}(x,y)$ by introducing an auxiliary variable $z$ that chases $y^{*}(x)$ , and setting up an alternative bilevel formulation with outer-level objective $\mathcal{L}_{\lambda}(x',z)$ , outer variable $x' = (x,y)$ , and inner variable $z$ . However, such an approach settles in a landscape different from that of $F(x)$ , resulting in a bias. The question is how tightly we can control this bias without compromising too much smoothness of the alternative function $\mathcal{L}_{\lambda}$ , which affects the overall step-size design and noise variance. + +To control the bias, we need a better understanding of how the functions $\mathcal{L}_{\lambda}$ and $F(x)$ are related. Let us introduce an auxiliary function $\mathcal{L}_{\lambda}^{*}$ defined as: + +$$ +\mathcal {L} _ {\lambda} ^ {*} (x) := \min _ {y} \mathcal {L} _ {\lambda} (x, y).
+$$ + +Note that if $\lambda > 2l_{f,1} / \mu_g$ , then for every $\bar{x} \in X$ , $\mathcal{L}_{\lambda}(\bar{x},y)$ is at least $(\lambda \mu_g / 2)$ -strongly convex in $y$ , and therefore its minimizer $y_{\lambda}^{*}(x)$ is uniquely well-defined: + +$$ +y _ {\lambda} ^ {*} (x) := \arg \min _ {y} \mathcal {L} _ {\lambda} (x, y). \tag {2} +$$ + +Since $F(x) = \lim_{\lambda \to \infty} \mathcal{L}_{\lambda}^{*}(x)$ for every $x \in X$ , we could expect that $\mathcal{L}_{\lambda}^{*}(x)$ is a well-defined proxy of $F(x)$ for sufficiently large $\lambda > 0$ . The following lemma confirms this intuition. + +# Algorithm 1 F $^2$ SA + +Input: step sizes: $\{\alpha_{k},\gamma_{k}\}$ , multiplier difference sequence: $\{\delta_k\}$ , inner-loop iteration count: $T$ , step-size ratio: $\xi$ , initializations: $\lambda_0,x_0,y_0,z_0$ + +1: for $k = 0 \dots K - 1$ do +2: $z_{k,0} \gets z_k, y_{k,0} \gets y_k$ +3: for $t = 0 \dots T - 1$ do +4: $z_{k,t + 1}\gets z_{k,t} - \gamma_kh_{gz}^{k,t}$ +5: $y_{k,t + 1}\gets y_{k,t} - \alpha_k(h_{fy}^{k,t} + \lambda_kh_{gy}^{k,t})$ +6: end for +7: $z_{k + 1}\gets z_{k,T},y_{k + 1}\gets y_{k,T}$ +8: $x_{k + 1}\gets x_k - \xi \alpha_k(h_{fx}^k +\lambda_k(h_{gxy}^k -h_{gxz}^k))$ +9: $\lambda_{k + 1}\gets \lambda_k + \delta_k$ +10: end for + +Lemma 3.1. For any $x \in X$ and $\lambda \geq 2l_{f,1} / \mu_g$ , $\nabla \mathcal{L}_{\lambda}^{*}(x)$ is given by + +$$ +\begin{array}{l} \nabla_ {x} \mathcal {L} _ {\lambda} (x, y _ {\lambda} ^ {*} (x)) = \nabla_ {x} f (x, y _ {\lambda} ^ {*} (x)) \\ + \lambda (\nabla_ {x} g (x, y _ {\lambda} ^ {*} (x)) - \nabla_ {x} g (x, y ^ {*} (x))). \\ \end{array} +$$ + +Furthermore, we have + +$$ +\| \nabla F (x) - \nabla \mathcal {L} _ {\lambda} ^ {*} (x) \| \leq C _ {\lambda} / \lambda , +$$ + +where $C_{\lambda} \coloneqq \frac{4l_{f,0}l_{g,1}}{\mu_g^2}\left(l_{f,1} + \frac{2l_{f,0}l_{g,2}}{\mu_g}\right)$ .
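On a toy quadratic instance (our own illustrative choice: $f(x,y) = \frac{1}{2}\|x\|^2 + \frac{1}{2}\|y-b\|^2$, $g(x,y) = \frac{1}{2}\|y-Cx\|^2$, for which $y^*(x) = Cx$, $y_\lambda^*(x) = (b + \lambda Cx)/(1+\lambda)$, and $\nabla F(x) = x + C^\top(Cx - b)$ are all available in closed form), the $C_\lambda/\lambda$ bias bound of Lemma 3.1 can be observed directly:

```python
import numpy as np

rng = np.random.default_rng(3)
dx, dy = 3, 4
C = rng.standard_normal((dy, dx))
b = rng.standard_normal(dy)
x = rng.standard_normal(dx)

grad_F = x + C.T @ (C @ x - b)          # true hypergradient for this instance

def grad_L_star(lam):
    """grad L*_lam(x) via Lemma 3.1; the term grad_x g(x, y*(x)) is zero here."""
    y_lam = (b + lam * (C @ x)) / (1 + lam)   # closed-form y*_lam(x)
    return x + lam * (C.T @ (C @ x - y_lam))

gaps = [np.linalg.norm(grad_F - grad_L_star(lam)) for lam in (10, 100, 1000)]
# The bias shrinks like 1/lambda: each tenfold increase in lambda
# cuts the gap roughly tenfold.
assert gaps[0] > 9 * gaps[1] > 81 * gaps[2]
```

On this instance the gap is exactly $\|C^\top(Cx-b)\|/(1+\lambda)$, matching the $O(1/\lambda)$ rate of the lemma.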
+ +Importantly, $\nabla \mathcal{L}_{\lambda}^{*}(x)$ can be computed with first-order derivatives of $f$ and $g$ alone. Thus any first-order method that finds a stationary point of $\mathcal{L}_{\lambda}^{*}(x)$ approximately follows the trajectory of $x$ updated with the exact $\nabla F(x)$ , with a bias of $O(1 / \lambda)$ . + +Our strategy is to use $\nabla \mathcal{L}_{\lambda}^{*}(x)$ as a proxy for $\nabla F(x)$ when generating the sequence of iterates $\{x_{k}\}$ . Accordingly, we introduce sequences $\{y_{k}\}$ and $\{z_{k}\}$ that approximate $y_{\lambda_k}^* (x_k)$ and $y^{*}(x_{k})$ , respectively. We gradually increase $\lambda_{k}$ with $k$ so that the bias in the sequence $\{x_{k}\}$ converges to 0. + +Our Fully First-order Stochastic Approximation ($\mathsf{F}^2\mathsf{SA}$) method is shown in Algorithm 1. We emphasize that the method works with stochastic gradients that are independent unbiased estimators of gradients, i.e., + +$h_{gz}^{k,t} \coloneqq \nabla_y g(x_k, z_{k,t}; \phi_z^{k,t})$ , $h_{fy}^{k,t} \coloneqq \nabla_y f(x_k, y_{k,t}; \zeta_y^{k,t})$ , + +$h_{gy}^{k,t} \coloneqq \nabla_y g(x_k, y_{k,t}; \phi_y^{k,t}), h_{gxy}^k \coloneqq \nabla_x g(x_k, y_{k+1}; \phi_{xy}^k)$ , + +$h_{f x}^{k}:= \nabla_{x}f(x_{k},y_{k + 1};\zeta_{x}^{k}),h_{g x z}^{k}:= \nabla_{x}g(x_{k},z_{k + 1};\phi_{x z}^{k}).$ + +The algorithm can set $T = 1$ in conjunction with an appropriate choice of $\xi$ , allowing a fully single-loop update for all variables. + +# 3.1. Step-Size Design Principle + +We describe how we design the step-sizes for Algorithm 1 to achieve convergence to an $\epsilon$ -stationary point of $F$ . Several conditions must be satisfied. As will be shown in the analysis, with the $(\lambda_k\mu_g / 2)$ -strong convexity of $\mathcal{L}_{\lambda_k}$ in $y$ , a one-step inner iteration of $y_{k,t}$ is a contraction mapping toward $y_{\lambda ,k}^{*}$ with rate $1 - O(\mu_g\beta_k)$ .
![](images/bda0bcbd7ad4c533e01245474f4030ba96f3c9d0448b1c8978efa31096bea3fe.jpg) +Figure 1: $y_{k}$ should move faster than $y_{\lambda_k}^* (x_k)$ moves, and stay within an $O(1 / \lambda_k)$ -ball around $y_{\lambda_k}^* (x_k)$ . + +Henceforth, we often use the notation $\beta_{k} := \alpha_{k}\lambda_{k}$ , which is the effective step-size for updating $y_{k}$ . For simplicity, we denote $y_{\lambda,k}^{*} := y_{\lambda_k}^*(x_k)$ and $y_{k}^{*} := y^{*}(x_{k})$ . + +We now describe the specific rules. First, since the step size for $x_{k}$ is essentially $\xi \alpha_{k}$ , and since we may need to traverse an arbitrary distance from the initial point $x_{0}$ to the optimal value of $x$ , we need $\alpha_{k} = \Omega(1 / k)$ . On the other hand, since $\beta_{k} = \alpha_{k} \lambda_{k}$ is the effective step size for updating $y_{k}$ , we need $\beta_{k} < O(1 / l_{g,1}) = O(1)$ . Together, these observations imply that the rate of growth of $\lambda_{k}$ cannot exceed $O(k)$ . + +Second, note that $\| x_{k + 1} - x_k\|$ is (roughly) proportional to + +$$ +\| \nabla F (x _ {k}) \| + C _ {\lambda} / \lambda_ {k} + \lambda_ {k} \| y _ {k} - y _ {\lambda , k} ^ {*} \| + \lambda_ {k} \| z _ {k} - y _ {k} ^ {*} \|. +$$ + +This rate is optimized when $\| y_{k} - y_{\lambda ,k}^{*}\| \asymp \| z_{k} - y_{k}^{*}\| \asymp \lambda_{k}^{-2}$ . Thus, the ideal growth rate for $\lambda_{k}$ is $\| y_{k} - y_{\lambda ,k}^{*}\|^{-1 / 2}$ or $\| z_{k} - y_{k}^{*}\|^{-1 / 2}$ . We will design the rates of convergence of $y_{k}$ and $z_{k}$ to be the same, i.e., $\beta_{k}\asymp \gamma_{k}$ . For instance, when we have stochastic noise in the gradient estimates of $g$ , i.e., $\sigma_g^2 >0$ , the expected convergence rate of $\| y_{k} - y_{\lambda_{k}}^{*}\|^{2}$ is $O(\beta_k)$ , the standard rate for stochastic gradient methods on strongly convex functions. This suggests $\lambda_{k}\asymp \beta_{k}^{-1 / 4}$ as the ideal rate of growth for $\lambda_{k}$ .
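The design principles of this subsection can be exercised end to end. The sketch below runs a deterministic, single-loop ($T = 1$) instance of Algorithm 1 on a toy quadratic of our own choosing ($f(x,y) = \frac{1}{2}\|x\|^2 + \frac{1}{2}\|y-b\|^2$, $g(x,y) = \frac{1}{2}\|y-Cx\|^2$); the schedules (constant $\beta_k$, $\lambda_k \propto k^{1/3}$, small $\xi$) are hand-picked for illustration rather than the tuned choices of the analysis:

```python
import numpy as np

rng = np.random.default_rng(4)
dx, dy = 3, 4
C = rng.standard_normal((dy, dx))
b = rng.standard_normal(dy)
# For this instance F(x) = 0.5||x||^2 + 0.5||Cx - b||^2, minimized at:
x_opt = np.linalg.solve(np.eye(dx) + C.T @ C, C.T @ b)

# Hand-picked schedules: constant effective step beta = alpha_k * lambda_k,
# polynomially growing lambda_k, and a small step-size ratio xi (T = 1).
beta, gamma, xi, K = 0.2, 0.2, 0.05, 20000
x, y, z = np.zeros(dx), np.zeros(dy), np.zeros(dy)
for k in range(K):
    lam = 2.0 * (k + 1) ** (1.0 / 3.0)
    alpha = beta / lam
    z = z - gamma * (z - C @ x)                    # z step: grad_y g(x, z)
    y = y - alpha * ((y - b) + lam * (y - C @ x))  # y step: grad_y f + lam*grad_y g
    gx = x + lam * (C.T @ (C @ x - y) - C.T @ (C @ x - z))  # proxy for grad F
    x = x - xi * alpha * gx                        # x step

# err should be small, dominated by the O(1/lambda_K) bias, even though no
# second derivative of g was ever evaluated.
err = np.linalg.norm(x - x_opt)
```

Note how the $x$ update uses only the difference $\nabla_x g(x_k, y_{k+1}) - \nabla_x g(x_k, z_{k+1})$, exactly as in line 8 of Algorithm 1.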
+ +# Algorithm 2 F $^3$ SA + +Input: step sizes: $\{\alpha_{k},\gamma_{k}\}$ , multiplier difference sequence: $\{\delta_k\}$ , momentum-weight sequence $\{\eta_k\}$ , step-size ratio: $\xi$ , initialization: $\lambda_0,x_0,y_0,z_0$ + +1: for $k = 0 \dots K - 1$ do +2: $z_{k + 1}\gets z_k - \gamma_k\widetilde{h}_{gz}^k$ +3: $y_{k + 1}\gets y_k - \alpha_k(\tilde{h}_{fy}^k +\lambda_k\tilde{h}_{gy}^k)$ +4: $x_{k + 1}\gets x_k - \xi \alpha_k(\tilde{h}_{fx}^k +\lambda_k(\tilde{h}_{gxy}^k -\tilde{h}_{gxz}^k))$ +5: $\lambda_{k + 1}\gets \lambda_k + \delta_k$ +6: end for + +The crux of Algorithm 1 is how well $y_{k}$ (and $z_{k}$ ) can chase $y_{\lambda_k}^* (x_k)$ (resp. $y^{*}(x_{k})$ ) when $x_{k}$ and $\lambda_{k}$ are changing at every iteration. We characterize first how fast $y_{\lambda}^{*}(x)$ moves in relation to the movements of $\lambda$ and $x$ . + +Lemma 3.2. For any $\lambda_2\geq \lambda_1\geq 2l_{f,1} / \mu_g$ and $x_{1},x_{2}\in X$ , we have + +$$ +\left\| y _ {\lambda_ {1}} ^ {*} \left(x _ {1}\right) - y _ {\lambda_ {2}} ^ {*} \left(x _ {2}\right) \right\| \leq \frac {2 \left(\lambda_ {2} - \lambda_ {1}\right)}{\lambda_ {1} \lambda_ {2}} \frac {l _ {f , 0}}{\mu_ {g}} + l _ {\lambda , 0} \| x _ {2} - x _ {1} \|, +$$ + +for some $l_{\lambda,0} \leq 3l_{g,1} / \mu_g$ . + +For Algorithm 1 to converge to a desired point, $y_{k}$ should move sufficiently fast toward the current target $y_{\lambda ,k}^{*}$ at every iteration, dominating the movement of the target $y_{\lambda ,k}^{*}$ that results from updates to $x_{k}$ and $\lambda_{k}$ (see Figure 1). At a minimum, the following condition should hold (in expectation): + +$$ +\left\| y _ {k + 1} - y _ {\lambda , k} ^ {*} \right\| < \left\| y _ {k} - y _ {\lambda , k - 1} ^ {*} \right\|.
+$$ + +Since $\| y_{k + 1} - y_{\lambda ,k}^{*}\|^{2}$ can be bounded via $T$ steps of $1 - O(\mu_g\beta_k)$ contractions, starting from $y_{k}$ , we require + +$$ +\left(1 - O \left(T \mu_ {g} \beta_ {k}\right)\right) \| y _ {k} - y _ {\lambda , k} ^ {*} \| ^ {2} < \| y _ {k} - y _ {\lambda , k - 1} ^ {*} \| ^ {2}. +$$ + +Now, applying the bound in Lemma 3.2, the minimal condition is given by: + +$$ +\begin{array}{l} \left\| y _ {\lambda , k - 1} ^ {*} - y _ {\lambda , k} ^ {*} \right\| \leq \left(l _ {f, 0} / \mu_ {g}\right) \cdot \left(\delta_ {k} / \lambda_ {k} ^ {2}\right) + l _ {\lambda , 0} \| x _ {k} - x _ {k - 1} \| \\ \leq T \mu_ {g} \beta_ {k} \| y _ {k} - y _ {\lambda , k - 1} ^ {*} \|. \\ \end{array} +$$ + +Note that $\| y_{k + 1} - y_{\lambda ,k}^*\|$ should decay faster than $\lambda_k^{-1}$ . Otherwise, the bias in updating $x_{k}$ using $y_{k}$ (to estimate $\nabla \mathcal{L}_{\lambda_k}^*$ ) grows like $\lambda_k\| y_{k + 1} - y_{\lambda ,k}^*\|$ , and this amount might blow up. Also, it can be easily seen that $\| x_{k} - x_{k - 1}\| = \Omega (\xi \beta_{k}\| y_{k} - y_{\lambda ,k - 1}^{*}\|)$ . We can thus derive two simple conditions: + +$$ +\frac {\delta_ {k}}{\lambda_ {k}} \leq O _ {\mathrm {P}} (1) \cdot \beta_ {k}, \quad \frac {\xi}{T} < O _ {\mathrm {P}} (1), +$$ + +where the $O_{\mathrm{P}}(1)$ are instance-dependent constants. If $\lambda_{k}$ grows at some polynomial rate, then $\delta_k / \lambda_k = O(1 / k)$ and the first condition is satisfied provided that $\beta_{k} = \Omega (1 / k)$ . The second condition indicates the number of inner iterations $T$ required for each outer iteration. We can set $T = 1$ (thus making the algorithm single-loop) by setting $\xi$ sufficiently small. Alternatively, we can set $\xi = 1$ and choose $T > 1$ to depend on some instance-specific parameters. + +# 3.2.
Extension: Integrating Momentum

Given the simple structure of Algorithm 1, we can integrate variance-reduction techniques to improve the overall convergence rates. One relevant technique is the momentum-assisting technique of (Khanduri et al., 2021) for stochastic bilevel optimization. To simplify the presentation, we consider a fully single-loop variant by setting $T = 1$.

To apply the momentum technique, we only need to replace the simple unbiased gradient estimators $h$ with momentum-assisted gradient estimators $\tilde{h}$. For instance, $\tilde{h}_z^k$ can be defined with a proper momentum-weight sequence $\eta_k \in (0,1]$ as follows:

$$
\begin{array}{l} \tilde{h}_z^k := \nabla_y g\left(x_k, z_k; \phi_z^k\right) \\ + \left(1 - \eta_k\right) \left( \tilde{h}_z^{k-1} - \nabla_y g(x_{k-1}, z_{k-1}; \phi_z^k) \right). \\ \end{array}
$$

Other quantities $\tilde{h}_{fy},\tilde{h}_{gy},\tilde{h}_{fx},\tilde{h}_{gxy},\tilde{h}_{gxz}$ are defined similarly, with the same momentum-weight sequence. We defer the full description of those quantities to Appendix C. The version of our algorithm that incorporates momentum is called Faster Fully First-order Stochastic Approximation (F $^3$ SA); it is described in Algorithm 2, where we simply replace $h$ with $\tilde{h}$. Note that we have additional momentum-weight parameters $\{\eta_k\}$.

# 4. Main Results

In this section we provide non-asymptotic convergence guarantees for the proposed algorithms. For Algorithm 1, we prove in Theorem 4.1 that the weighted sum of $\| \nabla F(x_k) \|^2$ in expectation is bounded from above. By choosing suitable step sizes, the estimate yields a convergence rate. Dependence on stochastic noises is explicated in Corollary 4.2. Similar results with better convergence rates and weaker assumptions are proved for Algorithm 2; see Theorem 4.3 and Corollary 4.4.

# 4.1.
Main Result for Algorithm 1

Two mild assumptions are required for exploiting the smoothness of $y_{\lambda}^{*}(x)$.

Assumption 4. The gradient with respect to $x$ is bounded for functions $f$ and $g$:

1. For every $\bar{y}$, $\| \nabla_x f(x, \bar{y}) \| \leq l_{f,0}$ for all $x \in X$.
2. For every $\bar{y}$, $\| \nabla_x g(x, \bar{y}) \| \leq l_{g,0}$ for all $x \in X$.

Assumption 5. $f$ is twice continuously differentiable, and $\nabla^2 f$ is $l_{f,2}$-Lipschitz in $(x,y)$.

The smoothness of $y_{\lambda}^{*}(x)$ is used to keep the number of effective inner iterations constant across all outer iterations, as in (Chen et al., 2021).

Before we state our convergence result, let us define some additional notation. We denote the second-moment bound of the $x$ update, $x_{k + 1} - x_k$, as $M \coloneqq \max(l_{f,0}^2 + \sigma_f^2, l_{g,0}^2 + \sigma_g^2)$. We also denote $l_{*,0} = \max(1, l_{\lambda_0,0})$ and $l_{*,1} = l_{\lambda_0,1}$, where $\lambda_0$ is the starting value of the Lagrange multiplier.

We are now ready to state our main results for Algorithm 1.

Theorem 4.1. Suppose that Assumptions 1 - 5 hold, and that the parameters and step sizes are chosen such that $\lambda_0 \geq 2l_{f,1} / \mu_g$ and

$$
\beta_k \leq \gamma_k \leq \min\left( \frac{1}{4 l_{g,1}}, \frac{1}{4 T \mu_g} \right), \quad \alpha_k \leq \frac{1}{2 \xi l_{F,1}}, \tag{3a}
$$

$$
\frac{\xi}{T} < c_{\xi} \mu_g \cdot \max\left( l_{g,1} l_{*,0}^2, l_{*,1} \sqrt{M} \right)^{-1}, \quad \frac{\delta_k}{\lambda_k} \leq \frac{T \mu_g \beta_k}{16} \tag{3b}
$$

for all $k \geq 0$ with a proper absolute constant $c_{\xi} > 0$.
Then for any $K \geq 1$, the iterates generated by Algorithm 1 satisfy

$$
\begin{array}{l} \sum_{k=0}^{K-1} \xi \alpha_k \mathbb{E}[ \| \nabla F(x_k) \|^2 ] \leq O_P(1) \cdot \sum_k \xi \alpha_k \lambda_k^{-2} \\ + O_P\left( \sigma_f^2 \right) \cdot \sum_k \alpha_k^2 \lambda_k + O_P\left( \sigma_g^2 \right) \cdot \sum_k \gamma_k^2 \lambda_k + O_P(1), \\ \end{array}
$$

where the $O_{P}(1)$ are instance-dependent constants.

The proof of Theorem 4.1 is given in Appendix B. At a high level, our analysis investigates the decrease in expectation (with $k$) of the potential function $\mathbb{V}_k$ defined by

$$
\begin{array}{l} \mathbb{V}_k := \left( F(x_k) - F^* \right) + l_{g,1} \lambda_k \left\| y_k - y_{\lambda_k}^*(x_k) \right\|^2 \\ + \frac{\lambda_k l_{g,1}}{2} \| z_k - y^*(x_k) \|^2, \tag{4} \\ \end{array}
$$

where $F^{*}$ is the minimum value of $F$ and $y_{\lambda}^{*}$ and $y^{*}$ are given in (2) and $(\mathbf{P})$, respectively. That is, in addition to the decrease in the value of $F$ and in $\|z_{k} - y^{*}(x_k)\|$, both standard in the literature, we track the error between $y_{k}$ and $y_{\lambda_k}^* (x_k)$, since $y_{\lambda ,k}^{*}$ is the key to computing the true $\nabla F(x_{k})$ using only gradients. It is also shown in the proof that the right scaling factor for the tracking errors is $O_{\mathbb{P}}(\lambda_k)$.

We now describe how we design step sizes. Note that the conditions (3a) are standard conditions on the step sizes for gradient-based methods with smooth functions. The conditions (3b) arise from the double-loop nature of the problem, as discussed in Section 3.1.
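For intuition about how these conditions interact, the following is a minimal NumPy sketch of a single-loop ($T = 1$, small $\xi$) variant of Algorithm 1 on a toy deterministic quadratic bilevel instance. The instance, the schedules ($a = 1/3$, $c = 0$, a polynomially growing $\lambda_k$), and all constants are illustrative choices, not the tuned values of the paper.

```python
import numpy as np

# Toy instance (illustrative): f(x, y) = 0.5*||y - b||^2 + 0.5*||x||^2 and
# g(x, y) = 0.5*||y - A @ x||^2, so y*(x) = A @ x, mu_g = 1, and
# F(x) = 0.5*||A @ x - b||^2 + 0.5*||x||^2.
rng = np.random.default_rng(0)
d = 3
A = rng.standard_normal((d, d))
b = rng.standard_normal(d)

x, y, z = np.zeros(d), np.zeros(d), np.zeros(d)
xi = 0.05                                 # small step-size ratio, so T = 1 suffices

for k in range(3000):
    alpha = 0.5 / (k + 10) ** (1 / 3)     # deterministic regime: a = 1/3
    gamma = 0.5                           # c = 0
    lam = (k + 10) ** (1 / 3)             # polynomial growth of the multiplier

    z = z - gamma * (z - A @ x)                        # chase y*(x)
    y = y - alpha * ((y - b) + lam * (y - A @ x))      # chase y*_lambda(x)
    # grad_x f + lam * (grad_x g(x, y) - grad_x g(x, z)) simplifies here to:
    x = x - xi * alpha * (x + lam * (A.T @ (z - y)))

grad_F = x + A.T @ (A @ x - b)   # true hyper-gradient of F on this instance
```

As $\lambda_k$ grows while the step sizes decay, the $O(1/\lambda_k^2)$ bias in the $x$ update vanishes and `grad_F` becomes small; Corollary 4.2(c) quantifies how fast in the deterministic regime.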
In accordance with the step-size design rule (3), we propose the following:

$$
\begin{array}{l} T = \max\left( 32, (c_{\xi} \mu_g)^{-1} \max\left( l_{g,1} l_{*,0}^2, \sqrt{M}\, l_{*,1} \right) \right), \\ \xi = 1, \quad \alpha_k = \frac{c_{\alpha}}{(k + k_0)^a}, \quad \gamma_k = \frac{c_{\gamma}}{(k + k_0)^c}, \tag{5} \\ \end{array}
$$

and for the multiplier increase sequence $\{\delta_k\}$,

$$
\delta_k = \min\left( \frac{T \mu_g}{16} \alpha_k \lambda_k^2, \frac{\gamma_k}{2 \alpha_k} - \lambda_k \right), \tag{6}
$$

with some rate constants $a, c \in [0,1]$ and $a \geq c$. We design the starting value $\lambda_0$ of the Lagrange multiplier and the constants as

$$
k_0 \geq \frac{4}{\mu_g} \max\left( \frac{\xi l_{F,1}}{2}, T l_{g,1}, l_{f,1} \right), \quad \lambda_0 \geq \frac{2 l_{f,1}}{\mu_g},
$$

$$
c_{\gamma} = \frac{1}{\mu_g k_0^{1-c}}, \quad c_{\alpha} = \frac{1}{2 \lambda_0 \mu_g k_0^{1-a}}. \tag{7}
$$

These choices simplify the convergence rate analysis, but any set of choices can be used as long as it satisfies (3). With the choices above, we can specify the rate of convergence in three different regimes of stochastic noises.

Corollary 4.2. Suppose that the conditions of Theorem 4.1 hold, with step-sizes designed as in (5), (6), and (7). Let $R$ be a random variable drawn from a uniform distribution over $\{0, \dots, K - 1\}$. Then the following convergence results hold after $K$ iterations of Algorithm 1.
+

(a) If stochastic noises are present in both the upper-level objective $f$ and the lower-level objective $g$ (i.e., $\sigma_f^2, \sigma_g^2 > 0$), then by setting $a = 5/7$ and $c = 4/7$ in (5) and (7), we obtain $\mathbb{E}[\|\nabla F(x_R)\|^2] \asymp \frac{\log K}{K^{2/7}}$.
(b) If stochastic noises are present only in $f$ (i.e., $\sigma_f^2 > 0$, $\sigma_g^2 = 0$), then by setting $a = 3/5$ and $c = 2/5$ in (5) and (7), we obtain $\mathbb{E}[\|\nabla F(x_R)\|^2] \asymp \frac{\log K}{K^{2/5}}$.
(c) If we have access to exact information about $f$ and $g$ (i.e., $\sigma_f^2 = \sigma_g^2 = 0$), then by setting $a = 1/3$ and $c = 0$ in (5) and (7), we obtain $\| \nabla F(x_K) \|^2 \asymp \frac{\log K}{K^{2/3}}$.

As these results show, stronger convergence results can be proved when noise is present in fewer places in the problem. If stochastic noise is present only in the upper level rather than in both levels, the rate can be improved from $O(k^{-2/7})$ to $O(k^{-2/5})$. In deterministic settings (no noise), we get a rate of $O(k^{-2/3})$. This rate compares to the $O(k^{-1})$ rate that can be obtained with second-order methods.

# 4.2. Main Result for Algorithm 2

When we use the momentum-assisting technique, we require the stochastic functions to be well-behaved as well.

Assumption 6. Assumption 1 holds for $f(x,y;\zeta)$ and $g(x,y;\phi)$ with probability 1.

One technical benefit of the momentum technique is that we no longer require the bounded-gradient assumption w.r.t. $x$ (Assumption 4) or the Hessian smoothness of $f$ (Assumption 5) for the analysis, as we no longer make use of the smoothness of $y_{\lambda}^{*}$. We show the following convergence result for Algorithm 2.

Theorem 4.3. Suppose Assumptions 1-3 and 6 hold.
If step-size parameters are chosen such that $\lambda_0 \geq 2l_{f,1} / \mu_g$ and + +$$ +\begin{array}{l} \beta_ {k} \leq \gamma_ {k} \leq \frac {1}{1 6 l _ {g , 1}}, \xi \alpha_ {k} \leq \frac {1}{l _ {F , 1}}, \\ \xi \leq c _ {\xi} \frac {\mu_ {g}}{l _ {g , 1} l _ {* , 0} ^ {2}}, \frac {\delta_ {k}}{\lambda_ {k}} \leq \frac {\mu_ {g} \beta_ {k}}{8}, \tag {8a} \\ \end{array} +$$ + +$$ +\begin{array}{l} \max \left(2 \frac {\gamma_ {k - 1} - \gamma_ {k}}{\gamma_ {k - 1}}, c _ {\eta} \frac {l _ {g , 1} ^ {3}}{\mu_ {g}} \gamma_ {k} ^ {2}\right) \leq \eta_ {k + 1} \leq 1, \\ \eta_ {0} = \eta_ {1} = 1, \delta_ {k} / \gamma_ {k} = o (1), \tag {8b} \\ \end{array} +$$ + +with proper absolute constants $c_{\xi}, c_{\eta} > 0$ , then for any $K \geq 1$ , the iterates generated by Algorithm 2 satisfy + +$$ +\begin{array}{l} \sum_ {k = 0} ^ {K - 1} \xi \alpha_ {k} \mathbb {E} [ \| \nabla F (x _ {k}) \| ^ {2} ] \leq O _ {\mathcal {P}} (1) \cdot \sum_ {k} \xi \alpha_ {k} \lambda_ {k} ^ {- 2} \\ + O _ {P} \left(\sigma_ {f} ^ {2}\right) \cdot \sum_ {k} \frac {\eta_ {k + 1} ^ {2}}{\gamma_ {k} \lambda_ {k}} + O _ {P} \left(\sigma_ {g} ^ {2}\right) \cdot \sum_ {k} \frac {\eta_ {k + 1} ^ {2} \lambda_ {k}}{\gamma_ {k}} + O _ {P} (1), \\ \end{array} +$$ + +where $O_{P}(1)$ are instance-dependent constants. + +The proof of Theorem 4.3 appears in Appendix C. We introduce the following step-size design, consistent with (8). 
+

$$
\alpha_k = \frac{c_{\alpha}}{(k + k_0)^a}, \quad \gamma_k = \frac{c_{\gamma}}{(k + k_0)^c}, \quad \eta_k = (k + 1)^{-2c} \tag{9a}
$$

$$
\xi \leq c_{\xi} \frac{\mu_g}{l_{g,1} l_{*,0}^2}, \quad \delta_k = \frac{\gamma_k}{\alpha_k} - \lambda_k, \quad \lambda_0 \geq \frac{2 l_{f,1}}{\mu_g}, \tag{9b}
$$

$$
k_0 \geq \frac{128}{\mu_g} \max\left( \xi l_{F,1}, l_{g,1} \sqrt{\frac{c_{\eta} l_{g,1}}{\mu_g}} \right),
$$

$$
c_{\gamma} = \frac{8}{\mu_g k_0^{1-c}}, \quad c_{\alpha} = \frac{8}{\mu_g \lambda_0 k_0^{1-a}}, \tag{9c}
$$

with some rate constants $a, c \in [0,1]$ and $a \geq c$. As a corollary, we obtain faster convergence rates for Algorithm 2 than for Algorithm 1.

Corollary 4.4. Suppose that the conditions of Theorem 4.3 hold and that Algorithm 2 is run with step sizes designed as in (9). Let $R$ be a random variable drawn from a uniform distribution over $\{0, \dots, K - 1\}$. Then the following convergence results hold after $K$ iterations of Algorithm 2.

(a) If stochastic noises are present in both the upper-level objective $f$ and the lower-level objective $g$ (i.e., $\sigma_f^2$, $\sigma_g^2 > 0$), then by setting $a = 3/5$ and $c = 2/5$ in (9), we obtain $\mathbb{E}[\| \nabla F(x_R) \|^2] \asymp \frac{\log K}{K^{2/5}}$.
(b) If stochastic noises are present only in $f$ (i.e., $\sigma_f^2 > 0$, $\sigma_g^2 = 0$), then by setting $a = 1/2$ and $c = 1/4$ in (9), we obtain $\mathbb{E}[\|\nabla F(x_R)\|^2] \asymp \frac{\log K}{K^{1/2}}$.
(c) If we have access to exact information about $f$ and $g$ (i.e., $\sigma_f^2 = \sigma_g^2 = 0$), then by setting $a = 1/3$ and $c = 0$ in (9), we obtain $\| \nabla F(x_K) \|^2 \asymp \frac{\log K}{K^{2/3}}$.

The improvements in rates are different in different stochastic regimes.
For instance, the sample complexity required

![](images/40f3d2cb1b0df6b94dd1e1bfd6dbfc1b4eada29ae5fa8fe8f7d06ab37f6ea07e.jpg)
(a)

![](images/139bb9b153a9104bfae0a4e845086bca275f5eb5c0db3b140ebf01d3dd9d6b6f.jpg)
(b)
Figure 2: Outer objective (validation) loss with label corruption rate: (a) $p = 0.1$, (b) $p = 0.3$.

to achieve an $\epsilon$-stationary point is $\tilde{O}_{\mathbb{P}}(\epsilon^{-7/2})$ without momentum and $\tilde{O}_{\mathbb{P}}(\epsilon^{-5/2})$ with momentum (an improvement by a factor of $\epsilon^{-1}$) when stochastic noises are present in both levels. In contrast, when stochastic noises are present only in the upper-level objective, the overall sample complexity is tightened from $\tilde{O}_{\mathbb{P}}(\epsilon^{-5/2})$ to $\tilde{O}_{\mathbb{P}}(\epsilon^{-2})$, an improvement by a factor of $\epsilon^{-1/2}$. Whether Algorithm 2 achieves the optimal sample complexity for fully first-order methods is an interesting topic for future work.

# 4.3. Discussion

Because our algorithms do not access second-order derivatives of $g$, their iteration convergence rate is slower, decreasing from $O(k^{-1/2})$ (e.g., (Chen et al., 2021)) to $O(k^{-2/7})$ for algorithms without momentum and from $O(k^{-2/3})$ (e.g., (Khanduri et al., 2021)) to $O(k^{-2/5})$ for algorithms with momentum. This is not unexpected, since we use less information. Our experiments, perhaps surprisingly, do not show a slowdown in convergence speed. In fact, first-order methods even outperform existing methods that use second-order information of $g$, as we show in Section 5. We add that in practice, if an $O(1/\lambda^2)$ bias in the solution is not critical to the overall performance, then we can set $\lambda_k \coloneqq \lambda$ constant at all iterations and choose more aggressive step sizes, e.g., $\alpha_k \asymp k^{-1/2}$, $\gamma_k \asymp k^{-1/2}$, as in (Chen et al., 2021). Such a strategy yields faster convergence to a certain biased point.
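For reference, the momentum-assisted recursion of Section 3.2 is a one-line update. The sketch below, with hypothetical names and a toy 1-D stochastic stream, shows the recursion together with a decaying weight schedule; it is an illustration of the estimator's form, not the analyzed algorithm.

```python
import numpy as np

def momentum_estimate(h_prev, grad_curr, grad_prev, eta):
    """One step of the momentum-assisted recursion from Section 3.2:
        h_k = grad(w_k; xi_k) + (1 - eta_k) * (h_{k-1} - grad(w_{k-1}; xi_k)),
    where grad_curr and grad_prev are evaluated with the SAME fresh sample
    xi_k at the current and previous iterates, respectively."""
    return grad_curr + (1.0 - eta) * (h_prev - grad_prev)

# Demo on a 1-D noisy quadratic: the true gradient of E[0.5*(w - zeta)^2]
# at w is w - E[zeta]. A decaying eta_k averages out the sampling noise.
rng = np.random.default_rng(0)
w_prev, w = 0.0, 0.0
h = w - (1.0 + rng.standard_normal())      # eta_0 = 1: plain stochastic gradient
for k in range(1, 5000):
    zeta = 1.0 + rng.standard_normal()     # E[zeta] = 1
    eta = (k + 1) ** (-2 / 3)              # illustrative momentum-weight schedule
    h = momentum_estimate(h, w - zeta, w_prev - zeta, eta)
    w_prev, w = w, w - 0.01 * h            # SGD step driven by the estimator
```

With $\eta_k \equiv 1$ the recursion reduces to the plain unbiased stochastic gradient; smaller $\eta_k$ reuses the previous estimate while correcting it at the new iterate.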
+

When deterministic gradient oracles are available, the authors of (Ye et al., 2022) employed the so-called dynamic-barrier method (Gong et al., 2021) to decide the value of $\lambda_{k}$ at every iteration, based on $\| \nabla_y g(x_k,z_{k + 1}) - \nabla_y g(x_k,y_{k + 1})\|$. Such an approach requires precise knowledge of the latter quantity, which is not available in stochastic settings. Our result shows that a simple design of polynomial-rate growth of $\lambda_{k}$ is sufficient; an adaptive choice is not needed for good practical performance. Further, the convergence rate reported in (Ye et al., 2022) is $k^{-1/4}$, while our result guarantees a $k^{-2/3}$ convergence rate in deterministic settings.

# 5. Numerical Experiment

We demonstrate the proposed algorithms on a data hypercleaning task involving MNIST (Deng, 2012). We are given a noisy training set $\mathcal{D}_{\mathrm{train}} := \{(\tilde{x}_i,\tilde{y}_i)\}_{i = 1}^n$ with each label $\tilde{y}_i$ randomly corrupted with probability $p < 1$. We are also given a small but clean validation set $\mathcal{D}_{\mathrm{val}} := \{(x_i,y_i)\}_{i = 1}^m$. The goal is to assign weights to each training data point so that the model trained on the weighted training set yields good performance on the validation set. This task can be formulated as a bilevel optimization problem, as follows:

$$
\min_{\lambda} \quad \sum_{i=1}^m l\left( x_i, y_i; w^* \right)
$$

$$
\mathrm{s.t.} \qquad w^* \in \arg\min_w \sum_{i=1}^n \sigma(\lambda_i) l(\tilde{x}_i, \tilde{y}_i; w) + c \| w \|^2,
$$

where $\sigma(\cdot)$ is the sigmoid function, $l(x,y;w)$ is the logistic loss with parameter $w$, and $c$ is a regularization constant. We use $n = 19000$ training samples and $m = 1000$ clean validation samples with regularization parameter $c = 0.01$.
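As a concrete reading of this formulation, the two objectives can be written down directly. The sketch below is an illustrative implementation (labels in $\{0,1\}$, hypothetical variable names), not the experiment code.

```python
import numpy as np

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

def logistic_losses(w, X, y):
    """Per-example logistic loss for labels y in {0, 1}: softplus(Xw) - y * (Xw),
    written in a numerically stable form."""
    z = X @ w
    return np.maximum(z, 0.0) + np.log1p(np.exp(-np.abs(z))) - y * z

def lower_objective(w, lam, X_train, y_train, c=0.01):
    """Inner problem: training loss with one sigmoid weight sigma(lam_i) per
    (possibly corrupted) training example, plus the L2 regularizer c * ||w||^2."""
    return np.sum(sigmoid(lam) * logistic_losses(w, X_train, y_train)) + c * np.dot(w, w)

def upper_objective(w, X_val, y_val):
    """Outer problem: plain loss on the small clean validation set."""
    return np.sum(logistic_losses(w, X_val, y_val))
```

Driving $\lambda_i \to -\infty$ sends $\sigma(\lambda_i) \to 0$, which effectively removes example $i$ from training; this is how the outer problem can suppress corrupted labels.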
We do not include momentum-assisted methods in our discussion, since we do not observe a significant improvement over the F $^2$ SA approach of Algorithm 1.

We demonstrate the performance of Algorithm 1 (F $^2$ SA) and the second-order method (SOBO) with batch sizes 50 and 500. We note that several existing second-order methods are in principle the same when momentum or variance-reduction techniques are omitted (Ghadimi & Wang, 2018; Hong et al., 2020; Chen et al., 2021), so we use the implementation of stocBiO (Ji et al., 2021) as a representative of the other second-order methods. As a baseline, we also add a result from training without the bilevel formulation (Without BO), i.e., training on all samples as usual, ignoring the label corruption. Results are shown in Figure 2. $^{1}$

Although the iteration complexity is worse for first-order methods than for SOBO, we observe that F $^2$ SA is at least on par with SOBO in this example. It can even give superior performance when the batch size is small. We conjecture that stochastic noise in the Hessian estimates becomes significantly larger than that in the gradients, degrading the performance of SOBO. In our experiment, we also observe that the use of a truncated Neumann approximation (Ghadimi & Wang, 2018) for estimating the Hessian inverse may induce non-negligible bias. In contrast, our fully first-order method F $^2$ SA is much less sensitive to small batch sizes and free of such bias.

# 6. Conclusion

In this work, we study a fully first-order method for stochastic bilevel optimization and its non-asymptotic convergence behavior. While we focus on well-conditioned bilevel problems, there are already several recent works that consider the more challenging case in which the lower-level optimization problem can be non-strongly-convex and non-smooth (Liu et al., 2021b;a; Arbel & Mairal, 2022).
The potential benefit of the first-order method over existing second-order based methods is that it can still be considered to tackle such scenarios, whereas the formula (1) is only available for well-conditioned lower-level problems. We believe it is an important future direction to study a more general class of $(\mathbf{P})$ beyond strongly-convex lower-level problems with fully first-order methods. Adding variable-dependent constraints to the lower-level problem would also lead to an interesting extension of fully first-order approaches. + +# Acknowledgement + +This work is partially supported by NSF Awards DMS 2023239 and CCF 2224213, DOE via subcontract 8F-30039 from Argonne National Laboratory, and AFOSR Award FA9550-21-1-0084. + +# References + +Amini, M. and Yousefian, F. An iterative regularized incremental projected subgradient method for a class of bilevel optimization problems. In 2019 American Control Conference (ACC), pp. 4069-4074. IEEE, 2019a. +Amini, M. and Yousefian, F. An iterative regularized mirror descent method for ill-posed nondifferentiable stochastic + +optimization. arXiv preprint arXiv:1901.09506, 2019b. +Anandalingam, G. and White, D. A solution method for the linear static stackelberg problem using penalty functions. IEEE Transactions on automatic control, 35(10):1170-1173, 1990. +Arbel, M. and Mairal, J. Non-convex bilevel games with critical point selection maps. arXiv preprint arXiv:2207.04888, 2022. +Bao, F., Wu, G., Li, C., Zhu, J., and Zhang, B. Stability and generalization of bilevel programming in hyperparameter optimization. Advances in Neural Information Processing Systems, 34:4529-4541, 2021. +Bracken, J. and McGill, J. T. Mathematical programs with optimization problems in the constraints. Operations Research, 21(1):37-44, 1973. +Chen, T., Sun, Y., and Yin, W. Closing the gap: Tighter analysis of alternating stochastic gradient methods for bilevel problems. 
Advances in Neural Information Processing Systems, 34:25294-25307, 2021. +Chen, T., Sun, Y., Xiao, Q., and Yin, W. A single-timescale method for stochastic bilevel optimization. In International Conference on Artificial Intelligence and Statistics, pp. 2466-2488. PMLR, 2022. +Colson, B., Marcotte, P., and Savard, G. An overview of bilevel optimization. Annals of operations research, 153 (1):235-256, 2007. +Dagréou, M., Ablin, P., Vaiter, S., and Moreau, T. A framework for bilevel optimization that enables stochastic and global variance reduction algorithms. arXiv preprint arXiv:2201.13409, 2022. +Deng, L. The mnist database of handwritten digit images for machine learning research [best of the web]. IEEE signal processing magazine, 29(6):141-142, 2012. +Ferris, M. C. and Mangasarian, O. L. Finite perturbation of convex programs. Applied Mathematics and Optimization, 23(1):263-273, 1991. +Franceschi, L., Frasconi, P., Salzo, S., Grazzi, R., and Pontil, M. Bilevel programming for hyperparameter optimization and meta-learning. In International Conference on Machine Learning, pp. 1568-1577. PMLR, 2018. +Ghadimi, S. and Wang, M. Approximation methods for bilevel programming. arXiv preprint arXiv:1802.02246, 2018. +Gidel, G., Berard, H., Vignoud, G., Vincent, P., and Lacoste-Julien, S. A variational inequality perspective on generative adversarial networks. In International Conference on Learning Representations, 2018. + +Giovannelli, T., Kent, G., and Vicente, L. N. Bilevel stochastic methods for optimization and machine learning: Bilevel stochastic descent and darts. arXiv preprint arXiv:2110.00604, 2021. +Gong, C., Liu, X., and Liu, Q. Automatic and harmless regularization with constrained and lexicographic optimization: A dynamic barrier approach. Advances in Neural Information Processing Systems, 34:29630-29642, 2021. +Goodfellow, I., Pouget-Abadie, J., Mirza, M., Xu, B., Warde-Farley, D., Ozair, S., Courville, A., and Bengio, Y. Generative adversarial networks. 
Communications of the ACM, 63(11):139-144, 2020. +Guo, Z., Hu, Q., Zhang, L., and Yang, T. Randomized stochastic variance-reduced methods for multitask stochastic bilevel optimization. arXiv preprint arXiv:2105.02266, 2021. +Hong, M., Wai, H.-T., Wang, Z., and Yang, Z. A two-timescale framework for bilevel optimization: Complexity analysis and application to actor-critic. arXiv preprint arXiv:2007.05170, 2020. +Huang, F. and Huang, H. Biadam: Fast adaptive bilevel optimization methods. arXiv preprint arXiv:2106.11396, 2021. +Ishizuka, Y. and Aiyoshi, E. Double penalty method for bilevel optimization problems. Annals of Operations Research, 34(1):73-88, 1992. +Ji, K., Yang, J., and Liang, Y. Bilevel optimization: Convergence analysis and enhanced design. In International Conference on Machine Learning, pp. 4882-4892. PMLR, 2021. +Jiang, R., Abolfazli, N., Mokhtari, A., and Hamedani, E. Y. A conditional gradient-based method for simple bilevel optimization with convex lower-level problem, 2022. +Khanduri, P., Zeng, S., Hong, M., Wai, H.-T., Wang, Z., and Yang, Z. A near-optimal algorithm for stochastic bilevel optimization via double-momentum. Advances in Neural Information Processing Systems, 34:30271-30283, 2021. +Konda, V. and Tsitsiklis, J. Actor-critic algorithms. Advances in neural information processing systems, 12, 1999. +Kunapuli, G., Bennett, K. P., Hu, J., and Pang, J.-S. Bilevel model selection for support vector machines. Data mining and mathematical programming, 45:129-158, 2008. +Liu, R., Liu, X., Yuan, X., Zeng, S., and Zhang, J. A value-function-based interior-point method for non-convex bilevel optimization. In International Conference on Machine Learning, pp. 6882-6892. PMLR, 2021a. + +Liu, R., Liu, Y., Zeng, S., and Zhang, J. Towards gradient-based bilevel optimization with non-convex followers and beyond. Advances in Neural Information Processing Systems, 34:8662-8675, 2021b. +Margossian, C. C. 
A review of automatic differentiation and its efficient implementation. Wiley interdisciplinary reviews: data mining and knowledge discovery, 9(4): e1305, 2019. +Mehra, A. and Hamm, J. Penalty method for inversion-free deep bilevel optimization. In *Asian Conference on Machine Learning*, pp. 347-362. PMLR, 2021. +Nesterov, Y. et al. Lectures on convex optimization, volume 137. Springer, 2018. +Rajeswaran, A., Finn, C., Kakade, S. M., and Levine, S. Meta-learning with implicit gradients. Advances in neural information processing systems, 32, 2019. +Solodov, M. An explicit descent method for bilevel convex optimization. Journal of Convex Analysis, 14(2):227, 2007. +Sow, D., Ji, K., Guan, Z., and Liang, Y. A constrained optimization approach to bilevel optimization with multiple inner minima. arXiv preprint arXiv:2203.01123, 2022. +Stackelberg, H. v. et al. Theory of the market economy. 1952. +Sutton, R. S. and Barto, A. G. Reinforcement learning: An introduction. MIT press, 2018. +Vicente, L., Savard, G., and Júdice, J. Descent approaches for quadratic bilevel programming. Journal of Optimization theory and applications, 81(2):379-399, 1994. +White, D. J. and Anandalingam, G. A penalty function approach for solving bi-level linear programs. Journal of Global Optimization, 3(4):397-419, 1993. +Wright, S., Nocedal, J., et al. Numerical optimization. Springer Science, 35(67-68):7, 1999. +Yang, J., Ji, K., and Liang, Y. Provably faster algorithms for bilevel optimization. Advances in Neural Information Processing Systems, 34:13670-13682, 2021. +Ye, M., Liu, B., Wright, S., Stone, P., and Liu, Q. Bome! bilevel optimization made easy: A simple first-order approach. arXiv preprint arXiv:2209.08709, 2022. + +# A. Auxiliary Lemmas + +All deferred proofs in the main text and appendix are directed to Appendix D. + +# A.1. Additional Notation + +
| Symbol | Meaning | Less than |
| --- | --- | --- |
| $l_{f,0}$ | Bound of $\Vert\nabla_x f\Vert$, $\Vert\nabla_y f\Vert$ | · |
| $l_{f,1}$ | Smoothness of $f$ | · |
| $l_{g,0}$ | Bound of $\Vert\nabla_x g\Vert$ | · |
| $l_{g,1}$ | Smoothness of $g$ | · |
| $\mu_g$ | Strong-convexity of $g$ | · |
| $l_{g,2}$ | Hessian-continuity of $g$ | · |
| $M_f$ | Second-order moment of $\nabla f(x,y;\zeta)$ | $l_{f,0}^2 + \sigma_f^2$ |
| $M_g$ | Second-order moment of $\nabla g(x,y;\phi)$ | $l_{g,0}^2 + \sigma_g^2$ |
| $l_{f,2}$ | Hessian-continuity of $f$ (with Assumption 5) | · |
| $l_{F,1}$ | Smoothness of $F(x)$ | $l_{*,0}\left( l_{f,1} + l_{g,1}^2/\mu_g + 2l_{f,0}l_{g,1}l_{g,2}/\mu_g^2 \right)$ |
| $l_{\lambda,0}$ | Lipschitzness of $y_\lambda^*(x)$ (for all $\lambda \geq 2l_{f,1}/\mu_g$) | $3l_{g,1}/\mu_g$ |
| $l_{\lambda,1}$ | Smoothness of $y_\lambda^*(x)$ (for $\lambda \geq 2l_{f,1}/\mu_g$ with Assumption 5) | $32(l_{g,2} + \lambda^{-1} l_{f,2})\, l_{g,1}^2/\mu_g^3$ |
| $l_{*,0}$ | $= 1 + \max_{\lambda \geq 2l_{f,1}/\mu_g} l_{\lambda,0}$ | · |
| $l_{*,1}$ | $= \max_{\lambda \geq 2l_{f,1}/\mu_g} l_{\lambda,1}$ | · |
+

Table 1: Meaning of Constants

To simplify the notation for the movements of the variables, we often use $q_{k}^{x}$, $q_{k,t}^{y}$, and $q_{k,t}^{z}$, defined as

$$
\begin{array}{l} q_k^x := \nabla_x f(x_k, y_{k+1}) + \lambda_k ( \nabla_x g(x_k, y_{k+1}) - \nabla_x g(x_k, z_{k+1}) ), \\ q_{k,t}^y := \nabla_y f(x_k, y_{k,t}) + \lambda_k \nabla_y g(x_k, y_{k,t}), \\ q_{k,t}^z := \nabla_y g(x_k, z_{k,t}). \tag{10} \\ \end{array}
$$

The above quantities are the expected movements of $x_{k}$, $y_{k,t}$, and $z_{k,t}$, respectively, when there is no stochastic noise in the gradient oracles. We also summarize symbols and their meanings for instance-specific constants in Table 1.

# A.2. Auxiliary Lemmas

We first state a few lemmas that will be useful in our main proofs.

Lemma A.1. $F(x) = f(x,y^{*}(x))$ is $l_{F,1}$-smooth, where

$$
l_{F,1} \leq l_{*,0} \left( l_{f,1} + \frac{l_{g,1}^2}{\mu_g} + \frac{2 l_{f,0} l_{g,1} l_{g,2}}{\mu_g^2} \right).
$$

Lemma A.2. For any $x$, $y$, and $\lambda > 0$, the following holds:

$$
\begin{array}{l} \| \nabla F(x) - \nabla_x \mathcal{L}_\lambda(x, y) + \nabla_{xy}^2 g(x, y^*(x))^\top \nabla_{yy}^2 g(x, y^*(x))^{-1} \nabla_y \mathcal{L}_\lambda(x, y) \| \\ \leq 2 \left( l_{g,1} / \mu_g \right) \| y - y^*(x) \| \left( l_{f,1} + \lambda \cdot \min\left( 2 l_{g,1}, l_{g,2} \| y - y^*(x) \| \right) \right). \\ \end{array}
$$

Lemma A.3.
Under Assumptions 1, 2 and 5, and for $\lambda > 2l_{f,1} / \mu_g$, the function $y_{\lambda}^{*}(x)$ is $l_{\lambda,1}$-smooth: for any $x_1, x_2 \in X$, we have

$$
\left\| \nabla y_{\lambda}^*(x_1) - \nabla y_{\lambda}^*(x_2) \right\| \leq l_{\lambda,1} \| x_1 - x_2 \|,
$$

where $l_{\lambda,1} \leq 32(l_{g,2} + \lambda^{-1}l_{f,2})l_{g,1}^2 / \mu_g^3$.

Lemma A.4. For any fixed $\lambda > 2l_{f,1} / \mu_g$, at every iteration $k$, conditioned on $\mathcal{F}_k$, we have

$$
\mathbb{E}\left[ \| y^*(x_{k+1}) - y^*(x_k) \|^2 \mid \mathcal{F}_k \right] \leq \xi^2 l_{*,0}^2 \left( \alpha_k^2 \mathbb{E}[ \| q_k^x \|^2 \mid \mathcal{F}_k ] + \alpha_k^2 \sigma_f^2 + \beta_k^2 \sigma_g^2 \right).
$$

Lemma A.5. At every $k^{th}$ iteration, conditioned on $\mathcal{F}_k$, let $v_{k}$ be a random vector decided before updating $x_{k}$. Then for any $\eta_k > 0$, we have

$$
\begin{array}{l} \mathbb{E}[ \langle v_k, y^*(x_{k+1}) - y^*(x_k) \rangle \mid \mathcal{F}_k ] \leq ( \xi \alpha_k \eta_k + M \xi^2 l_{*,1}^2 \beta_k^2 ) \, \mathbb{E}[ \| v_k \|^2 \mid \mathcal{F}_k ] \\ + \left( \frac{\xi \alpha_k l_{*,0}^2}{4 \eta_k} + \frac{\xi^2 \alpha_k^2}{4} \right) \mathbb{E}[ \| q_k^x \|^2 \mid \mathcal{F}_k ] + \frac{\xi^2}{4} ( \alpha_k^2 \sigma_f^2 + \beta_k^2 \sigma_g^2 ), \\ \end{array}
$$

where $M \coloneqq \max\left( l_{f,0}^2 + \sigma_f^2, l_{g,0}^2 + \sigma_g^2 \right)$.

Lemma A.6. Under Assumptions 1-5, at every $k^{th}$ iteration, conditioned on $\mathcal{F}_k$, let $v_{k}$ be a random vector decided before updating $x_{k}$.
Then for any $\eta_k > 0$, we have

$$
\begin{array}{l} \mathbb{E}\left[ \langle v_k, y_{\lambda_{k+1}}^*(x_{k+1}) - y_{\lambda_k}^*(x_k) \rangle \mid \mathcal{F}_k \right] \leq \left( \delta_k / \lambda_k + \xi \alpha_k \eta_k + M \xi^2 l_{\lambda_k,1}^2 \beta_k^2 \right) \mathbb{E}\left[ \| v_k \|^2 \mid \mathcal{F}_k \right] \\ + \left( \frac{\xi \alpha_k l_{*,0}^2}{4 \eta_k} + \frac{\xi^2 \alpha_k^2}{4} \right) \mathbb{E}[ \| q_k^x \|^2 \mid \mathcal{F}_k ] + \frac{\xi^2}{4} ( \alpha_k^2 \sigma_f^2 + \beta_k^2 \sigma_g^2 ) + \frac{\delta_k l_{f,0}^2}{\lambda_k^3 \mu_g^2}, \\ \end{array}
$$

where $M \coloneqq \max\left( l_{f,0}^2 + \sigma_f^2, l_{g,0}^2 + \sigma_g^2 \right)$.

# B. Main Results for Algorithm 1

In this section, we prove our key estimate, Theorem 4.1. Our aim is to find the upper bound of $\mathbb{V}_{k + 1} - \mathbb{V}_k$ for the potential function $\mathbb{V}_k$ given in (4). For $x_{k}$ and $y_{k}$ given in Algorithm 1, the following notation will be used:

$$
\mathcal{I}_k := \left\| y_k - y_{\lambda,k}^* \right\|^2 \quad \text{and} \quad \mathcal{J}_k := \left\| z_k - y_k^* \right\|^2, \tag{11}
$$

where $y_{\lambda ,k}^{*}\coloneqq y_{\lambda_{k}}^{*}(x_{k})$, $y_{k}^{*}\coloneqq y^{*}(x_{k})$, and $x^{*} = \arg \min_{x}F(x)$. Recall that $y_{\lambda}^{*}$ and $y^{*}$ are given in (2) and (P), respectively. Using the above notation, the potential function given in (4) can be rewritten as

$$
\mathbb{V}_k := \left( F(x_k) - F(x^*) \right) + \lambda_k l_{g,1} \mathcal{I}_k + \frac{\lambda_k l_{g,1}}{2} \mathcal{J}_k \tag{12}
$$

for each $k \in \mathbb{N}$.
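On instances where $y_{\lambda}^{*}(x)$ and $y^{*}(x)$ have closed forms, the potential (12) can be evaluated directly, which is convenient for numerically inspecting its behavior along iterates. The sketch below uses a toy quadratic instance; the instance, the stand-in value of $l_{g,1}$, and all names are illustrative assumptions.

```python
import numpy as np

# Toy instance (illustrative): f(x, y) = 0.5*||y - b||^2 + 0.5*||x||^2 and
# g(x, y) = 0.5*||y - A @ x||^2, so y*(x) = A @ x and mu_g = 1.
rng = np.random.default_rng(0)
d = 3
A = rng.standard_normal((d, d))
b = rng.standard_normal(d)

def F(x):
    """Hyper-objective f(x, y*(x)) for this instance."""
    return 0.5 * np.dot(A @ x - b, A @ x - b) + 0.5 * np.dot(x, x)

def potential(x, y, z, lam, F_star, l_g1):
    """Potential V_k of Eq. (12): (F - F*) + lam*l_g1*I_k + (lam*l_g1/2)*J_k."""
    y_lam_star = (b + lam * (A @ x)) / (1.0 + lam)   # argmin_y f(x,y) + lam*g(x,y)
    y_star = A @ x                                   # argmin_y g(x, y)
    I_k = np.dot(y - y_lam_star, y - y_lam_star)
    J_k = np.dot(z - y_star, z - y_star)
    return (F(x) - F_star) + lam * l_g1 * I_k + 0.5 * lam * l_g1 * J_k
```

The potential is nonnegative and vanishes exactly when $x$ minimizes $F$ while $y$ and $z$ sit at their respective targets, which is what makes it a useful progress measure in the analysis.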
In the following three subsections, we find the upper bound of $\mathbb{V}_{k+1} - \mathbb{V}_k$ in terms of $\mathcal{I}_k$ and $\mathcal{J}_k$. The proof of Theorem 4.1 is given in Section B.4.

# B.1. Estimation of $F(x_{k + 1}) - F(x_k)$

The step size $\alpha_{k}$ is designed to satisfy

$$
\text{(step-size rule)}: \quad \alpha_k \leq \frac{1}{2 \xi l_{F,1}}, \tag{13}
$$

which is essential to obtain the negative term $-\frac{\xi\alpha_k}{4}\| q_k^x\|^2$ on the right-hand side of (15). This negativity plays an important role in the proof of Theorem 4.1 in Section B.4.

On the other hand, we also impose

$$
\text{(step-size rule)}: \quad \frac{\xi}{T} \leq \frac{\mu_g}{96 l_{g,1}}. \tag{14}
$$

The terms $\| y_{k + 1} - y_{\lambda ,k}^* \|^2$ and $\| z_{k + 1} - y_k^*\|^2$ in the upper bound (15) will be estimated in Lemma B.3 and Lemma B.5, respectively.

Proposition B.1. Under the step-size rules given in (13) and (14), and with $\lambda_{k} \geq 2l_{f,1} / \mu_{g}$, it holds for each $k \in \mathbb{N}$ that

$$
\begin{array}{l} \mathbb{E}[ F(x_{k+1}) - F(x_k) \mid \mathcal{F}_k ] \leq -\frac{\xi \alpha_k}{4} \left( 2 \| \nabla F(x_k) \|^2 + \| q_k^x \|^2 \right) + \frac{T \mu_g \alpha_k \lambda_k^2}{32} \left( 2 \| y_{k+1} - y_{\lambda,k}^* \|^2 + \| z_{k+1} - y_k^* \|^2 \right) \\ + \frac{\xi^2 l_{F,1}}{2} \left( \alpha_k^2 \sigma_f^2 + \beta_k^2 \sigma_g^2 \right) + \frac{\xi \alpha_k}{2} \cdot 3 C_{\lambda}^2 \lambda_k^{-2}, \tag{15} \\ \end{array}
$$

where $q_k^x$ is given in (10), and $C_\lambda \coloneqq \frac{4l_{f,0}l_{g,1}}{\mu_g^2}\left(l_{f,1} + \frac{2l_{f,0}l_{g,2}}{\mu_g}\right)$.

Proof.
From the smoothness of $F$,

$$
\mathbb{E}\left[F(x_{k+1}) - F(x_k) \mid \mathcal{F}_k\right] \leq \mathbb{E}\left[\langle\nabla F(x_k), x_{k+1} - x_k\rangle + \frac{l_{F,1}}{2}\left\|x_{k+1} - x_k\right\|^{2} \mid \mathcal{F}_k\right].
$$

As $q_k^{x}$ satisfies $\mathbb{E}[x_{k+1} - x_k \mid \mathcal{F}_k] = -\xi\alpha_k q_k^{x}$,

$$
\begin{array}{l}
\mathbb{E}\left[F(x_{k+1}) - F(x_k) \mid \mathcal{F}_k\right] \leq -\xi\alpha_k\langle\nabla F(x_k), q_k^{x}\rangle + \frac{l_{F,1}}{2}\mathbb{E}\left[\|x_{k+1} - x_k\|^{2} \mid \mathcal{F}_k\right] \\
= -\frac{\xi\alpha_k}{2}(\|\nabla F(x_k)\|^{2} + \|q_k^{x}\|^{2} - \|\nabla F(x_k) - q_k^{x}\|^{2}) + \frac{l_{F,1}}{2}\mathbb{E}[\|x_{k+1} - x_k\|^{2} \mid \mathcal{F}_k].
\end{array}
$$

Note that

$$
\mathbb{E}\left[\|x_{k+1} - x_k\|^{2} \mid \mathcal{F}_k\right] \leq \xi^{2}\alpha_k^{2}\mathbb{E}\left[\|q_k^{x}\|^{2} \mid \mathcal{F}_k\right] + \xi^{2}\left(\alpha_k^{2}\sigma_f^{2} + \beta_k^{2}\sigma_g^{2}\right),
$$

and thus with (13) we have

$$
\begin{array}{l}
\mathbb{E}\left[F(x_{k+1}) - F(x_k) \mid \mathcal{F}_k\right] \leq -\frac{\xi\alpha_k}{2}\|\nabla F(x_k)\|^{2} - \frac{\xi\alpha_k}{4}\|q_k^{x}\|^{2} \\
\quad + \frac{\xi\alpha_k}{2}\|\nabla F(x_k) - q_k^{x}\|^{2} + \frac{\xi^{2} l_{F,1}}{2}(\alpha_k^{2}\sigma_f^{2} + \beta_k^{2}\sigma_g^{2}).
\\ \end{array}
$$

Next, we bound $\|\nabla F(x_k) - q_k^{x}\|$ using the triangle inequality:

$$
\begin{array}{l}
\left\|q_k^{x} - \nabla F(x_k)\right\| = \left\|q_k^{x} - \nabla\mathcal{L}_{\lambda_k}^{*}(x_k) + \nabla\mathcal{L}_{\lambda_k}^{*}(x_k) - \nabla F(x_k)\right\| \\
\leq \left\|\nabla_x f(x_k, y_{k+1}) - \nabla_x f(x_k, y_{\lambda,k}^{*})\right\| + \lambda_k\left\|\nabla_x g(x_k, y_{k+1}) - \nabla_x g(x_k, y_{\lambda,k}^{*})\right\| \\
\quad + \lambda_k\|\nabla_x g(x_k, z_{k+1}) - \nabla_x g(x_k, y_k^{*})\| + \|\nabla\mathcal{L}_{\lambda_k}^{*}(x_k) - \nabla F(x_k)\|.
\end{array}
$$

From Lemma 3.1, the term $\|\nabla\mathcal{L}_{\lambda_k}^{*}(x_k) - \nabla F(x_k)\|$ is bounded by $C_\lambda/\lambda_k$. Combining this with the regularity of $f$ and $g$ yields the following:

$$
\left\|q_k^{x} - \nabla F(x_k)\right\| \leq 2l_{g,1}\lambda_k\left\|y_{k+1} - y_{\lambda,k}^{*}\right\| + l_{g,1}\lambda_k\left\|z_{k+1} - y_k^{*}\right\| + C_\lambda/\lambda_k.
\tag{16}
$$

Note that $\lambda_k \geq 2l_{f,1}/\mu_g$ and $\mu_g \leq l_{g,1}$, and thus $l_{f,1} \leq \mu_g\lambda_k/2 < l_{g,1}\lambda_k$.

Finally, from the Cauchy–Schwarz inequality $(a + b + c)^{2} \leq 3(a^{2} + b^{2} + c^{2})$, we get

$$
\begin{array}{l}
\mathbb{E}[F(x_{k+1}) - F(x_k) \mid \mathcal{F}_k] \leq -\frac{\xi\alpha_k}{2}\|\nabla F(x_k)\|^{2} - \frac{\xi\alpha_k}{4}\|q_k^{x}\|^{2} \tag{17} \\
\quad + \frac{\xi\alpha_k}{2}\cdot 3C_\lambda^{2}\lambda_k^{-2} + 3\xi\alpha_k l_{g,1}\lambda_k^{2}\|z_{k+1} - y_k^{*}\|^{2} + 6\xi\alpha_k l_{g,1}\lambda_k^{2}\|y_{k+1} - y_{\lambda,k}^{*}\|^{2} + \frac{\xi^{2} l_{F,1}}{2}(\alpha_k^{2}\sigma_f^{2} + \beta_k^{2}\sigma_g^{2}).
\end{array}
$$

The step-size condition (14) concludes our claim.

# B.2. Descent Lemma for $y_k$ towards $y_{\lambda,k}^{*}$

In this section, the upper bounds of $\mathcal{I}_{k+1}$ and $\|y_{k+1} - y_{\lambda,k}^{*}\|$ are provided in Lemma B.2 and Lemma B.3, respectively. The following rule is required to ensure that $\|y_{k+1} - y_{\lambda,k+1}^{*}\|^{2}$ contracts:

$$
\text{(step-size rule)}: \quad \frac{\delta_k}{\lambda_k} \leq \frac{T\beta_k\mu_g}{32} \quad \text{and} \quad 2\xi^{2}Ml_{*,1}^{2}\beta_k^{2} < T\beta_k\mu_g/16. \tag{18}
$$

The first condition holds directly from (3b), and the second holds since $\beta_k \leq \frac{1}{4T\mu_g}$ and

$$
\frac{\xi^{2}}{T^{2}} \leq \frac{\mu_g^{2}}{8}(Ml_{*,1}^{2})^{-1},
$$

which also holds by (3b) with sufficiently small $c_\xi$.

Lemma B.2.
Under the step-size rule (18), it holds that for each $k \in \mathbb{N}$,

$$
\begin{array}{l}
\mathbb{E}\left[\mathcal{I}_{k+1} \mid \mathcal{F}_k\right] \leq \left(1 + T\beta_k\mu_g/4\right)\mathbb{E}\left[\left\|y_{k+1} - y_{\lambda,k}^{*}\right\|^{2} \mid \mathcal{F}_k\right] \\
\quad + O\left(\frac{\xi^{2} l_{*,0}^{2}\alpha_k^{2}}{\mu_g T\beta_k}\right)\mathbb{E}[\|q_k^{x}\|^{2} \mid \mathcal{F}_k] + O\left(\frac{\delta_k}{\lambda_k^{3}}\frac{l_{f,0}^{2}}{\mu_g^{2}}\right) + O\left(\xi^{2} l_{*,0}^{2}\right)\cdot\left(\alpha_k^{2}\sigma_f^{2} + \beta_k^{2}\sigma_g^{2}\right), \tag{19}
\end{array}
$$

where $\mathcal{I}_k$ and $q_k^{x}$ are given in (11) and (10), respectively.

Proof. We start from

$$
\|y_{k+1} - y_{\lambda,k+1}^{*}\|^{2} = \underbrace{\|y_{k+1} - y_{\lambda,k}^{*}\|^{2}}_{(i)} + \underbrace{\|y_{\lambda,k+1}^{*} - y_{\lambda,k}^{*}\|^{2}}_{(ii)} - \underbrace{2\langle y_{k+1} - y_{\lambda,k}^{*},\, y_{\lambda,k+1}^{*} - y_{\lambda,k}^{*}\rangle}_{(iii)}.
$$

The upper bound of $(i)$ is given in Lemma B.3 below. To bound $(ii)$, we invoke Lemma 3.2 to get

$$
\begin{array}{l}
(ii): \mathbb{E}[\|y_{\lambda,k+1}^{*} - y_{\lambda,k}^{*}\|^{2} \mid \mathcal{F}_k] \leq \frac{4\delta_k^{2}}{\lambda_k^{2}\lambda_{k+1}^{2}}\frac{l_{f,0}^{2}}{\mu_g^{2}} + l_{*,0}^{2}\mathbb{E}[\|x_{k+1} - x_k\|^{2} \mid \mathcal{F}_k] \\
\leq \frac{4\delta_k^{2}}{\lambda_k^{4}}\frac{l_{f,0}^{2}}{\mu_g^{2}} + \xi^{2} l_{*,0}^{2}(\alpha_k^{2}\mathbb{E}[\|q_k^{x}\|^{2}] + \alpha_k^{2}\sigma_f^{2} + \beta_k^{2}\sigma_g^{2}).
\end{array}
$$

For $(iii)$, recall the smoothness of $y_{\lambda}^{*}(x)$ from Lemma A.3, and apply Lemma A.6.
Setting $v_k = y_{k+1} - y_{\lambda,k}^{*}$ and $\eta_k = T\mu_g\lambda_k/(16\xi)$, we get

$$
\begin{array}{l}
(iii) \leq \left(2\delta_k/\lambda_k + T\beta_k\mu_g/8 + 2M\xi^{2} l_{*,1}^{2}\beta_k^{2}\right)\mathbb{E}[\|y_{k+1} - y_{\lambda,k}^{*}\|^{2} \mid \mathcal{F}_k] \\
\quad + \xi^{2}\left(\frac{\alpha_k^{2}}{2} + \frac{8\alpha_k^{2} l_{*,0}^{2}}{\mu_g T\beta_k}\right)\|q_k^{x}\|^{2} + \frac{\xi^{2}}{2}(\alpha_k^{2}\sigma_f^{2} + \beta_k^{2}\sigma_g^{2}) + \frac{2\delta_k}{\lambda_k^{3}}\frac{l_{f,0}^{2}}{\mu_g^{3}}.
\end{array}
$$

Summing up $(i)$, $(ii)$, and $(iii)$, we conclude

$$
\begin{array}{l}
\mathbb{E}\left[\mathcal{I}_{k+1} \mid \mathcal{F}_k\right] \leq \left(1 + 2\delta_k/\lambda_k + T\beta_k\mu_g/8 + 2M\xi^{2} l_{*,1}^{2}\beta_k^{2}\right)\mathbb{E}\left[\|y_{k+1} - y_{\lambda,k}^{*}\|^{2}\right] \\
\quad + O\left(\frac{\xi^{2} l_{*,0}^{2}\alpha_k^{2}}{\mu_g T\beta_k}\right)\|q_k^{x}\|^{2} + O\left(\frac{\delta_k}{\lambda_k^{3}}\frac{l_{f,0}^{2}}{\mu_g^{2}}\right) + O\left(\xi^{2} l_{*,0}^{2}\right)\cdot\left(\alpha_k^{2}\sigma_f^{2} + \beta_k^{2}\sigma_g^{2}\right). \tag{20}
\end{array}
$$

Lastly, the step-size rule (18) yields our conclusion.

Next, we note that $\alpha_k$ and $\beta_k$ are chosen to satisfy

$$
\text{(step-size rules)}: \quad \alpha_k \leq \frac{1}{8l_{f,1}} \quad \text{and} \quad \beta_k \leq \frac{1}{8l_{g,1}}. \tag{21}
$$

Note that $\beta_k \leq \frac{1}{8l_{g,1}}$ is given by the step-size condition (3a), and $\alpha_k \leq \frac{1}{8l_{g,1}\lambda_k} \leq \frac{1}{8l_{f,1}}$ since $\lambda_k \geq l_{f,1}/\mu_g$.

Lemma B.3.
Under the step-size rule given in (21), it holds that for each $k \in \mathbb{N}$,

$$
\mathbb{E}\left[\|y_{k+1} - y_{\lambda,k}^{*}\|^{2} \mid \mathcal{F}_k\right] \leq (1 - 3T\mu_g\beta_k/4)\mathcal{I}_k + T\left(\alpha_k^{2}\sigma_f^{2} + \beta_k^{2}\sigma_g^{2}\right). \tag{22}
$$

Proof. Since $\mathbb{E}[y_k^{(t+1)} - y_k^{(t)} \mid \mathcal{F}_k] = -\alpha_k\nabla_y q_k^{(t)} = -\alpha_k\nabla_y\mathcal{L}_{\lambda_k}(x_k, y_k^{(t)})$, we have

$$
\mathbb{E}[\|y_k^{(t+1)} - y_{\lambda,k}^{*}\|^{2} \mid \mathcal{F}_k] = \|y_k^{(t)} - y_{\lambda,k}^{*}\|^{2} - 2\alpha_k\langle\nabla_y q_k^{(t)}, y_k^{(t)} - y_{\lambda,k}^{*}\rangle + \mathbb{E}[\|y_k^{(t+1)} - y_k^{(t)}\|^{2} \mid \mathcal{F}_k].
$$

As we start from $\lambda_0 \geq 2l_{f,1}/\mu_g$, every $\mathcal{L}_{\lambda_k}$ is $(\lambda_k\mu_g/2)$-strongly convex in $y$, and we have

$$
\max\left(\frac{\lambda_k\mu_g}{2}\|y_k^{(t)} - y_{\lambda,k}^{*}\|^{2}, \frac{1}{l_{f,1} + \lambda_k l_{g,1}}\|\nabla_y q_k^{(t)}\|^{2}\right) \leq \langle\nabla_y q_k^{(t)}, y_k^{(t)} - y_{\lambda,k}^{*}\rangle.
$$

Using $\mathbb{E}[\|y_k^{(t+1)} - y_k^{(t)}\|^{2} \mid \mathcal{F}_k] \leq \alpha_k^{2}\|\nabla_y q_k^{(t)}\|^{2} + \alpha_k^{2}\sigma_f^{2} + \beta_k^{2}\sigma_g^{2}$, we have

$$
(i): \mathbb{E}[\|y_k^{(t+1)} - y_{\lambda,k}^{*}\|^{2} \mid \mathcal{F}_k] \leq (1 - 3\mu_g\beta_k/4)\|y_k^{(t)} - y_{\lambda,k}^{*}\|^{2} + (\alpha_k^{2}\sigma_f^{2} + \beta_k^{2}\sigma_g^{2}),
$$

where we use $\alpha_k(l_{f,1} + \lambda_k l_{g,1}) = \alpha_k l_{f,1} + \beta_k l_{g,1} \leq 1/4$, which follows from (21). Repeating this $T$ times, we get (22). Note that $y_{k+1} = y_k^{(T)}$ and $y_k = y_k^{(0)}$.

# B.3.
Descent Lemma for $z_k$ towards $y_k^{*}$

Similar to the previous section, we first provide the upper bound of $\mathcal{J}_{k+1}$ and then estimate the term $\|z_{k+1} - y_k^{*}\|$ that appears in the upper bound. We work with the following step-size condition:

$$
\text{(step-size rule)}: \quad 2Ml_{*,1}^{2}\xi^{2}\beta_k^{2} \leq T\mu_g\gamma_k/16. \tag{23}
$$

This condition holds since $\beta_k \leq \gamma_k$, $\beta_k \leq \frac{1}{4T\mu_g}$, and $\frac{\xi^{2}}{T^{2}} \leq \frac{\mu_g^{2}}{8}(Ml_{*,1}^{2})^{-1}$.

Lemma B.4. Under the step-size rule (23), at each iteration $k$, the following holds:

$$
\begin{array}{l}
\mathbb{E}\left[\mathcal{J}_{k+1} \mid \mathcal{F}_k\right] \leq \left(1 + \frac{3T\gamma_k\mu_g}{8}\right)\cdot\mathbb{E}\left[\left\|z_{k+1} - y_k^{*}\right\|^{2} \mid \mathcal{F}_k\right] \\
\quad + O\left(\frac{\xi^{2}\alpha_k^{2} l_{*,0}^{2}}{T\mu_g\gamma_k}\right)\|q_k^{x}\|^{2} + O\left(\xi^{2} l_{*,0}^{2}\right)\left(\alpha_k^{2}\sigma_f^{2} + \beta_k^{2}\sigma_g^{2}\right). \tag{24}
\end{array}
$$

Proof. We estimate each term in the following simple decomposition:

$$
\left\|z_{k+1} - y_{k+1}^{*}\right\|^{2} = \underbrace{\left\|z_{k+1} - y_k^{*}\right\|^{2}}_{(i)} + \underbrace{\left\|y_{k+1}^{*} - y_k^{*}\right\|^{2}}_{(ii)} - 2\underbrace{\left\langle z_{k+1} - y_k^{*},\, y_{k+1}^{*} - y_k^{*}\right\rangle}_{(iii)}.
$$

Lemma 3.2 implies that

$$
(ii): \mathbb{E}[\|y_{k+1}^{*} - y_k^{*}\|^{2} \mid \mathcal{F}_k] \leq l_{*,0}^{2}\xi^{2}(\alpha_k^{2}\|q_k^{x}\|^{2} + \alpha_k^{2}\sigma_f^{2} + \beta_k^{2}\sigma_g^{2}).
$$

For $(iii)$, we apply Lemma A.5 with $v_k = z_{k+1} - y_k^{*}$ and $\eta_k = T\mu_g\gamma_k/(8\xi\alpha_k)$ to get

$$
\begin{array}{l}
(iii): \left\langle z_{k+1} - y_k^{*}, y_{k+1}^{*} - y_k^{*}\right\rangle \leq \left(T\gamma_k\mu_g/8 + M\xi^{2} l_{*,1}^{2}\beta_k^{2}\right)\mathbb{E}\left[\|z_{k+1} - y_k^{*}\|^{2} \mid \mathcal{F}_k\right] \\
\quad + \left(\frac{\xi^{2}\alpha_k^{2}}{4} + \frac{2\xi^{2}\alpha_k^{2} l_{*,0}^{2}}{T\mu_g\gamma_k}\right)\|q_k^{x}\|^{2} + \frac{\xi^{2}}{4}(\alpha_k^{2}\sigma_f^{2} + \beta_k^{2}\sigma_g^{2}).
\end{array}
$$

The above bounds and Lemma B.5 imply that

$$
\begin{array}{l}
\mathbb{E}[\mathcal{J}_{k+1} \mid \mathcal{F}_k] \leq \left(1 + \frac{T\gamma_k\mu_g}{4} + 2M\xi^{2} l_{*,1}^{2}\beta_k^{2}\right)\cdot\mathbb{E}[\|z_{k+1} - y_k^{*}\|^{2} \mid \mathcal{F}_k] \\
\quad + \xi^{2}\alpha_k^{2}\cdot\left(l_{*,0}^{2} + \frac{4 l_{*,0}^{2}}{T\mu_g\gamma_k} + \frac{1}{2}\right)\|q_k^{x}\|^{2} + \xi^{2}\cdot\left(\frac{1}{2} + l_{*,0}^{2}\right)\left(\alpha_k^{2}\sigma_f^{2} + \beta_k^{2}\sigma_g^{2}\right). \tag{25}
\end{array}
$$

Using (23), we conclude.

Next, $\gamma_k$ is chosen to satisfy the following step-size rules:

$$
\text{(step-size rule)}: \quad l_{g,1}\gamma_k \leq 1/4, \quad T\mu_g\gamma_k \leq 1/4, \tag{26}
$$

which directly come from (3a).

Lemma B.5. If (26) holds, then for each $k \in \mathbb{N}$, the following holds:

$$
\mathbb{E}\left[\|z_{k+1} - y_k^{*}\|^{2} \mid \mathcal{F}_k\right] \leq (1 - 3T\mu_g\gamma_k/4)\mathcal{J}_k + T\gamma_k^{2}\sigma_g^{2}. \tag{27}
$$

Proof.
We analyze one iteration of the inner loop: for each $t = 0, \dots, T-1$,

$$
\begin{array}{l}
\left\|z_k^{(t+1)} - y_k^{*}\right\|^{2} = \left\|z_k^{(t)} - y_k^{*}\right\|^{2} + \left\|z_k^{(t+1)} - z_k^{(t)}\right\|^{2} + 2\langle z_k^{(t+1)} - z_k^{(t)}, z_k^{(t)} - y_k^{*}\rangle \\
= \|z_k^{(t)} - y_k^{*}\|^{2} + \gamma_k^{2}\|h_{gz}^{k,t}\|^{2} - 2\gamma_k\langle h_{gz}^{k,t}, z_k^{(t)} - y_k^{*}\rangle.
\end{array}
$$

Here, $z_{k+1} = z_k^{(T)}$ and $z_k = z_k^{(0)}$. Note that $\mathbb{E}[h_{gz}^{k,t}] = \nabla_y g(x_k, z_k^{(t)}) = \nabla_y g_k(z_k^{(t)})$, where $g_k(z_k^{(t)}) \coloneqq g(x_k, z_k^{(t)})$. Taking expectation,

$$
\mathbb{E}\left[\|z_k^{(t+1)} - y_k^{*}\|^{2} \mid \mathcal{F}_k\right] \leq \|z_k^{(t)} - y_k^{*}\|^{2} + \gamma_k^{2}\|\nabla g_k(z_k^{(t)})\|^{2} + \gamma_k^{2}\sigma_g^{2} - 2\gamma_k\langle\nabla g_k(z_k^{(t)}), z_k^{(t)} - y_k^{*}\rangle.
$$

The strong convexity and smoothness of $g_k$ imply the coercivity and co-coercivity (Nesterov et al., 2018), that is,

$$
\max\left(\mu_g\|z_k^{(t)} - y_k^{*}\|^{2}, \frac{1}{l_{g,1}}\|\nabla g_k(z_k^{(t)}) - \nabla g_k(y_k^{*})\|^{2}\right) \leq \langle\nabla g_k(z_k^{(t)}) - \nabla g_k(y_k^{*}), z_k^{(t)} - y_k^{*}\rangle.
$$

Note that $y_k^{*}$ minimizes $g_k(y)$.
We use this fact to cancel $\gamma_k^{2}\|\nabla g_k(z_k^{(t)})\|^{2}$, yielding

$$
\begin{array}{l}
\mathbb{E}[\|z_k^{(t+1)} - y_k^{*}\|^{2} \mid \mathcal{F}_k] \leq \|z_k^{(t)} - y_k^{*}\|^{2} + \gamma_k^{2}\sigma_g^{2} - \gamma_k(1 - l_{g,1}\gamma_k)\langle\nabla g_k(z_k^{(t)}), z_k^{(t)} - y_k^{*}\rangle \\
\leq \left(1 - 3\mu_g\gamma_k/4\right)\|z_k^{(t)} - y_k^{*}\|^{2} + \gamma_k^{2}\sigma_g^{2}.
\end{array}
$$

For this to hold, we need the step-size condition (26). Repeating this relation $T$ times, we get (27).

# B.4. Proof of Theorem 4.1

Recall $\mathbb{V}_k$ given in (4). In what follows, we examine

$$
\begin{array}{l}
\mathbb{V}_{k+1} - \mathbb{V}_k = F(x_{k+1}) - F(x_k) + \lambda_{k+1} l_{g,1}\mathcal{I}_{k+1} - \lambda_k l_{g,1}\mathcal{I}_k \\
\quad + \frac{\lambda_{k+1} l_{g,1}}{2}\mathcal{J}_{k+1} - \frac{\lambda_k l_{g,1}}{2}\mathcal{J}_k.
\\ \end{array}
$$

Using the estimate of $F(x_{k+1}) - F(x_k)$ given in Proposition B.1 and rearranging the terms, we have

$$
\begin{array}{l}
\mathbb{E}[\mathbb{V}_{k+1} - \mathbb{V}_k \mid \mathcal{F}_k] \leq -\frac{\xi\alpha_k}{2}\|\nabla F(x_k)\|^{2} - \frac{\xi\alpha_k}{4}\mathbb{E}[\|q_k^{x}\|^{2} \mid \mathcal{F}_k] + \frac{\xi\alpha_k}{2}\cdot 3C_\lambda^{2}\lambda_k^{-2} + \frac{\xi^{2} l_{F,1}}{2}(\alpha_k^{2}\sigma_f^{2} + \beta_k^{2}\sigma_g^{2}) \\
\quad + l_{g,1}\underbrace{\mathbb{E}\left[\lambda_{k+1}\mathcal{I}_{k+1} + \frac{\lambda_k T\beta_k\mu_g}{16}\left\|y_{k+1} - y_{\lambda,k}^{*}\right\|^{2} - \lambda_k\mathcal{I}_k \mid \mathcal{F}_k\right]}_{(i)} \\
\quad + \frac{l_{g,1}}{2}\underbrace{\mathbb{E}\left[\lambda_{k+1}\mathcal{J}_{k+1} + \frac{\lambda_k T\gamma_k\mu_g}{32}\|z_{k+1} - y_k^{*}\|^{2} - \lambda_k\mathcal{J}_k \mid \mathcal{F}_k\right]}_{(ii)}.
\end{array}
$$

Estimation of $(i)$: Lemma B.2 and $\lambda_{k+1} = \lambda_k + \delta_k$ yield that

$$
(i) \leq \lambda_k\left(1 + \frac{5T\beta_k\mu_g}{16} + \frac{\delta_k}{\lambda_k}\right)\mathbb{E}[\|y_{k+1} - y_{\lambda,k}^{*}\|^{2} \mid \mathcal{F}_k] - \lambda_k\mathcal{I}_k
$$

$$
\quad + \underbrace{O(\xi^{2} l_{*,0}^{2})\frac{\lambda_k\alpha_k^{2}}{\mu_g T\beta_k}\|q_k^{x}\|^{2} + O(\xi^{2} l_{*,0}^{2})\lambda_k(\alpha_k^{2}\sigma_f^{2} + \beta_k^{2}\sigma_g^{2}) + O\left(\frac{l_{f,0}^{2}}{\mu_g^{3}}\right)\cdot\frac{\delta_k}{\lambda_k^{2}}}_{(iii)}.
$$

Given the step-size rules (18), we obtain

$$
(i) \leq \lambda_k\left(1 + \frac{T\beta_k\mu_g}{2}\right)\mathbb{E}\left[\|y_{k+1} - y_{\lambda,k}^{*}\|^{2} \mid \mathcal{F}_k\right] - \lambda_k\mathcal{I}_k + (iii).
$$

The estimation of $\|y_{k+1} - y_{\lambda,k}^{*}\|^{2}$ from Lemma B.3 yields that

$$
\begin{array}{l}
(i) \leq -\frac{\lambda_k T\mu_g\beta_k}{4}\mathcal{I}_k + O\left(\xi^{2} l_{*,0}^{2}\right)\frac{\alpha_k}{\mu_g T}\|q_k^{x}\|^{2} + (iii) \\
= -\frac{\lambda_k T\mu_g\beta_k}{4}\mathcal{I}_k + O(\xi^{2} l_{*,0}^{2})\frac{\alpha_k}{\mu_g T}\|q_k^{x}\|^{2} + O(T + \xi^{2} l_{*,0}^{2})\lambda_k(\alpha_k^{2}\sigma_f^{2} + \beta_k^{2}\sigma_g^{2}) + O\left(\frac{l_{f,0}^{2}}{\mu_g^{3}}\right)\cdot\frac{\delta_k}{\lambda_k^{2}}.
\end{array}
$$

Here, we use $(1 + a/2)(1 - 3a/4) \leq 1 - a/4$ for $a > 0$.

Estimation of $(ii)$: Lemma B.4 yields that

$$
\begin{array}{l}
(ii) \leq \lambda_k\left(1 + \frac{\delta_k}{\lambda_k} + \frac{3T\gamma_k\mu_g}{8} + \frac{T\gamma_k\mu_g}{32}\right)\mathbb{E}[\|z_{k+1} - y_k^{*}\|^{2} \mid \mathcal{F}_k] - \lambda_k\mathcal{J}_k \\
\quad + \underbrace{O(\xi^{2} l_{*,0}^{2})\frac{\lambda_{k+1}\alpha_k^{2}}{T\mu_g\gamma_k}\|q_k^{x}\|^{2} + O(\xi^{2}\lambda_{k+1} l_{*,0}^{2})(\alpha_k^{2}\sigma_f^{2} + \beta_k^{2}\sigma_g^{2})}_{(iv)}.
\\ \end{array}
$$

With $\beta_k \leq \gamma_k$, and thus $\delta_k/\lambda_k < T\mu_g\gamma_k/32$, we have that

$$
(ii) \leq \lambda_k\left(1 + \frac{T\gamma_k\mu_g}{2}\right)\mathbb{E}[\|z_{k+1} - y_k^{*}\|^{2} \mid \mathcal{F}_k] - \lambda_k\mathcal{J}_k + (iv).
$$

Similar to the argument for $(i)$ above, Lemma B.5 yields

$$
(ii) \leq -\frac{\lambda_k T\mu_g\gamma_k}{4}\mathcal{J}_k + O(\xi^{2} l_{*,0}^{2})\frac{\alpha_k\beta_k}{T\mu_g\gamma_k}\|q_k^{x}\|^{2} + O(\xi^{2}\lambda_k l_{*,0}^{2})(\alpha_k^{2}\sigma_f^{2} + \beta_k^{2}\sigma_g^{2}) + O(\lambda_k)T\gamma_k^{2}\sigma_g^{2}.
$$

Plugging in the bounds for $(i)$ and $(ii)$ and rearranging terms, we get

$$
\begin{array}{l}
\mathbb{E}[\mathbb{V}_{k+1} - \mathbb{V}_k \mid \mathcal{F}_k] \leq -\frac{\xi\alpha_k}{2}\|\nabla F(x_k)\|^{2} + \frac{\xi\alpha_k}{2}\cdot 3C_\lambda^{2}\lambda_k^{-2} + \frac{\xi^{2} l_{F,1}}{2}(\alpha_k^{2}\sigma_f^{2} + \beta_k^{2}\sigma_g^{2}) \\
\quad - \frac{\xi\alpha_k}{4}\left(1 - O\left(\frac{\xi l_{g,1} l_{*,0}^{2}\beta_k}{\mu_g T\gamma_k}\right) - O\left(\frac{\xi l_{g,1} l_{*,0}^{2}}{\mu_g T}\right)\right)\mathbb{E}[\|q_k^{x}\|^{2} \mid \mathcal{F}_k] \\
\quad - \frac{\lambda_k l_{g,1} T\mu_g\beta_k}{4}\mathcal{I}_k - \frac{\lambda_k l_{g,1} T\mu_g\gamma_k}{4}\mathcal{J}_k \\
\quad + O(T + \xi^{2} l_{*,0}^{2})\cdot l_{g,1}\lambda_k(\alpha_k^{2}\sigma_f^{2} + (\beta_k^{2} + \gamma_k^{2})\sigma_g^{2}) + O\left(\frac{l_{g,1} l_{f,0}^{2}}{\mu_g^{3}}\right)\frac{\delta_k}{\lambda_k^{2}}.
\end{array}
$$

A crucial step here is to ensure that the term driven by $\mathbb{E}[\|q_k^{x}\|^{2}]$ is negative. To ensure this, we require

$$
\begin{array}{l}
\text{(step-size rules)}: \quad \xi l_{g,1} l_{*,0}^{2}\beta_k \leq c_1\mu_g T\gamma_k, \\
\qquad\qquad\qquad\qquad\; \xi l_{g,1} l_{*,0}^{2} \leq c_2\mu_g T,
\end{array}
$$

for some absolute constants $c_1, c_2 > 0$, which holds by $\beta_k \leq \gamma_k$ and (3b) with sufficiently small $c_\xi > 0$. Once this holds, we can conclude that

$$
\begin{array}{l}
\mathbb{E}\left[\mathbb{V}_{k+1} - \mathbb{V}_k \mid \mathcal{F}_k\right] \leq -\frac{\xi\alpha_k}{2}\|\nabla F(x_k)\|^{2} - \frac{\lambda_k T\mu_g\gamma_k}{4}\|z_k - y_k^{*}\|^{2} - \frac{\lambda_k T\mu_g\beta_k}{4}\|y_k - y_{\lambda,k}^{*}\|^{2} \\
\quad + O(\xi C_\lambda^{2})\frac{\alpha_k}{\lambda_k^{2}} + O\left(\frac{l_{g,1} l_{f,0}^{2}}{\mu_g^{3}}\right)\frac{\delta_k}{\lambda_k^{2}} + O(\xi^{2} l_{F,1})(\alpha_k^{2}\sigma_f^{2} + \beta_k^{2}\sigma_g^{2}) \\
\quad + O(T + \xi^{2} l_{*,0}^{2})\cdot l_{g,1}\lambda_k(\alpha_k^{2}\sigma_f^{2} + (\beta_k^{2} + \gamma_k^{2})\sigma_g^{2}).
\end{array}
$$

Summing over $k = 0$ to $K - 1$ and keeping only the dominating terms, since $\sum_k \delta_k/\lambda_k^{2} = O(1)$ (because $\delta_k/\lambda_k = O(1/k)$ and $\lambda_k = \mathrm{poly}(k)$), we obtain the theorem.

# B.5. Proof of Corollary 4.2

We first show that, with the step-size design in the theorem, $\lambda_k = \gamma_k/(2\alpha_k)$ for all $k$.
To check this, note that $\lambda_0 = \gamma_0/(2\alpha_0)$ by design, and proceed by induction:

$$
\frac{T\mu_g}{16}\alpha_k\lambda_k^{2} = \frac{T}{32}\frac{c_\gamma}{2c_\alpha}(k + k_0)^{-2c + a},
$$

and

$$
\frac{c_\gamma}{2c_\alpha}\left((k + k_0 + 1)^{a-c} - (k + k_0)^{a-c}\right) \leq \frac{(a - c)c_\gamma}{2c_\alpha}(k + k_0)^{-1 - c + a}.
$$

As long as $-2c + a \geq -1 - c + a$, or equivalently $c \leq 1$, and $T \geq 32$, it always holds that

$$
\lambda_{k+1} = \frac{c_\gamma}{2c_\alpha}(k + k_0 + 1)^{a-c} = \frac{\gamma_{k+1}}{2\alpha_{k+1}}. \tag{28}
$$

Now applying the step-size designs, we obtain the following:

$$
\begin{array}{l}
\sum_{k=0}^{K-1}\frac{\mathbb{E}[\|\nabla F(x_k)\|^{2}]}{(k + k_0)^{a}} \leq O_{\mathbb{P}}(1)\cdot\sum_k\frac{1}{(k + k_0)^{3a - 2c}} + O_{\mathbb{P}}\left(\sigma_f^{2}\right)\cdot\sum_k\frac{1}{(k + k_0)^{a + c}} \\
\quad + O_{\mathbb{P}}\left(\sigma_g^{2}\right)\cdot\sum_k\frac{1}{(k + k_0)^{3c - a}} + O_{\mathbb{P}}(1). \tag{29}
\end{array}
$$

The rates $a, c \in [0,1]$ will be chosen differently depending on the stochasticity. Let $b = a - c$. Note that with this step-size design, we have $\lambda_k = \gamma_k/(2\alpha_k) = \frac{2\lambda_0}{k_0^{a-c}}(k + k_0)^{a-c} = O(k^{b})$. Let $R$ be a random variable uniformly distributed over $\{0, 1, \dots, K\}$. Note that the left-hand side of (29) is larger than

$$
\frac{K}{(K + k_0)^{a}}\sum_{k=1}^{K-1}\frac{1}{K}\mathbb{E}[\|\nabla F(x_k)\|^{2}] \geq K^{1-a}\cdot\mathbb{E}[\|\nabla F(x_R)\|^{2}].
$$

We consider three regimes:

Stochasticity in both upper-level and lower-level objectives: $\sigma_f^{2}, \sigma_g^{2} > 0$.
In this case, we set $a = 5/7$, $c = 4/7$, and thus $\lambda_k = O(k^{1/7})$. The dominating terms are $\sigma_g^{2}\cdot\sum_k(\gamma_k^{2}\lambda_k) = \sum_k O(k^{-1}) = O(\log K)$ and $C_\lambda^{2}\cdot\sum_k(\alpha_k\lambda_k^{-2}) = O(\log K)$. From the left-hand side, we have $K^{1-a} = K^{2/7}$. Therefore,

$$
\mathbb{E}[\|\nabla F(x_R)\|^{2}] = O\left(\frac{\log K}{K^{2/7}}\right).
$$

Stochasticity only in the upper-level objective: $\sigma_f^{2} > 0$, $\sigma_g^{2} = 0$. In this case, we can take $a = 3/5$, $c = 2/5$. When $\sigma_g^{2} = 0$, the dominating terms are $\sigma_f^{2}\cdot\sum_k(\alpha_k^{2}\lambda_k) = \sum_k O(k^{-1}) = O(\log K)$ and $O(C_\lambda^{2})\cdot\sum_k(\alpha_k\lambda_k^{-2}) = \sum_k O(k^{-1}) = O(\log K)$. Since $K^{1-a} = O(K^{2/5})$, we obtain

$$
\mathbb{E}[\|\nabla F(x_R)\|^{2}] = O\left(\frac{\log K}{K^{2/5}}\right).
$$

Deterministic case: $\sigma_f^{2} = 0$, $\sigma_g^{2} = 0$. Here, we can take $a = 1/3$, $c = 0$, with dominating term $\sum_k(\alpha_k\lambda_k^{-2}) = O(\log K)$. Since there is no stochasticity in the algorithm, we have

$$
\left\|\nabla F(x_K)\right\|^{2} = O\left(\frac{\log K}{K^{2/3}}\right).
$$

# C. Main Results for Algorithm 2

We start with a few definitions and additional auxiliary lemmas. We first define the momentum-assisted moving directions of the variables.
They can be recursively defined as

$$
\tilde{h}_z^{k} := \nabla_y g(x_k, z_k; \phi_z^{k}) + (1 - \eta_k)\left(\tilde{h}_z^{k-1} - \nabla_y g(x_{k-1}, z_{k-1}; \phi_z^{k})\right),
$$

$$
\tilde{h}_{fy}^{k} := \nabla_y f(x_k, y_k; \zeta_y^{k}) + (1 - \eta_k)\left(\tilde{h}_{fy}^{k-1} - \nabla_y f(x_{k-1}, y_{k-1}; \zeta_y^{k})\right),
$$

$$
\tilde{h}_{gy}^{k} := \nabla_y g(x_k, y_k; \phi_y^{k}) + (1 - \eta_k)\left(\tilde{h}_{gy}^{k-1} - \nabla_y g(x_{k-1}, y_{k-1}; \phi_y^{k})\right),
$$

for the inner variable updates, and

$$
\tilde{h}_{fx}^{k} := \nabla_x f(x_k, y_{k+1}; \zeta_x^{k}) + (1 - \eta_k)\left(\tilde{h}_{fx}^{k-1} - \nabla_x f(x_{k-1}, y_k; \zeta_x^{k})\right),
$$

$$
\tilde{h}_{gxy}^{k} := \nabla_x g(x_k, y_{k+1}; \phi_x^{k}) + (1 - \eta_k)\left(\tilde{h}_{gxy}^{k-1} - \nabla_x g(x_{k-1}, y_k; \phi_x^{k})\right),
$$

$$
\tilde{h}_{gxz}^{k} := \nabla_x g(x_k, z_{k+1}; \phi_x^{k}) + (1 - \eta_k)\left(\tilde{h}_{gxz}^{k-1} - \nabla_x g(x_{k-1}, z_k; \phi_x^{k})\right),
$$

for the outer variable update, with some proper choice of $\eta_k$. We also define stochastic error terms incurred by random sampling:

$$
\tilde{e}_k^{x} := \tilde{h}_{fx}^{k} + \lambda_k(\tilde{h}_{gxy}^{k} - \tilde{h}_{gxz}^{k}) - q_k^{x},
$$

$$
\tilde{e}_k^{y} := (\tilde{h}_{fy}^{k} + \lambda_k\tilde{h}_{gy}^{k}) - q_k^{y},
$$

$$
\tilde{e}_k^{z} := \tilde{h}_z^{k} - q_k^{z}, \tag{30}
$$

where $q_k^{x}, q_k^{y}, q_k^{z}$ are defined in (10) (we drop $t$ from the subscript since here we consider $T = 1$).

# C.1.
Additional Auxiliary Lemmas

The following lemmas are analogues of Lemma A.4.

Lemma C.1. At every iteration $k$, conditioned on $\mathcal{F}_k$, we have

$$
\mathbb{E}[\|y^{*}(x_{k+1}) - y^{*}(x_k)\|^{2} \mid \mathcal{F}_k] \leq 2\xi^{2} l_{*,0}^{2}\alpha_k^{2}\left(\mathbb{E}[\|q_k^{x}\|^{2} \mid \mathcal{F}_k] + \mathbb{E}[\|\tilde{e}_k^{x}\|^{2}]\right).
$$

Lemma C.2. At every iteration $k$, conditioned on $\mathcal{F}_k$, we have

$$
\mathbb{E}[\|y_{\lambda_{k+1}}^{*}(x_{k+1}) - y_{\lambda_k}^{*}(x_k)\|^{2} \mid \mathcal{F}_k] \leq 4\xi^{2} l_{*,0}^{2}\alpha_k^{2}\left(\mathbb{E}[\|q_k^{x}\|^{2} \mid \mathcal{F}_k] + \mathbb{E}[\|\tilde{e}_k^{x}\|^{2}]\right) + \frac{8\delta_k^{2} l_{f,0}^{2}}{\lambda_k^{4}\mu_g^{2}}.
$$

# C.2. Descent Lemma for Noise Variances

A major change in the proof is that we now also track the decrease of the stochastic error terms. Specifically, we show the following lemmas.

Lemma C.3.
$$
\begin{array}{l}
\mathbb{E}[\|\tilde{e}_{k+1}^{z}\|^{2}] \leq (1 - \eta_{k+1})^{2}(1 + 8l_{g,1}^{2}\gamma_k^{2})\mathbb{E}[\|\tilde{e}_k^{z}\|^{2}] + 2\eta_{k+1}^{2}\sigma_g^{2} \\
\quad + 8l_{g,1}^{2}(1 - \eta_{k+1})^{2}\left(\xi^{2}\alpha_k^{2}\mathbb{E}[\|q_k^{x}\|^{2}] + \xi^{2}\alpha_k^{2}\mathbb{E}[\|\tilde{e}_k^{x}\|^{2}] + \gamma_k^{2}\mathbb{E}[\|q_k^{z}\|^{2}]\right),
\end{array}
$$

$$
\begin{array}{l}
\mathbb{E}[\|\tilde{e}_{k+1}^{y}\|^{2}] \leq (1 - \eta_{k+1})^{2}(1 + 96l_{g,1}^{2}\beta_k^{2})\mathbb{E}[\|\tilde{e}_k^{y}\|^{2}] + 2\eta_{k+1}^{2}(\sigma_f^{2} + \lambda_{k+1}^{2}\sigma_g^{2}) + 12\delta_k^{2}\sigma_g^{2} \\
\quad + 96l_{g,1}^{2}(1 - \eta_{k+1})^{2}\beta_k^{2}(\xi^{2}\|q_k^{x}\|^{2} + \xi^{2}\|\tilde{e}_k^{x}\|^{2} + \|q_k^{y}\|^{2}).
\end{array}
$$

Lemma C.4.

$$
\begin{array}{l}
\mathbb{E}[\|\tilde{e}_{k+1}^{x}\|^{2}] \leq (1 - \eta_{k+1})^{2}(1 + 240l_{g,1}^{2}\xi^{2}\beta_k^{2})\mathbb{E}[\|\tilde{e}_k^{x}\|^{2}] + 6\eta_{k+1}^{2}(\sigma_f^{2} + \lambda_{k+1}^{2}\sigma_g^{2}) + 80\delta_k^{2}\sigma_g^{2} \\
\quad + 240l_{g,1}^{2}(1 - \eta_{k+1})^{2}\lambda_k^{2}\left(\xi^{2}\alpha_k^{2}\|q_k^{x}\|^{2} + \alpha_k^{2}(\|q_k^{y}\|^{2} + \|\tilde{e}_k^{y}\|^{2}) + \gamma_k^{2}(\|q_k^{z}\|^{2} + \|\tilde{e}_k^{z}\|^{2})\right).
\end{array}
$$

Equipped with these lemmas, we can now proceed as in the main proof for Algorithm 1.

# C.3. Descent Lemma for $z_k$ towards $y_k^{*}$

Lemma C.5.
If $\gamma_{k}\mu_{g} < 1/8$, then

$$
\begin{array}{l} \mathbb{E}\left[\|z_{k+1} - y_{k+1}^{*}\|^{2} \mid \mathcal{F}_{k}\right] \leq (1 + \gamma_{k}\mu_{g}/4)\mathbb{E}\left[\|z_{k+1} - y_{k}^{*}\|^{2} \mid \mathcal{F}_{k}\right] \\ + O\left(\frac{\xi^{2}\alpha_{k}^{2} l_{*,0}^{2}}{\gamma_{k}\mu_{g}}\right) \cdot (\mathbb{E}[\|q_{k}^{x}\|^{2} | \mathcal{F}_{k}] + \mathbb{E}[\|\tilde{e}_{k}^{x}\|^{2} | \mathcal{F}_{k}]). \\ \end{array}
$$

Proof. As before, we can decompose $\|z_{k+1} - y_{k+1}^{*}\|^{2}$ as

$$
\begin{array}{l} \left\|z_{k+1} - y_{k+1}^{*}\right\|^{2} = \left\|z_{k+1} - y_{k}^{*}\right\|^{2} + \left\|y_{k+1}^{*} - y_{k}^{*}\right\|^{2} - 2\langle z_{k+1} - y_{k}^{*}, y_{k+1}^{*} - y_{k}^{*}\rangle \\ \leq \left\|z_{k+1} - y_{k}^{*}\right\|^{2} + \left(1 + \frac{1}{8\gamma_{k}\mu_{g}}\right)\left\|y_{k+1}^{*} - y_{k}^{*}\right\|^{2} + 4\gamma_{k}\mu_{g}\left\|z_{k+1} - y_{k}^{*}\right\|^{2}, \\ \end{array}
$$

where we used the general inequality $|\langle a,b\rangle| \leq c\|a\|^2 + \frac{1}{4c}\|b\|^2$. We can apply Lemma C.1 to $\|y_{k+1}^{*} - y_{k}^{*}\|^{2}$, yielding the lemma.

Lemma C.6. If $\gamma_k \leq 1/(16 l_{g,1})$, then

$$
\mathbb{E}[\|z_{k+1} - y_{k}^{*}\|^{2} | \mathcal{F}_{k}] \leq (1 - \gamma_{k}\mu_{g}/2)\mathbb{E}[\|z_{k} - y_{k}^{*}\|^{2} | \mathcal{F}_{k}] - \frac{\gamma_{k}}{l_{g,1}}\|q_{k}^{z}\|^{2} + O\left(\frac{\gamma_{k}}{\mu_{g}}\right)\mathbb{E}[\|\tilde{e}_{k}^{z}\|^{2} | \mathcal{F}_{k}].
$$

Proof.
Note that

$$
\begin{array}{l} \left\|z_{k+1} - y_{k}^{*}\right\|^{2} = \left\|z_{k} - y_{k}^{*}\right\|^{2} + \gamma_{k}^{2}\left\|\tilde{h}_{z}^{k}\right\|^{2} - 2\gamma_{k}\langle\tilde{h}_{z}^{k}, z_{k} - y_{k}^{*}\rangle \\ \leq \|z_{k} - y_{k}^{*}\|^{2} + 2\gamma_{k}^{2}(\|q_{k}^{z}\|^{2} + \|\tilde{e}_{k}^{z}\|^{2}) - 2\gamma_{k}\langle q_{k}^{z}, z_{k} - y_{k}^{*}\rangle - 2\gamma_{k}\langle\tilde{e}_{k}^{z}, z_{k} - y_{k}^{*}\rangle. \\ \end{array}
$$

Since $q_k^z = \nabla_y g(x_k, z_k)$ by definition, by coercivity and co-coercivity of strongly-convex functions, we have

$$
\max\left(\mu_{g}\|z_{k} - y_{k}^{*}\|^{2}, \frac{1}{l_{g,1}}\|q_{k}^{z}\|^{2}\right) \leq \langle q_{k}^{z}, z_{k} - y_{k}^{*}\rangle,
$$

and thus, given $\gamma_k \leq 1/(16 l_{g,1})$, we have

$$
\mathbb{E}[\|z_{k+1} - y_{k}^{*}\|^{2} | \mathcal{F}_{k}] \leq (1 - 3\gamma_{k}\mu_{g}/4)\mathbb{E}[\|z_{k} - y_{k}^{*}\|^{2} | \mathcal{F}_{k}] - \frac{\gamma_{k}}{l_{g,1}}\|q_{k}^{z}\|^{2} + 2\gamma_{k}^{2}\mathbb{E}[\|\tilde{e}_{k}^{z}\|^{2} | \mathcal{F}_{k}] - 2\gamma_{k}\langle\tilde{e}_{k}^{z}, z_{k} - y_{k}^{*}\rangle.
$$

Finally, we can use the general inequality $|\langle a,b\rangle| \leq c\|a\|^2 + \frac{1}{4c}\|b\|^2$ to get

$$
-2\gamma_{k}\langle\tilde{e}_{k}^{z}, z_{k} - y_{k}^{*}\rangle \leq \frac{\gamma_{k}\mu_{g}}{4}\|z_{k} - y_{k}^{*}\|^{2} + \frac{4\gamma_{k}}{\mu_{g}}\|\tilde{e}_{k}^{z}\|^{2}.
$$

Plugging this back, with $\gamma_k^2 \ll \frac{\gamma_k}{\mu_g}$, we get the lemma.

# C.4. Descent Lemma for $y_{k}$ towards $y_{\lambda,k}^{*}$

Lemma C.7.
If $\beta_{k}\mu_{g} < 1/8$, then

$$
\begin{array}{l} \mathbb{E}[\|y_{k+1} - y_{\lambda,k+1}^{*}\|^{2} | \mathcal{F}_{k}] \leq (1 + \beta_{k}\mu_{g}/4)\mathbb{E}[\|y_{k+1} - y_{\lambda,k}^{*}\|^{2} | \mathcal{F}_{k}] \\ + O\left(\frac{\xi^{2}\alpha_{k}^{2} l_{*,0}^{2}}{\beta_{k}\mu_{g}}\right) \cdot (\mathbb{E}[\|q_{k}^{x}\|^{2} | \mathcal{F}_{k}] + \mathbb{E}[\|\tilde{e}_{k}^{x}\|^{2} | \mathcal{F}_{k}]) + O\left(\frac{\delta_{k}^{2} l_{f,0}^{2}}{\lambda_{k}^{4}\mu_{g}^{2}}\right). \\ \end{array}
$$

Proof. As before, we can decompose $\|y_{k+1} - y_{\lambda,k+1}^{*}\|^{2}$ as

$$
\begin{array}{l} \left\|y_{k+1} - y_{\lambda,k+1}^{*}\right\|^{2} = \left\|y_{k+1} - y_{\lambda,k}^{*}\right\|^{2} + \left\|y_{\lambda,k+1}^{*} - y_{\lambda,k}^{*}\right\|^{2} - 2\langle y_{k+1} - y_{\lambda,k}^{*}, y_{\lambda,k+1}^{*} - y_{\lambda,k}^{*}\rangle \\ \leq \left\|y_{k+1} - y_{\lambda,k}^{*}\right\|^{2} + \left(1 + \frac{1}{8\beta_{k}\mu_{g}}\right)\left\|y_{\lambda,k+1}^{*} - y_{\lambda,k}^{*}\right\|^{2} + 4\beta_{k}\mu_{g}\left\|y_{k+1} - y_{\lambda,k}^{*}\right\|^{2}, \\ \end{array}
$$

where we used the general inequality $|\langle a, b\rangle| \leq c\|a\|^2 + \frac{1}{4c}\|b\|^2$. Applying Lemma C.2 to $\|y_{\lambda,k+1}^* - y_{\lambda,k}^*\|^2$, together with $\beta_k\mu_g \leq 1/16$, we get the lemma.

Lemma C.8.
If $\beta_{k} \leq 1/(16 l_{g,1})$, then

$$
\mathbb{E}[\|y_{k+1} - y_{\lambda,k}^{*}\|^{2} | \mathcal{F}_{k}] \leq (1 - \beta_{k}\mu_{g}/2)\mathbb{E}[\|y_{k} - y_{\lambda,k}^{*}\|^{2} | \mathcal{F}_{k}] - \frac{\alpha_{k}}{\lambda_{k} l_{g,1}}\|q_{k}^{y}\|^{2} + O\left(\frac{\alpha_{k}^{2}}{\mu_{g}\beta_{k}}\right)\mathbb{E}[\|\tilde{e}_{k}^{y}\|^{2} | \mathcal{F}_{k}].
$$

Proof. Note that

$$
\left\|y_{k+1} - y_{\lambda,k}^{*}\right\|^{2} \leq \left\|y_{k} - y_{\lambda,k}^{*}\right\|^{2} + 2\alpha_{k}^{2}\left(\left\|q_{k}^{y}\right\|^{2} + \left\|\tilde{e}_{k}^{y}\right\|^{2}\right) - 2\alpha_{k}\langle q_{k}^{y}, y_{k} - y_{\lambda,k}^{*}\rangle - 2\alpha_{k}\langle\tilde{e}_{k}^{y}, y_{k} - y_{\lambda,k}^{*}\rangle,
$$

where we used $y_{k+1} - y_{k} = -\alpha_{k}(q_{k}^{y} + \tilde{e}_{k}^{y})$. Since $q_{k}^{y} = \nabla_{y}\mathcal{L}_{\lambda_{k}}$ by definition, again by coercivity and co-coercivity of the strongly-convex $\mathcal{L}_{\lambda_k}(x_k,\cdot)$, we have

$$
\max\left(\frac{\lambda_{k}\mu_{g}}{2}\|y_{k} - y_{\lambda,k}^{*}\|^{2}, \frac{1}{l_{f,1} + \lambda_{k} l_{g,1}}\|q_{k}^{y}\|^{2}\right) \leq \langle q_{k}^{y}, y_{k} - y_{\lambda,k}^{*}\rangle,
$$

and thus, given $\alpha_{k}\lambda_{k} \leq 1/(16 l_{g,1})$ and $l_{f,1} \leq \lambda_{k} l_{g,1}$, we have

$$
\begin{array}{l} \mathbb{E}[\|y_{k+1} - y_{\lambda,k}^{*}\|^{2} | \mathcal{F}_{k}] \leq (1 - 3\beta_{k}\mu_{g}/4)\mathbb{E}[\|y_{k} - y_{\lambda,k}^{*}\|^{2} | \mathcal{F}_{k}] - \frac{\alpha_{k}}{\lambda_{k} l_{g,1}}\|q_{k}^{y}\|^{2} \\ + 2\alpha_{k}^{2}\mathbb{E}[\|\tilde{e}_{k}^{y}\|^{2} | \mathcal{F}_{k}] - 2\alpha_{k}\mathbb{E}[\langle\tilde{e}_{k}^{y}, y_{k} - y_{\lambda,k}
^{*}\rangle | \mathcal{F}_{k}]. \\ \end{array}
$$

Finally, we can use the general inequality $|\langle a,b\rangle| \leq c\|a\|^2 + \frac{1}{4c}\|b\|^2$ to get

$$
-2\alpha_{k}\langle\tilde{e}_{k}^{y}, y_{k} - y_{\lambda,k}^{*}\rangle \leq \frac{\beta_{k}\mu_{g}}{4}\|y_{k} - y_{\lambda,k}^{*}\|^{2} + \frac{4\alpha_{k}^{2}}{\beta_{k}\mu_{g}}\|\tilde{e}_{k}^{y}\|^{2}.
$$

Plugging this back, with $\beta_{k}\mu_{g} \ll 1$, we get the lemma.

# C.5. Descent Lemma for $F(x_{k})$

Lemma C.9. If $\xi\alpha_{k} l_{F,1} < 1$, then

$$
\begin{array}{l} \mathbb{E}[F(x_{k+1}) - F(x_{k}) | \mathcal{F}_{k}] \leq -\frac{\xi\alpha_{k}}{4}\|\nabla F(x_{k})\|^{2} - \frac{\xi\alpha_{k}}{4}\mathbb{E}[\|q_{k}^{x}\|^{2} | \mathcal{F}_{k}] + 2\xi\alpha_{k} \cdot \mathbb{E}[\|\tilde{e}_{k}^{x}\|^{2} | \mathcal{F}_{k}] \\ + \frac{3\xi\alpha_{k}}{2}\left(4 l_{g,1}^{2}\lambda_{k}^{2}\|y_{k+1} - y_{\lambda,k}^{*}\|^{2} + l_{g,1}^{2}\lambda_{k}^{2}\|z_{k+1} - y_{k}^{*}\|^{2} + C_{\lambda}^{2}/\lambda_{k}^{2}\right). \\ \end{array}
$$

Proof. Using the smoothness of $F$,

$$
F(x_{k+1}) - F(x_{k}) \leq \langle\nabla F(x_{k}), x_{k+1} - x_{k}\rangle + \frac{l_{F,1}}{2}\|x_{k+1} - x_{k}\|^{2}.
$$

Note that $x_{k+1} - x_{k} = -\xi\alpha_{k}(q_{k}^{x} + \tilde{e}_{k}^{x})$, and thus

$$
\begin{array}{l} F\left(x_{k+1}\right) - F\left(x_{k}\right) \leq -\xi\alpha_{k}\left\langle\nabla_{x} F\left(x_{k}\right), q_{k}^{x}\right\rangle - \xi\alpha_{k}\left\langle\nabla_{x} F\left(x_{k}\right), \tilde{e}_{k}^{x}\right\rangle + \frac{l_{F,1}}{2}\|x_{k+1} - x_{k}\|^{2} \\ \leq -\frac{\xi\alpha_{k}}{2}(\|\nabla F(x_{k})\|^{2} + \|q_{k}^{x}\|^{2} - \|\nabla F(x_{k}) - q_{k}^{x}\|^{2}) - \xi\alpha_{k}\langle\nabla_{x} F(x_{k}), \tilde{e}_{k}^{x}\rangle + \xi^{2}\alpha_{k}^{2} l_{F,1}(\|q_{k}^{x}\|^{2} + \|\tilde{e}_{k}^{x}\|^{2}). \\ \end{array}
$$

Using $|\langle a,b\rangle| \leq c\|a\|^2 + \frac{1}{4c}\|b\|^2$, we have

$$
-\xi\alpha_{k}\langle\nabla F(x_{k}), \tilde{e}_{k}^{x}\rangle \leq \frac{\xi\alpha_{k}}{4}\|\nabla_{x} F(x_{k})\|^{2} + \xi\alpha_{k}\|\tilde{e}_{k}^{x}\|^{2}.
$$

Finally, recall (16). Using $(a + b + c)^2 \leq 3(a^2 + b^2 + c^2)$, we have

$$
\left\|\nabla F\left(x_{k}\right) - q_{k}^{x}\right\|^{2} \leq 3\left(4 l_{g,1}^{2}\lambda_{k}^{2}\left\|y_{k+1} - y_{\lambda,k}^{*}\right\|^{2} + l_{g,1}^{2}\lambda_{k}^{2}\left\|z_{k+1} - y_{k}^{*}\right\|^{2} + C_{\lambda}^{2}/\lambda_{k}^{2}\right).
$$

Combining all, with $\xi\alpha_{k} l_{F,1} < 1$, we get the lemma.

# C.6. Decrease in Potential Function

Define the potential function $\mathbb{V}_k$ as the following:

$$
\mathbb{V}_{k} := F(x_{k}) + l_{g,1}\lambda_{k}\|y_{k} - y_{\lambda,k}^{*}\|^{2} + \frac{l_{g,1}\lambda_{k}}{2}\|z_{k} - y_{k}^{*}\|^{2} + \frac{1}{c_{\eta} l_{g,1}^{2}\gamma_{k-1}}\left(\frac{\|\tilde{e}_{k}^{x}\|^{2}}{\lambda_{k}} + \frac{\|\tilde{e}_{k}^{y}\|^{2}}{\lambda_{k}} + \lambda_{k}\|\tilde{e}_{k}^{z}\|^{2}\right),
$$

with some absolute constant $c_{\eta} > 0$. We bound the difference in the potential function:

$$
\begin{array}{l} \mathbb{E}[\mathbb{V}_{k+1} - \mathbb{V}_{k} | \mathcal{F}_{k}] \leq -\frac{\xi\alpha_{k}}{4}\|\nabla F(x_{k})\|^{2} - \frac{\xi\alpha_{k}}{4}\mathbb{E}[\|q_{k}^{x}\|^{2} | \mathcal{F}_{k}] + \frac{\xi\alpha_{k}}{2}\frac{3 C_{\lambda}^{2}}{\lambda_{k}^{2}} \\ + l_{g,1}\lambda_{k}\underbrace{\left(\left(1 + \frac{\delta_{k}}{\lambda_{k}}\right)\left\|y_{k+1} - y_{\lambda,k+1}^{*}\right\|^{2} + 6\xi\alpha_{k}\lambda_{k} l_{g,1}\left\|y_{k+1} - y_{\lambda,k}^{*}\right\|^{2} - \left\|y_{k} - y_{\lambda,k}^{*}\right\|^{2}\right)}_{(i)} \\ + \frac{l_{g,1}\lambda_{k}}{2}\underbrace{\left(\left(1 + \frac{\delta_{k}}{\lambda_{k}}\right)\|z_{k+1} - y_{k+1}^{*}\|^{2} + 3\xi\alpha_{k}\lambda_{k} l_{g,1}\|z_{k+1} - y_{k}^{*}\|^{2} - \|z_{k} - y_{k}^{*}\|^{2}\right)}_{(ii)} \\ + \frac{1}{c_{\eta} l_{g,1}^{2}}\underbrace{\left(\frac{\mathbb{E}[\|\tilde{e}_{k+1}^{x}\|^{2} | \mathcal{F}_{k}]}{\gamma_{k}\lambda_{k}} - \frac{\mathbb{E}[\|\tilde{e}_{k}^{x}\|^{2} | \mathcal{F}_{k}]}{\gamma_{k-1}\lambda_{k}}\right)}_{(iii)} + 2\xi\alpha_{k}\mathbb{E}[\|\tilde{e}_{k}^{x}\|^{2} | \mathcal{F}_{k}]
\\ + \frac{1}{c_{\eta} l_{g,1}^{2}}\underbrace{\left(\frac{\mathbb{E}[\|\tilde{e}_{k+1}^{y}\|^{2} | \mathcal{F}_{k}]}{\gamma_{k}\lambda_{k}} - \frac{\mathbb{E}[\|\tilde{e}_{k}^{y}\|^{2} | \mathcal{F}_{k}]}{\gamma_{k-1}\lambda_{k}}\right)}_{(iv)} + \frac{\lambda_{k}}{c_{\eta} l_{g,1}^{2}}\underbrace{\left(\frac{\mathbb{E}[\|\tilde{e}_{k+1}^{z}\|^{2} | \mathcal{F}_{k}]}{\gamma_{k}} - \frac{\mathbb{E}[\|\tilde{e}_{k}^{z}\|^{2} | \mathcal{F}_{k}]}{\gamma_{k-1}}\right)}_{(v)}. \\ \end{array}
$$

Using Lemmas C.7, C.8, C.5 and C.6, given that $\delta_k/\lambda_k < \mu_g\beta_k/8$, $(i)$ and $(ii)$ are bounded by

$$
\begin{array}{l} (i) \leq -\frac{\mu_{g}\beta_{k}}{8}\|y_{k} - y_{\lambda,k}^{*}\|^{2} - \frac{\alpha_{k}}{\lambda_{k} l_{g,1}}\|q_{k}^{y}\|^{2} + O\left(\frac{\xi^{2}\alpha_{k}^{2} l_{*,0}^{2}}{\beta_{k}\mu_{g}}\mathbb{E}[\|q_{k}^{x}\|^{2} + \|\tilde{e}_{k}^{x}\|^{2} | \mathcal{F}_{k}] + \frac{\delta_{k}^{2} l_{f,0}^{2}}{\lambda_{k}^{4}\mu_{g}^{3}} + \frac{\alpha_{k}^{2}}{\beta_{k}\mu_{g}}\mathbb{E}[\|\tilde{e}_{k}^{y}\|^{2} | \mathcal{F}_{k}]\right), \\ (ii) \leq -\frac{\mu_{g}\gamma_{k}}{8}\|z_{k} - y_{k}^{*}\|^{2} - \frac{\gamma_{k}}{l_{g,1}}\|q_{k}^{z}\|^{2} + O\left(\frac{\xi^{2}\alpha_{k}^{2} l_{*,0}^{2}}{\gamma_{k}\mu_{g}}\mathbb{E}[\|q_{k}^{x}\|^{2} + \|\tilde{e}_{k}^{x}\|^{2} | \mathcal{F}_{k}] + \frac{\gamma_{k}}{\mu_{g}}\mathbb{E}[\|\tilde{e}_{k}^{z}\|^{2} | \mathcal{F}_{k}]\right). \\ \end{array}
$$

We can use Lemma C.4 to bound $(iii)$, $(iv)$ and $(v)$.
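The reductions above, like the earlier descent lemmas, repeatedly invoke the weighted Young inequality $|\langle a,b\rangle| \leq c\|a\|^2 + \frac{1}{4c}\|b\|^2$. As a quick randomized sanity check of this inequality (a plain-Python illustration of ours, not part of the proof):

```python
import random

def young_gap(a, b, c):
    # (c*||a||^2 + ||b||^2/(4c)) - |<a,b>|; nonnegative by AM-GM, since
    # |<a,b>| <= ||a||*||b|| and x*y <= c*x^2 + y^2/(4c) for any c > 0.
    inner = sum(ai * bi for ai, bi in zip(a, b))
    sq = lambda v: sum(vi * vi for vi in v)
    return c * sq(a) + sq(b) / (4.0 * c) - abs(inner)

random.seed(0)
for _ in range(10000):
    dim = random.randint(1, 8)
    a = [random.gauss(0, 3) for _ in range(dim)]
    b = [random.gauss(0, 3) for _ in range(dim)]
    c = random.uniform(1e-3, 10.0)
    assert young_gap(a, b, c) >= -1e-9  # up to floating-point slack
```

Equality is attained when $b = 2c\,a$, which is why the proofs are free to tune $c$ to trade the two squared norms against each other.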
Using the step-size condition given in (8b), we have

$$
\frac{(1 - \eta_{k+1})}{\gamma_{k}} - \frac{1}{\gamma_{k-1}} = \frac{\frac{\gamma_{k-1} - \gamma_{k}}{\gamma_{k-1}} - \eta_{k+1}}{\gamma_{k}} \leq \frac{-\eta_{k+1}}{2\gamma_{k}}.
$$

Note that by the same step-size condition, $\eta_{k+1} \gg O(l_{g,1}^2\gamma_k^2)$, and thus,

$$
\begin{array}{l} (iii) \leq -\frac{\eta_{k+1}}{2\gamma_{k}\lambda_{k}}\mathbb{E}[\|\tilde{e}_{k}^{x}\|^{2} | \mathcal{F}_{k}] + O(\sigma_{f}^{2}) \cdot \frac{\eta_{k+1}^{2}}{\lambda_{k}\gamma_{k}} + O(\sigma_{g}^{2}) \cdot \left(\frac{\eta_{k+1}^{2}\lambda_{k}}{\gamma_{k}} + \frac{\delta_{k}^{2}}{\gamma_{k}\lambda_{k}}\right) \\ + O(l_{g,1}^{2}) \cdot \left(\xi^{2}\alpha_{k}\|q_{k}^{x}\|^{2} + \alpha_{k}(\|q_{k}^{y}\|^{2} + \|\tilde{e}_{k}^{y}\|^{2}) + \gamma_{k}\lambda_{k}(\|q_{k}^{z}\|^{2} + \|\tilde{e}_{k}^{z}\|^{2})\right).
\\ \end{array}
$$

Similarly, we can use Lemma C.3 and show that

$$
\begin{array}{l} (iv) \leq -\frac{\eta_{k+1}}{2\gamma_{k}\lambda_{k}}\mathbb{E}[\|\tilde{e}_{k}^{y}\|^{2} | \mathcal{F}_{k}] + O(\sigma_{f}^{2}) \cdot \frac{\eta_{k+1}^{2}}{\lambda_{k}\gamma_{k}} + O(\sigma_{g}^{2}) \cdot \left(\frac{\eta_{k+1}^{2}\lambda_{k}}{\gamma_{k}} + \frac{\delta_{k}^{2}}{\gamma_{k}\lambda_{k}}\right) \\ + O\left(l_{g,1}^{2}\right)\alpha_{k} \cdot \left(\xi^{2}\|q_{k}^{x}\|^{2} + \xi^{2}\|\tilde{e}_{k}^{x}\|^{2} + \|q_{k}^{y}\|^{2}\right), \\ \end{array}
$$

$$
(v) \leq -\frac{\eta_{k+1}}{2\gamma_{k}}\mathbb{E}[\|\tilde{e}_{k}^{z}\|^{2} | \mathcal{F}_{k}] + O(\sigma_{g}^{2}) \cdot \frac{\eta_{k+1}^{2}}{\gamma_{k}} + O(l_{g,1}^{2}) \cdot \left(\frac{\xi^{2}\alpha_{k}^{2}}{\gamma_{k}}\|q_{k}^{x}\|^{2} + \frac{\xi^{2}\alpha_{k}^{2}}{\gamma_{k}}\|\tilde{e}_{k}^{x}\|^{2} + \gamma_{k}\|q_{k}^{z}\|^{2}\right).
$$

Plugging the inequalities for $(i) - (v)$ back and arranging terms, we get

$$
\begin{array}{l} \mathbb{E}[\mathbb{V}_{k+1} - \mathbb{V}_{k} | \mathcal{F}_{k}] \leq -\frac{\xi\alpha_{k}}{4}\|\nabla F(x_{k})\|^{2} - \frac{\xi\alpha_{k}}{4}\mathbb{E}[\|q_{k}^{x}\|^{2} | \mathcal{F}_{k}] + \frac{3 C_{\lambda}^{2}}{2}\frac{\xi\alpha_{k}}{\lambda_{k}^{2}} + l_{g,1}\lambda_{k}(i) + \frac{l_{g,1}\lambda_{k}}{2}(ii) \\ + \frac{1}{c_{\eta} l_{g,1}^{2}}(iii) + 2\xi\alpha_{k}\mathbb{E}[\|\tilde{e}_{k}^{x}\|^{2} | \mathcal{F}_{k}] + \frac{1}{c_{\eta} l_{g,1}^{2}}(iv) + \frac{\lambda_{k}}{c_{\eta} l_{g,1}^{2}}(v) \\ \leq -\frac{\xi\alpha_{k}}{4}\|\nabla F(x_{k})\|^{2} - \frac{\lambda_{k} l_{g,1}\mu_{g}\beta_{k}}{4}\|y_{k} - y_{\lambda,k}^{*}\|^{2} - \frac{\lambda_{k} l_{g,1}\mu_{g}\gamma_{k}}{4}\|z_{k} - y_{k}^{*}\|^{2} \\ - \frac{\xi\alpha_{k}}{4}\mathbb{E}[\|q_{k}^{x}\|^{2} | \mathcal{F}_{k}]\left(1 - O(\xi l_{g,1} l_{*,0}^{2}/\mu_{g}) - O(\xi c_{\eta}^{-1})\right) \\ - \alpha_{k}\mathbb{E}[\|q_{k}^{y}\|^{2} | \mathcal{F}_{k}](1 - O(c_{\eta}^{-1})) - \frac{\gamma_{k}\lambda_{k}}{2}\mathbb{E}[\|q_{k}^{z}\|^{2} | \mathcal{F}_{k}](1 - O(c_{\eta}^{-1})) \\ + O\left(\frac{C_{\lambda}^{2}\xi\alpha_{k}}{\lambda_{k}^{2}} + \frac{l_{f,0}^{2} l_{g,1}\delta_{k}^{2}}{\mu_{g}^{3}\lambda_{k}^{3}}\right) + \text{noise variance terms}, \\ \end{array}
$$

where the noise variance terms are

$$
\begin{array}{l} \text{noise} = -\frac{\mathbb{E}[\|\tilde{e}_{k}^{x}\|^{2} | \mathcal{F}_{k}]}{c_{\eta} l_{g,1}^{2}}\left(\frac{\eta_{k+1}}{2\gamma_{k}\lambda_{k}} - O\left(l_{g,1}^{2}\xi^{2}\alpha_{k}\right) - \left(c_{\eta}
\xi \alpha_ {k} l _ {g, 1} ^ {2}\right)\right) \\ - \frac {\mathbb {E} [ \| \tilde {e} _ {k} ^ {y} \| ^ {2} | \mathcal {F} _ {k} ]}{l _ {g , 1} ^ {2} c _ {\eta}} \left(\frac {\eta_ {k + 1}}{2 \gamma_ {k} \lambda_ {k}} - O (l _ {g, 1} ^ {2}) \alpha_ {k} - O (c _ {\eta} l _ {g, 1} ^ {3} / \mu_ {g}) \alpha_ {k}\right) \\ - \frac {\lambda_ {k} \mathbb {E} [ \| \tilde {e} _ {k} ^ {z} \| ^ {2} | \mathcal {F} _ {k} ]}{l _ {g , 1} ^ {2} c _ {\eta}} \left(\frac {\eta_ {k + 1}}{2 \gamma_ {k}} - O (l _ {g, 1} ^ {2}) \gamma_ {k} - O (c _ {\eta} l _ {g, 1} ^ {3} / \mu_ {g}) \gamma_ {k}\right) \\ + \frac {1}{c _ {\eta} l _ {g , 1} ^ {2}} \left(O (\sigma_ {f} ^ {2}) \cdot \frac {\eta_ {k + 1} ^ {2}}{\lambda_ {k} \gamma_ {k}} + O (\sigma_ {g} ^ {2}) \cdot \left(\frac {\eta_ {k + 1} ^ {2} \lambda_ {k}}{\gamma_ {k}} + \frac {\delta_ {k} ^ {2}}{\gamma_ {k} \lambda_ {k}}\right)\right). \\ \end{array} +$$ + +For the all squared terms, with careful design of step-sizes, we can make the coefficient negative. Specifically, we need + +$$ +\left. \xi l _ {g, 1} l _ {*}, 0 ^ {2} / \mu_ {g} \ll 1, c _ {\eta} \gg 1, \right. +$$ + +to negate $q_{k}^{(\cdot)}$ terms, and + +$$ +1 > \eta_ {k + 1} \gg c _ {\eta} \gamma_ {k} ^ {2} (l _ {g, 1} ^ {3} / \mu_ {g}), +$$ + +to suppress noise variance terms, as required in our step-size rules (8). 
Then, we can simplify the bound for the potential function difference:

$$
\begin{array}{l} \mathbb{E}[\mathbb{V}_{k+1} - \mathbb{V}_{k} | \mathcal{F}_{k}] \leq -\frac{\xi\alpha_{k}}{4}\|\nabla F(x_{k})\|^{2} + O(\xi C_{\lambda}^{2}) \cdot \frac{\alpha_{k}}{\lambda_{k}^{2}} + O(l_{f,0}^{2} l_{g,1}/\mu_{g}^{3}) \cdot \frac{\delta_{k}^{2}}{\lambda_{k}^{3}} \\ + \frac{1}{c_{\eta} l_{g,1}^{2}}\left(O(\sigma_{f}^{2}) \cdot \frac{\eta_{k+1}^{2}}{\lambda_{k}\gamma_{k}} + O(\sigma_{g}^{2}) \cdot \left(\frac{\eta_{k+1}^{2}\lambda_{k}}{\gamma_{k}} + \frac{\delta_{k}^{2}}{\gamma_{k}\lambda_{k}}\right)\right). \\ \end{array}
$$

Proof of Theorem 4.3. Summing the above over all $k = 0$ to $K - 1$, using $\delta_k/\lambda_k = O(1/k)$ and $1/\lambda_k, \delta_k/\gamma_k = o(1)$, we obtain Theorem 4.3.

# C.7. Proof of Corollary 4.4

Using the step-sizes specified in (9), since $\lambda_{k} = \gamma_{k}/(2\alpha_{k}) \asymp k^{a-c}$, we have $\delta_k \asymp k^{a-c-1}$. As long as $a - c - 1 < -c$, which is satisfied if $a < 1$, we have $\delta_k/\gamma_k = o(1)$. We can also check that

$$
\frac{\delta_{k}}{\lambda_{k}} \leq (k + k_{0} + 1)^{-1} < \frac{\mu_{g}\beta_{k}}{8} = \frac{(k + k_{0})^{-c}}{k_{0}^{1-c}},
$$

as long as $c < 1$. Given the above, we have

$$
\begin{array}{l} \sum_{k=0}^{K-1}\frac{\mathbb{E}[\|\nabla F(x_{k})\|^{2}]}{(k + k_{0})^{a}} \leq O_{\mathbb{P}}(1) \cdot \sum_{k}\frac{1}{(k + k_{0})^{3a - 2c}} + O_{\mathbb{P}}(\sigma_{f}^{2}) \cdot \sum_{k}\frac{1}{(k + k_{0})^{a + 2c}} \\ + O_{\mathbb{P}}\left(\sigma_{g}^{2}\right) \cdot \sum_{k}\frac{1}{\left(k + k_{0}\right)^{4c - a}} + O_{\mathbb{P}}(1).
\\ \end{array} +$$ + +Again, we consider three regimes: + +Stochasticity in both upper-level and lower-level objectives: $\sigma_f^2, \sigma_g^2 > 0$ . In this case, we set $a = 3/5, c = 2/5$ , and thus $\lambda_k \asymp k^{1/5}$ , which yields + +$$ +\mathbb {E} [ \| \nabla F (x _ {R}) \| ^ {2} ] \asymp \frac {\log K}{K ^ {2 / 5}}. +$$ + +Stochasticity only in the upper-level: $\sigma_f^2 > 0$ , $\sigma_g^2 = 0$ . In this case, we can take $a = 2/4$ , $c = 1/4$ , and thus $\lambda_k \asymp k^{1/4}$ which yields + +$$ +\mathbb {E} [ \| \nabla F (x _ {R}) \| ^ {2} ] \asymp \frac {\log K}{K ^ {2 / 4}}. +$$ + +Deterministic case: $\sigma_f^2 = 0$ , $\sigma_g^2 = 0$ . Here, we can take $a = 1/3$ , $c = 0$ and since there is no stochasticity in the algorithm, we have + +$$ +\left\| \nabla F \left(x _ {K}\right) \right\| ^ {2} \asymp \frac {\log K}{K ^ {2 / 3}}. +$$ + +# D. Deferred Proofs for Lemmas + +# D.1. Proofs for Main Lemmas + +# D.1.1. PROOF OF LEMMA 3.1 + +Proof. Let $y_{\lambda}^{*}(x) \coloneqq \arg \min_{y} \mathcal{L}_{\lambda}(x, y)$ . Note that $\nabla_y \mathcal{L}_\lambda(x, y_\lambda^*(x)) = 0$ , and thus + +$$ +\nabla \mathcal {L} _ {\lambda} ^ {*} (x) = \nabla_ {x} \mathcal {L} _ {\lambda} (x, y _ {\lambda} ^ {*} (x)) + \nabla_ {x} y _ {\lambda} ^ {*} (x) ^ {\top} \nabla_ {y} \mathcal {L} _ {\lambda} (x, y _ {\lambda} ^ {*} (x)) = \nabla_ {x} \mathcal {L} _ {\lambda} (x, y _ {\lambda} ^ {*} (x)). +$$ + +To compare this to $\nabla F(x)$ , we can invoke Lemma A.2 which gives + +$$ +\begin{array}{l} \| \nabla F (x) - \nabla_ {x} \mathcal {L} _ {\lambda} (x, y _ {\lambda} ^ {*} (x)) \| \\ \leq 2 \left(l _ {g, 1} / \mu_ {g}\right) \| y _ {\lambda} ^ {*} (x) - y ^ {*} (x) \| \left(l _ {f, 1} + \lambda \cdot \min \left(2 l _ {g, 1}, l _ {g, 2} \| y ^ {*} (x) - y _ {\lambda} ^ {*} (x) \|\right)\right). 
\\ \end{array}
$$

From Lemma 3.2, we use $\|y_{\lambda}^{*}(x) - y^{*}(x)\| \leq \frac{2 l_{f,0}}{\lambda\mu_{g}}$, and get

$$
\|\nabla F(x) - \nabla_{x}\mathcal{L}_{\lambda}(x, y_{\lambda}^{*}(x))\| \leq \frac{1}{\lambda} \cdot \frac{4 l_{f,0} l_{g,1}}{\mu_{g}^{2}}\left(l_{f,1} + \frac{2 l_{f,0} l_{g,2}}{\mu_{g}}\right).
$$

# D.1.2. PROOF OF LEMMA 3.2

Proof. Note that $\mathcal{L}_{\lambda}(x,y)$ is at least $\frac{\lambda\mu_g}{2}$ strongly-convex in $y$ once $\lambda \geq 2 l_{f,1}/\mu_g$. To see this,

$$
\mathcal{L}_{\lambda}(x, y) = f(x, y) + \lambda\left(g(x, y) - g^{*}(x)\right),
$$

which is at least $(\lambda\mu_g - l_{f,1})$-strongly convex in $y$. If $\lambda > 2 l_{f,1}/\mu_g$, this implies at least $\lambda\mu_g/2$ strong-convexity of $\mathcal{L}_{\lambda}(x,y)$ in $y$.

By the optimality condition at $y_{\lambda_1}^*(x_1)$ with $x_{1}, \lambda_{1}$, we have

$$
\nabla_{y} f\left(x_{1}, y_{\lambda_{1}}^{*}\left(x_{1}\right)\right) + \lambda_{1}\nabla_{y} g\left(x_{1}, y_{\lambda_{1}}^{*}\left(x_{1}\right)\right) = 0,
$$

which also implies that $\|\nabla_{y} g(x_{1}, y_{\lambda_{1}}^{*}(x_{1}))\| \leq l_{f,0}/\lambda_{1}$.
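The strong-convexity threshold above can be seen on a worst-case one-dimensional instance; the following sketch (our own toy constants, not part of the proof) takes $f(y) = -(l_{f,1}/2)y^2$, the most concave function allowed by $l_{f,1}$-smoothness, and $g(y) = (\mu_g/2)y^2$:

```python
# Toy 1-D check: L_lambda(y) = f(y) + lambda*g(y) with f(y) = -(l_f1/2)*y^2 and
# g(y) = (mu_g/2)*y^2 has constant curvature lambda*mu_g - l_f1, which is at
# least lambda*mu_g/2 exactly when lambda >= 2*l_f1/mu_g.
l_f1, mu_g = 4.0, 0.5

def curvature(lam):
    return lam * mu_g - l_f1  # second derivative of L_lambda in y

for lam in [2 * l_f1 / mu_g, 3 * l_f1 / mu_g, 100.0]:
    assert curvature(lam) >= lam * mu_g / 2
# just below the threshold, the (lambda*mu_g)/2 guarantee fails
assert curvature(1.9 * l_f1 / mu_g) < (1.9 * l_f1 / mu_g) * mu_g / 2
```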
Observe that + +$$ +\begin{array}{l} \nabla_ {y} f (x _ {2}, y _ {\lambda_ {1}} ^ {*} (x _ {1})) + \lambda_ {2} \nabla_ {y} g (x _ {2}, y _ {\lambda_ {1}} ^ {*} (x _ {1})) \\ = \left(\nabla_ {y} f \left(x _ {2}, y _ {\lambda_ {1}} ^ {*} \left(x _ {1}\right)\right) - \nabla_ {y} f \left(x _ {1}, y _ {\lambda_ {1}} ^ {*} \left(x _ {1}\right)\right)\right) + \nabla_ {y} f \left(x _ {1}, y _ {\lambda_ {1}} ^ {*} \left(x _ {1}\right)\right) \\ + \lambda_ {2} (\nabla_ {y} g (x _ {2}, y _ {\lambda_ {1}} ^ {*} (x _ {1})) - \nabla_ {y} g (x _ {1}, y _ {\lambda_ {1}} ^ {*} (x _ {1}))) + \lambda_ {2} \nabla_ {y} g (x _ {1}, y _ {\lambda_ {1}} ^ {*} (x _ {1})) \\ = \left(\nabla_ {y} f \left(x _ {2}, y _ {\lambda_ {1}} ^ {*} \left(x _ {1}\right)\right) - \nabla_ {y} f \left(x _ {1}, y _ {\lambda_ {1}} ^ {*} \left(x _ {1}\right)\right)\right) + \lambda_ {2} \left(\nabla_ {y} g \left(x _ {2}, y _ {\lambda_ {1}} ^ {*} \left(x _ {1}\right)\right) - \nabla_ {y} g \left(x _ {1}, y _ {\lambda_ {1}} ^ {*} \left(x _ {1}\right)\right)\right) \\ + \left(\lambda_ {2} - \lambda_ {1}\right) \nabla_ {y} g \left(x _ {1}, y _ {\lambda_ {1}} ^ {*} \left(x _ {1}\right)\right), \\ \end{array} +$$ + +where in the last equality, we applied the optimality condition for $y_{\lambda_1}^*(x_1)$ . Then applying the Lipschitzness of $\nabla_y f$ and $\nabla_y g$ in $x$ , we have + +$$ +\| \nabla_ {y} f (x _ {2}, y _ {\lambda_ {1}} ^ {*} (x _ {1})) + \lambda_ {2} \nabla_ {y} g (x _ {2}, y _ {\lambda_ {1}} ^ {*} (x _ {1})) \| \leq l _ {f, 1} \| x _ {1} - x _ {2} \| + l _ {g, 1} \lambda_ {2} \| x _ {2} - x _ {1} \| + (\lambda_ {2} - \lambda_ {1}) \frac {l _ {f , 0}}{\lambda_ {1}}. 
$$

Since $\mathcal{L}_{\lambda_2}(x_2,y)$ is $\lambda_{2}\mu_{g}/2$-strongly convex in $y$, from the coercivity property of strongly-convex functions, along with the optimality condition for $y_{\lambda_2}^*(x_2)$, we have

$$
\frac{\lambda_{2}\mu_{g}}{2}\|y_{\lambda_{1}}^{*}(x_{1}) - y_{\lambda_{2}}^{*}(x_{2})\| \leq \|\nabla_{y}\mathcal{L}_{\lambda_{2}}(x_{2}, y_{\lambda_{1}}^{*}(x_{1}))\| \leq (l_{f,1} + \lambda_{2} l_{g,1})\|x_{1} - x_{2}\| + \frac{\lambda_{2} - \lambda_{1}}{\lambda_{1}} l_{f,0}.
$$

Dividing both sides by $(\lambda_2\mu_g/2)$ concludes the first part of the proof. Note that $y^{*}(x) = \lim_{\lambda \to \infty} y_{\lambda}^{*}(x)$. Thus, for any $x$ and finite $\lambda \geq 2 l_{f,1}/\mu_g$,

$$
\|y_{\lambda}^{*}(x) - y^{*}(x)\| \leq \frac{2 l_{f,0}}{\lambda\mu_{g}}.
$$

# D.2. Proofs for Auxiliary Lemmas

# D.2.1. PROOF OF LEMMA A.1

Proof. The proof can also be found in Lemma 2.2 of (Ghadimi & Wang, 2018); we provide it here for completeness. Recall that $\nabla F(x)$ is given by

$$
\nabla F(x) = \nabla_{x} f(x, y^{*}(x)) - \nabla_{xy}^{2} g(x, y^{*}(x))\nabla_{yy}^{2} g(x, y^{*}(x))^{-1}\nabla_{y} f(x, y^{*}(x)).
$$

Using the smoothness of the functions and the Hessian-continuity of $g$ in the assumptions, for any $x_{1}, x_{2} \in X$, we get

$$
\begin{array}{l} \|\nabla F(x_{1}) - \nabla F(x_{2})\| \leq \left(l_{f,1} + \frac{l_{f,0}}{\mu_{g}} l_{g,2} + \frac{l_{g,1}}{\mu_{g}} l_{g,1}\right)\left(\|x_{1} - x_{2}\| + \|y^{*}(x_{1}) - y^{*}(x_{2})\|\right) \\ + l_{g,1} l_{f,0}\|\nabla_{yy}^{2} g\left(x_{1}, y^{*}\left(x_{1}\right)\right)^{-1} - \nabla_{yy}^{2} g\left(x_{2}, y^{*}\left(x_{2}\right)\right)^{-1}\| \\ \leq \left(l_{f,1} + \frac{l_{f,0}}{\mu_{g}} l_{g,2} + \frac{l_{g,1}^{2}}{\mu_{g}}\right) l_{*,0}\|x_{1} - x_{2}\| + \frac{l_{g,1} l_{f,0}}{\mu_{g}^{2}} l_{g,2} l_{*,0}\|x_{1} - x_{2}\|. \\ \end{array}
$$

Thus,

$$
\begin{array}{l} l_{F,1} \leq l_{*,0}\left(l_{f,1} + \frac{l_{f,0} l_{g,2} + l_{g,1}^{2}}{\mu_{g}} + \frac{l_{f,0} l_{g,1} l_{g,2}}{\mu_{g}^{2}}\right) \\ \leq l_{*,0}\left(l_{f,1} + \frac{l_{g,1}^{2}}{\mu_{g}} + \frac{2 l_{f,0} l_{g,1} l_{g,2}}{\mu_{g}^{2}}\right), \\ \end{array}
$$

where in the last inequality we used $l_{g,1}/\mu_g \geq 1$.

# D.2.2. PROOF OF LEMMA A.2

We use the short-hand $y^{*} = y^{*}(x)$.

$$
\begin{array}{l} \nabla_{x}\mathcal{L}_{\lambda}(x, y) = \nabla_{x} f(x, y) + \lambda(\nabla_{x} g(x, y) - \nabla_{x} g(x, y^{*})) \\ \nabla_{y}\mathcal{L}_{\lambda}(x, y) = \nabla_{y} f(x, y) + \lambda\nabla_{y} g(x, y). \\ \end{array}
$$

Check that

$$
\begin{array}{l} \nabla F(x) - \nabla_{x}\mathcal{L}_{\lambda}(x, y) = \nabla_{x} f(x, y^{*}) - \nabla_{x} f(x, y) \\ - \nabla_{xy}^{2} g(x, y^{*})\nabla_{yy}^{2} g(x, y^{*})^{-1}\nabla_{y} f(x, y^{*}) - \lambda(\nabla_{x} g(x, y) - \nabla_{x} g(x, y^{*})).
\tag{31} \\ \end{array}
$$

We can rearrange terms for $(\nabla_{x} g(x,y) - \nabla_{x} g(x,y^{*}))$ as the following:

$$
\begin{array}{l} \nabla_{x} g(x, y) - \nabla_{x} g(x, y^{*}) = \nabla_{x} g(x, y) - \nabla_{x} g(x, y^{*}) - \nabla_{xy} g(x, y^{*})^{\top}(y - y^{*}) \\ + \nabla_{xy} g(x, y^{*})^{\top}(y - y^{*}). \tag{32} \\ \end{array}
$$

Note that from the optimality condition for $y^{*}$, $\nabla_y g(x, y^*) = 0$, and from $\nabla_y f(x, y) + \lambda\nabla_y g(x, y) = \nabla_y\mathcal{L}(x, y)$, we can express $y - y^{*}$ as

$$
\begin{array}{l} y - y^{*} = -\nabla_{yy} g(x, y^{*})^{-1}(\nabla_{y} g(x, y) - \nabla_{y} g(x, y^{*}) - \nabla_{yy} g(x, y^{*})(y - y^{*})) \\ + \frac{1}{\lambda}\nabla_{yy} g(x, y^{*})^{-1}\left(\nabla_{y}\mathcal{L}(x, y) - \nabla_{y} f(x, y)\right). \tag{33} \\ \end{array}
$$

Plugging (32) and (33) back to (31), we have

$$
\begin{array}{l} \nabla F(x) - \nabla_{x}\mathcal{L}_{\lambda}(x, y) = (\nabla_{x} f(x, y^{*}) - \nabla_{x} f(x, y)) - \nabla_{xy}^{2} g(x, y^{*})\nabla_{yy}^{2} g(x, y^{*})^{-1}(\nabla_{y} f(x, y^{*}) - \nabla_{y} f(x, y)) \\ - \nabla_{xy}^{2} g(x, y^{*})\nabla_{yy}^{2} g(x, y^{*})^{-1}\nabla_{y}\mathcal{L}(x, y) \\ - \lambda\left(\nabla_{x} g(x, y) - \nabla_{x} g\left(x, y^{*}\right) - \nabla_{xy}^{2} g\left(x, y^{*}\right)^{\top}\left(y - y^{*}\right)\right) \\ \end{array}
$$

$$
+ \lambda\nabla_{xy}^{2} g(x, y^{*})\nabla_{yy}^{2} g(x, y^{*})^{-1}(\nabla_{y} g(x, y) - \nabla_{y} g(x, y^{*}) - \nabla_{yy}^{2} g(x, y^{*})(y - y^{*})).
$$

By the smoothness of $\nabla g$ from Assumption 1, we have

$$
\left\|\nabla_{y} g(x, y) - \nabla_{y} g(x, y^{*}) - \nabla_{yy}^{2} g(x, y^{*})(y - y^{*})\right\| \leq l_{g,2}\|y - y^{*}\|^{2}.
$$

When $\|y - y^{*}\|$ is large, the smoothness of $g$ can be more useful:

$$
\left\|\nabla_{y} g(x, y) - \nabla_{y} g(x, y^{*}) - \nabla_{yy}^{2} g(x, y^{*})(y - y^{*})\right\| \leq 2 l_{g,1}\|y - y^{*}\|.
$$

Similarly, we have

$$
\left\|\nabla_{x} g(x, y) - \nabla_{x} g(x, y^{*}) - \nabla_{xy}^{2} g(x, y^{*})^{\top}(y - y^{*})\right\| \leq \min\left(l_{g,2}\|y - y^{*}\|^{2}, 2 l_{g,1}\|y - y^{*}\|\right).
$$

On the other hand, by the smoothness of $f$, we also have

$$
\|\nabla_{x} f(x, y^{*}) - \nabla_{x} f(x, y)\| \leq l_{f,1}\|y - y^{*}\|, \quad \|\nabla_{y} f(x, y^{*}) - \nabla_{y} f(x, y)\| \leq l_{f,1}\|y - y^{*}\|.
$$

We can conclude that

$$
\begin{array}{l} \left\|\nabla F(x) - \nabla_{x}\mathcal{L}_{\lambda}(x, y) + \nabla_{xy}^{2} g(x, y^{*})\nabla_{yy}^{2} g(x, y^{*})^{-1}\nabla_{y}\mathcal{L}(x, y)\right\| \\ \leq l_{f,1}(1 + l_{g,1}/\mu_{g})\|y - y^{*}\| + \lambda(1 + l_{g,1}/\mu_{g})\|y - y^{*}\|\min(l_{g,2}\|y - y^{*}\|, 2 l_{g,1}). \\ \end{array}
$$

We know that $l_{g,1}/\mu_g \geq 1$ and thus, we have

$$
\begin{array}{l} \left\|\nabla F(x) - \nabla_{x}\mathcal{L}_{\lambda}(x, y) + \nabla_{xy}^{2} g(x, y^{*})\nabla_{yy}^{2} g(x, y^{*})^{-1}\nabla_{y}\mathcal{L}(x, y)\right\| \\ \leq 2\left(l_{g,1}/\mu_{g}\right)\|y - y^{*}\|\left(l_{f,1} + \lambda \cdot \min\left(2 l_{g,1}, l_{g,2}\|y - y^{*}\|\right)\right), \\ \end{array}
$$

yielding the lemma.

# D.2.3. PROOF OF LEMMA A.3

Proof. Lipschitzness of $y_{\lambda}^{*}(x)$ is immediate from Lemma 3.2.
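As a quick sanity check of the Lemma 3.2 bound $\|y_\lambda^*(x) - y^*(x)\| \leq 2 l_{f,0}/(\lambda\mu_g)$ just invoked, consider an exactly solvable instance (our own toy construction, not the paper's): $g(x,y) = (\mu_g/2)(y-x)^2$, so $y^*(x) = x$ and $g^*(x) = 0$, with $f(x,y) = l_{f,0}\, y$, whose $y$-gradient has norm exactly $l_{f,0}$:

```python
# Closed-form toy instance: the minimizer of L_lambda(x, .) = l_f0*y +
# lambda*(mu_g/2)*(y - x)^2 satisfies 0 = l_f0 + lambda*mu_g*(y - x), hence
# y_lambda_star(x) = x - l_f0/(lambda*mu_g), while y_star(x) = x.
mu_g, l_f0 = 2.0, 3.0

def y_star(x):
    return x

def y_lambda_star(x, lam):
    return x - l_f0 / (lam * mu_g)

for lam in [1.0, 10.0, 1000.0]:
    gap = abs(y_lambda_star(1.5, lam) - y_star(1.5))
    assert gap <= 2 * l_f0 / (lam * mu_g)  # Lemma 3.2's bound, with factor-2 slack
```

On this instance the gap is exactly $l_{f,0}/(\lambda\mu_g)$, half the stated bound, so the $O(1/\lambda)$ decay is tight up to the constant.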
By the first-order optimality condition for $y_{\lambda}^{*}(x)$, we have

$$
\nabla_{y} \mathcal{L}_{\lambda}(x, y_{\lambda}^{*}(x)) = \nabla_{y} f(x, y_{\lambda}^{*}(x)) + \lambda \nabla_{y} g(x, y_{\lambda}^{*}(x)) = 0.
$$

Taking the derivative with respect to $x$, we get

$$
(\nabla_{yy}^{2} f(x, y_{\lambda}^{*}(x)) + \lambda \nabla_{yy}^{2} g(x, y_{\lambda}^{*}(x))) \nabla y_{\lambda}^{*}(x) = - (\nabla_{xy}^{2} f(x, y_{\lambda}^{*}(x)) + \lambda \nabla_{xy}^{2} g(x, y_{\lambda}^{*}(x))).
$$

As $\lambda > 2l_{f,1} / \mu_g$, the matrix on the left-hand side is positive definite with minimum eigenvalue larger than $\lambda \mu_g / 2$, and we have

$$
\nabla y_{\lambda}^{*}(x) = - \left(\frac{1}{\lambda} \nabla_{yy}^{2} f(x, y_{\lambda}^{*}(x)) + \nabla_{yy}^{2} g(x, y_{\lambda}^{*}(x))\right)^{-1} \left(\frac{1}{\lambda} \nabla_{xy}^{2} f(x, y_{\lambda}^{*}(x)) + \nabla_{xy}^{2} g(x, y_{\lambda}^{*}(x))\right).
$$

To get the smoothness result, we compare this expression at $x_{1}$ and $x_{2}$, yielding

$$
\begin{array}{l} \frac{\lambda \mu_{g}}{2} \| \nabla y_{\lambda}^{*}(x_{1}) - \nabla y_{\lambda}^{*}(x_{2}) \| \leq (l_{f,2} + \lambda l_{g,2}) (\| x_{1} - x_{2} \| + \| y_{\lambda}^{*}(x_{1}) - y_{\lambda}^{*}(x_{2}) \|) \max_{x \in X} \| \nabla y_{\lambda}^{*}(x) \| \\ + \left(l_{f,2} + \lambda l_{g,2}\right) \left(\| x_{1} - x_{2} \| + \| y_{\lambda}^{*}(x_{1}) - y_{\lambda}^{*}(x_{2}) \|\right) \\ \leq \left(l_{f,2} + \lambda l_{g,2}\right) \left(1 + l_{\lambda,0}\right)^{2} \| x_{1} - x_{2} \|.
\\ \end{array}
$$

Rearranging this, we get

$$
\left\| \nabla y_{\lambda}^{*}(x_{1}) - \nabla y_{\lambda}^{*}(x_{2}) \right\| \leq 32 \left(\frac{l_{f,2}}{\lambda} + l_{g,2}\right) \frac{l_{g,1}^{2}}{\mu_{g}^{3}} \| x_{1} - x_{2} \|.
$$

# D.2.4. PROOF OF LEMMA A.4

Proof. This is immediate from the Lipschitz continuity in Lemma 3.2 by sending $\lambda_1 = \lambda_2$ to infinity.

$$
\begin{array}{l} \mathbb{E} \left[ \left\| y^{*}(x_{k+1}) - y^{*}(x_{k}) \right\|^{2} \mid \mathcal{F}_{k} \right] \leq l_{*,0}^{2} \mathbb{E} \left[ \left\| x_{k+1} - x_{k} \right\|^{2} \mid \mathcal{F}_{k} \right] \\ \leq l_{*,0}^{2} \xi^{2} \left(\alpha_{k}^{2} \mathbb{E} \left[ \| q_{k}^{x} \|^{2} \mid \mathcal{F}_{k} \right] + \alpha_{k}^{2} \sigma_{f}^{2} + \beta_{k}^{2} \sigma_{g}^{2}\right). \\ \end{array}
$$

![](images/b826e69e8824780e9fe1e788f12060285f3f4a438b28306897f56c33890e3262.jpg)

# D.2.5. PROOF OF LEMMA A.5

Proof. We can use the smoothness property of $y^{*}(x)$ as in (Chen et al., 2021), which is crucial to control the noise variance induced by updating $x$. We can start with the following:

$$
\begin{array}{l} \left\langle v_{k}, y_{k+1}^{*} - y_{k}^{*} \right\rangle = \left\langle v_{k}, \nabla y^{*}(x_{k}) (x_{k+1} - x_{k}) \right\rangle \\ + \left\langle v_{k}, y^{*}(x_{k+1}) - y^{*}(x_{k}) - \nabla y^{*}(x_{k}) (x_{k+1} - x_{k}) \right\rangle.
\\ \end{array} +$$ + +On the first term, taking expectation and using $\langle a,b\rangle \leq c\| a\| ^2 +\frac{1}{4c}\| b\| ^2$ + +$$ +\begin{array}{l} \mathbb {E} \left[ \langle v _ {k}, \nabla y ^ {*} (x _ {k}) (x _ {k + 1} - x _ {k}) \rangle \mid \mathcal {F} _ {k} \right] = - \xi \alpha_ {k} \mathbb {E} \left[ \langle v _ {k}, \nabla y ^ {*} (x _ {k}) q _ {k} ^ {x} \rangle \mid \mathcal {F} _ {k} \right] \\ \leq \xi \alpha_ {k} \eta_ {k} \mathbb {E} [ \| v _ {k} \| ^ {2} | \mathcal {F} _ {k} ] + \frac {\xi \alpha_ {k}}{4 \eta_ {k}} \mathbb {E} [ \| \nabla y ^ {*} (x _ {k}) q _ {k} ^ {x} \| ^ {2} | \mathcal {F} _ {k} ] \\ \leq \xi \alpha_ {k} \eta_ {k} \mathbb {E} [ \| v _ {k} \| ^ {2} | \mathcal {F} _ {k} ] + \frac {\xi \alpha_ {k} l _ {* , 0} ^ {2}}{4 \eta_ {k}} \mathbb {E} [ \| q _ {k} ^ {x} \| ^ {2} | \mathcal {F} _ {k} ], \\ \end{array} +$$ + +where we used the Lipschitz continuity of $y^{*}(x)$ . For the second term, using smoothness of $y^{*}(x)$ , + +$$ +\begin{array}{l} \mathbb {E} \left[ \left\langle v _ {k}, y ^ {*} \left(x _ {k + 1}\right) - y ^ {*} \left(x _ {k}\right) - \nabla y ^ {*} \left(x _ {k}\right) \left(x _ {k + 1} - x _ {k}\right) \right\rangle \mid \mathcal {F} _ {k} \right] \\ \leq \frac {l _ {* , 1}}{2} \mathbb {E} \left[ \left\| v _ {k} \right\| \left\| x _ {k + 1} - x _ {k} \right\| ^ {2} \mid \mathcal {F} _ {k} \right] \\ \leq \frac {l _ {* , 1}}{4} \mathbb {E} \left[ \left(l _ {* , 1} \| v _ {k} \| ^ {2} + \frac {1}{l _ {* , 1}}\right) \cdot \| x _ {k + 1} - x _ {k} \| ^ {2} | \mathcal {F} _ {k} \right] \\ \leq \frac {l _ {* , 1} ^ {2}}{4} \mathbb {E} [ \| v _ {k} \| ^ {2} \cdot \mathbb {E} \left[ \| x _ {k} - x _ {k + 1} \| ^ {2} | \mathcal {F} _ {k} ^ {\prime} \right] | \mathcal {F} _ {k} ] \\ + \frac {\xi^ {2}}{4} \left(\alpha_ {k} ^ {2} \mathbb {E} [ \| q _ {k} ^ {x} \| ^ {2} ] + \alpha_ {k} ^ {2} \sigma_ {f} ^ {2} + \beta_ {k} ^ {2} \sigma_ {g} ^ {2}\right), \\ \end{array} +$$ + +where $\mathcal{F}_k^{\prime}$ is a 
sigma-algebra generated by the stochastic noises up to the $k$-th iteration and $v_{k}$. Note that

$$
\mathbb{E} \left[ \| x_{k} - x_{k+1} \|^{2} \mid \mathcal{F}_{k}^{\prime} \right] \leq \xi^{2} \alpha_{k}^{2} \mathbb{E} \left[ \| q_{k}^{x} \|^{2} \mid \mathcal{F}_{k} \right] + \xi^{2} (\alpha_{k}^{2} \sigma_{f}^{2} + \beta_{k}^{2} \sigma_{g}^{2}),
$$

and from the boundedness of $\nabla_{x}f$ and $\nabla_{x}g$ in Assumption 4, we have $\alpha_{k}\| q_{k}^{x}\| \leq \alpha_{k}l_{f,0} + 2\beta_{k}l_{g,0}$. With $M_{f} = l_{f,0}^{2} + \sigma_{f}^{2}$, $M_g = l_{g,0}^2 + \sigma_g^2$, and $M = \max(M_f, M_g)$, we get

$$
\mathbb{E} \left[ \| x_{k} - x_{k+1} \|^{2} \mid \mathcal{F}_{k}^{\prime} \right] \leq 2 \xi^{2} (M_{f} \alpha_{k}^{2} + 2 M_{g} \beta_{k}^{2}) \leq 4 M \xi^{2} l_{*,1}^{2} \beta_{k}^{2},
$$

which yields

$$
\begin{array}{l} \mathbb{E} \left[ \left\langle v_{k}, y^{*}(x_{k+1}) - y^{*}(x_{k}) - \nabla y^{*}(x_{k}) (x_{k+1} - x_{k}) \right\rangle \mid \mathcal{F}_{k} \right] \\ \leq M \xi^{2} l_{*,1}^{2} \beta_{k}^{2} \mathbb{E} [ \| v_{k} \|^{2} \mid \mathcal{F}_{k} ] + \frac{\xi^{2}}{4} \left(\alpha_{k}^{2} \mathbb{E} [ \| q_{k}^{x} \|^{2} \mid \mathcal{F}_{k} ] + \alpha_{k}^{2} \sigma_{f}^{2} + \beta_{k}^{2} \sigma_{g}^{2}\right). \\ \end{array}
$$

Combining all of the above, we obtain the desired result.

![](images/166fd4e224e16dba0dd5b6a5b87936ab2235646b6149e32f68cac6e599f0e09b.jpg)

# D.2.6. PROOF OF LEMMA A.6

Proof.
We can start with the following decomposition:

$$
\begin{array}{l} \left\langle v_{k}, y_{\lambda_{k+1}}^{*}(x_{k+1}) - y_{\lambda_{k}}^{*}(x_{k}) \right\rangle = \left\langle v_{k}, y_{\lambda_{k+1}}^{*}(x_{k+1}) - y_{\lambda_{k}}^{*}(x_{k+1}) \right\rangle \\ + \left\langle v_{k}, \nabla y_{\lambda_{k}}^{*}(x_{k}) (x_{k+1} - x_{k}) \right\rangle \\ + \left\langle v_{k}, y_{\lambda_{k}}^{*}(x_{k+1}) - y_{\lambda_{k}}^{*}(x_{k}) - \nabla y_{\lambda_{k}}^{*}(x_{k}) (x_{k+1} - x_{k}) \right\rangle. \\ \end{array}
$$

For the second and third terms, we can apply the smoothness of $y_{\lambda}^{*}(x)$ as in the proof in D.2.5.

For the first term, taking the expectation and using $\langle a,b\rangle \leq c\| a\|^2 + \frac{1}{4c}\| b\|^2$, we get

$$
\begin{array}{l} \mathbb{E} \left[ \langle v_{k}, y_{\lambda_{k+1}}^{*}(x_{k+1}) - y_{\lambda_{k}}^{*}(x_{k+1}) \rangle \mid \mathcal{F}_{k} \right] \leq c \mathbb{E} \left[ \| v_{k} \|^{2} \right] + \frac{1}{4c} \mathbb{E} \left[ \| y_{\lambda_{k+1}}^{*}(x_{k+1}) - y_{\lambda_{k}}^{*}(x_{k+1}) \|^{2} \right] \\ \leq c \mathbb{E} [ \| v_{k} \|^{2} ] + \frac{1}{c} \frac{\delta_{k}^{2}}{\lambda_{k}^{2} \lambda_{k+1}^{2}} \frac{l_{f,0}^{2}}{\mu_{g}^{2}}, \\ \end{array}
$$

where we applied Lemma 3.2. Taking $c = \frac{\delta_k}{\lambda_k}$, we get

$$
\mathbb{E} \left[ \langle v_{k}, y_{\lambda_{k+1}}^{*}(x_{k+1}) - y_{\lambda_{k}}^{*}(x_{k+1}) \rangle \mid \mathcal{F}_{k} \right] \leq \frac{\delta_{k}}{\lambda_{k}} \mathbb{E} \left[ \| v_{k} \|^{2} \right] + \frac{l_{f,0}^{2} \delta_{k}}{\mu_{g}^{2} \lambda_{k}^{3}}.
$$

Adding this to the bounds on the other two terms, we get the lemma.

# D.3.
Proofs for Auxiliary Lemmas with Momentum

# D.3.1. PROOF OF LEMMA C.1

Due to the Lipschitz continuity of $y^{*}(x)$, we have

$$
\begin{array}{l} \mathbb{E} [ \| y^{*}(x_{k+1}) - y^{*}(x_{k}) \|^{2} ] \leq l_{*,0}^{2} \mathbb{E} [ \| x_{k+1} - x_{k} \|^{2} ] \\ \leq \xi^{2} \alpha_{k}^{2} l_{*,0}^{2} \mathbb{E} [ \| q_{k}^{x} + \tilde{e}_{k}^{x} \|^{2} ] \leq 2 \xi^{2} \alpha_{k}^{2} l_{*,0}^{2} (\mathbb{E} [ \| q_{k}^{x} \|^{2} ] + \mathbb{E} [ \| \tilde{e}_{k}^{x} \|^{2} ]). \\ \end{array}
$$

# D.3.2. PROOF OF LEMMA C.2

Using Lemma 3.2, we have

$$
\begin{array}{l} \mathbb{E} \left[ \| y_{\lambda_{k+1}}^{*}(x_{k+1}) - y_{\lambda_{k}}^{*}(x_{k}) \|^{2} \right] \leq \frac{8 \delta_{k}^{2}}{\lambda_{k}^{2} \lambda_{k+1}^{2}} + 2 l_{\lambda,0}^{2} \mathbb{E} \left[ \| x_{k+1} - x_{k} \|^{2} \right] \\ \leq 4 \xi^{2} \alpha_{k}^{2} l_{*,0}^{2} (\mathbb{E} [ \| q_{k}^{x} \|^{2} ] + \mathbb{E} [ \| \tilde{e}_{k}^{x} \|^{2} ]) + \frac{8 \delta_{k}^{2}}{\lambda_{k}^{4}}. \\ \end{array}
$$

# D.3.3.
PROOF OF LEMMA C.3

We start by unfolding the expression for $\mathbb{E}[\| \tilde{e}_{k+1}^z\|^2]$:

$$
\begin{array}{l} \mathbb{E} [ \| \tilde{e}_{k+1}^{z} \|^{2} ] = \mathbb{E} [ \| \tilde{h}_{z}^{k+1} - q_{k+1}^{z} \|^{2} ] \\ = \mathbb{E} [ \| \nabla_{y} g(x_{k+1}, z_{k+1}; \phi_{z}^{k+1}) + (1 - \eta_{k+1}) (\tilde{h}_{z}^{k} - \nabla_{y} g(x_{k}, z_{k}; \phi_{z}^{k+1})) - q_{k+1}^{z} \|^{2} ] \\ = \mathbb{E} [ \| (1 - \eta_{k+1}) \tilde{e}_{k}^{z} + \nabla_{y} g(x_{k+1}, z_{k+1}; \phi_{z}^{k+1}) \\ + (1 - \eta_{k+1}) \left(q_{k}^{z} - \nabla_{y} g\left(x_{k}, z_{k}; \phi_{z}^{k+1}\right)\right) - q_{k+1}^{z} \|^{2} ] \\ = \left(1 - \eta_{k+1}\right)^{2} \mathbb{E} \left[ \| \tilde{e}_{k}^{z} \|^{2} \right] + \mathbb{E} \left[ \| \eta_{k+1} \left(\nabla_{y} g\left(x_{k+1}, z_{k+1}; \phi_{z}^{k+1}\right) - q_{k+1}^{z}\right) \right. \\ + (1 - \eta_{k+1}) (\nabla_{y} g(x_{k+1}, z_{k+1}; \phi_{z}^{k+1}) - \nabla_{y} g(x_{k}, z_{k}; \phi_{z}^{k+1}) + q_{k}^{z} - q_{k+1}^{z}) \|^{2} ]. \\ \end{array}
$$

In the last equality, we used

$$
\mathbb{E} [ \mathbb{E} [ \left\langle \tilde{e}_{k}^{z}, \nabla_{y} g(x_{k+1}, z_{k+1}; \phi_{z}^{k+1}) - q_{k+1}^{z} \right\rangle \mid \mathcal{F}_{k+1} ] ] = 0,
$$

$$
\mathbb{E} [ \mathbb{E} [ \langle \tilde{e}_{k}^{z}, \nabla_{y} g(x_{k}, z_{k}; \phi_{z}^{k+1}) - q_{k}^{z} \rangle \mid \mathcal{F}_{k+1} ] ] = 0.
$$

Also note that

$$
\mathbb{E} [ \| \nabla_{y} g(x_{k+1}, z_{k+1}; \phi_{z}^{k+1}) - q_{k+1}^{z} \|^{2} ] \leq \sigma_{g}^{2},
$$

which follows from the variance boundedness (Assumption 3).
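The third equality above rests on an exact algebraic decomposition of the momentum estimator error. A small numerical check (ours, not part of the paper, with arbitrary placeholder vectors) confirms the identity:

```python
import numpy as np

# Sanity check (ours) of the error decomposition used in the proof above:
# with the momentum update h_new = G_new + (1 - eta) * (h_old - G_prev),
# the estimator error e_new = h_new - q_new splits exactly into the three
# groups whose second moments are then bounded. All vectors are arbitrary.
rng = np.random.default_rng(0)
dim, eta = 5, 0.3
q_old, q_new = rng.normal(size=dim), rng.normal(size=dim)    # exact gradients
G_prev, G_new = rng.normal(size=dim), rng.normal(size=dim)   # stochastic gradients
h_old = rng.normal(size=dim)                                 # previous estimate
e_old = h_old - q_old

h_new = G_new + (1 - eta) * (h_old - G_prev)   # momentum (STORM-style) update
e_new = h_new - q_new

decomposed = ((1 - eta) * e_old
              + eta * (G_new - q_new)
              + (1 - eta) * (G_new - G_prev + q_old - q_new))
assert np.allclose(e_new, decomposed)
```

Expanding the three groups and cancelling terms recovers $(1-\eta)\tilde{h}^{k} + G^{k+1} - (1-\eta)G^{k} - q^{k+1}$, which is exactly $e_{\text{new}}$.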
We also observe that

$$
\begin{array}{l} \mathbb{E} \left[ \| \nabla_{y} g\left(x_{k+1}, z_{k+1}; \phi_{z}^{k+1}\right) - \nabla_{y} g\left(x_{k}, z_{k}; \phi_{z}^{k+1}\right) \|^{2} \right] \leq l_{g,1}^{2} \left(\| x_{k+1} - x_{k} \|^{2} + \| z_{k+1} - z_{k} \|^{2}\right) \\ = l_{g,1}^{2} (\xi^{2} \alpha_{k}^{2} \| q_{k}^{x} + \tilde{e}_{k}^{x} \|^{2} + \gamma_{k}^{2} \| q_{k}^{z} + \tilde{e}_{k}^{z} \|^{2}), \\ \end{array}
$$

due to Assumption 6. The same inequality holds for $q_{k+1}^{z} - q_{k}^{z}$:

$$
\mathbb{E} \big[ \| q_{k+1}^{z} - q_{k}^{z} \|^{2} \big] \leq l_{g,1}^{2} (\xi^{2} \alpha_{k}^{2} \| q_{k}^{x} + \tilde{e}_{k}^{x} \|^{2} + \gamma_{k}^{2} \| q_{k}^{z} + \tilde{e}_{k}^{z} \|^{2}).
$$

Plugging in these inequalities and using $\| a + b \|^2 \leq 2(\| a \|^2 + \| b \|^2)$ multiple times, we have

$$
\begin{array}{l} \mathbb{E} [ \| \tilde{e}_{k+1}^{z} \|^{2} ] \leq (1 - \eta_{k+1})^{2} (1 + 8 l_{g,1}^{2} \gamma_{k}^{2}) \mathbb{E} [ \| \tilde{e}_{k}^{z} \|^{2} ] + 2 \eta_{k+1}^{2} \sigma_{g}^{2} \\ + 8 l_{g,1}^{2} (1 - \eta_{k+1})^{2} \left(\xi^{2} \alpha_{k}^{2} \mathbb{E} [ \| q_{k}^{x} \|^{2} ] + \xi^{2} \alpha_{k}^{2} \mathbb{E} [ \| \tilde{e}_{k}^{x} \|^{2} ] + \gamma_{k}^{2} \mathbb{E} [ \| q_{k}^{z} \|^{2} ]\right). \\ \end{array}
$$

We can repeat similar steps for $\tilde{e}_{k+1}^{y}$. To simplify matters, with a slight abuse of notation, we let $q_{k}^{y}(\zeta,\phi) \coloneqq \nabla_{y}f(x_{k},y_{k};\zeta) + \lambda_{k}\nabla_{y}g(x_{k},y_{k};\phi)$. Note that $q_{k}^{y} = \mathbb{E}[q_{k}^{y}(\zeta,\phi)]$.
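The $\sigma_f^2 + \lambda^2\sigma_g^2$ variance scaling used below for $q^y$ reflects the independence of the noise in the $f$- and $g$-gradient estimates. A quick Monte Carlo illustration (ours, with arbitrary constants):

```python
import numpy as np

# Illustration (ours, arbitrary constants) of the variance scaling for q^y:
# with independent zero-mean noises on the f- and g-gradient estimates,
# Var(noise_f + lam * noise_g) = sigma_f^2 + lam^2 * sigma_g^2.
rng = np.random.default_rng(1)
sigma_f, sigma_g, lam = 1.0, 2.0, 10.0
n = 1_000_000
noise = sigma_f * rng.normal(size=n) + lam * sigma_g * rng.normal(size=n)
empirical = noise.var()
predicted = sigma_f**2 + lam**2 * sigma_g**2  # = 401.0 for these constants
print(empirical, predicted)
```

Note how the $\lambda^2$ factor dominates for large $\lambda$, which is why the proof tracks the $\lambda_{k+1}^2\sigma_g^2$ term separately.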
Then we can get a similar bound for $\mathbb{E}[\| \tilde{e}_{k + 1}^{y}\| ^2 ]$ + +$$ +\begin{array}{l} \mathbb {E} [ \| \tilde {e} _ {k + 1} ^ {y} \| ^ {2} ] \leq (1 - \eta_ {k + 1}) ^ {2} \mathbb {E} [ \| \tilde {e} _ {k} ^ {y} \| ^ {2} ] + 2 \eta_ {k + 1} ^ {2} \mathbb {E} [ \| q _ {k + 1} ^ {y} (\zeta_ {y} ^ {k + 1}, \phi_ {y} ^ {k + 1}) - q _ {k + 1} ^ {y} \| ^ {2} ] \\ + 2 (1 - \eta_ {k + 1}) ^ {2} \mathbb {E} [ \| (q _ {k + 1} ^ {y} (\zeta_ {y} ^ {k + 1}, \phi_ {y} ^ {k + 1}) - q _ {k} ^ {y} (\zeta_ {y} ^ {k + 1}, \phi_ {y} ^ {k + 1})) + (q _ {k} ^ {y} - q _ {k + 1} ^ {y}) \| ^ {2} ]. \\ \end{array} +$$ + +Using the variance bound similarly, we have + +$$ +\mathbb {E} \left[ \| q _ {k + 1} ^ {y} \left(\zeta_ {y} ^ {k + 1}, \phi_ {y} ^ {k + 1}\right) - q _ {k + 1} ^ {y} \| ^ {2} \right] \leq \sigma_ {f} ^ {2} + \lambda_ {k + 1} ^ {2} \sigma_ {g} ^ {2}. +$$ + +Then, we unfold the last term such that + +$$ +\begin{array}{l} \mathbb {E} [ \| (q _ {k + 1} ^ {y} (\zeta_ {y} ^ {k + 1}, \phi_ {y} ^ {k + 1}) - q _ {k} ^ {y} (\zeta_ {y} ^ {k + 1}, \phi_ {y} ^ {k + 1})) + (q _ {k} ^ {y} - q _ {k + 1} ^ {y}) \| ^ {2} ] \\ = \mathbb {E} [ \| (\nabla_ {y} f (x _ {k + 1}, y _ {k + 1}; \zeta_ {y} ^ {k + 1}) - \nabla_ {y} f (x _ {k}, y _ {k}; \zeta_ {y} ^ {k + 1}) + \nabla_ {y} f (x _ {k}, y _ {k}) - \nabla_ {y} f (x _ {k + 1}, y _ {k + 1})) \\ + \lambda_ {k} \left(\nabla_ {y} g (x _ {k + 1}, y _ {k + 1}; \phi_ {y} ^ {k + 1}) - \nabla_ {y} g (x _ {k}, y _ {k}; \phi_ {y} ^ {k + 1}) + \nabla_ {y} g (x _ {k}, y _ {k}) - \nabla_ {y} g (x _ {k + 1}, y _ {k + 1})\right) \\ \left. 
+ \delta_{k} \left(\nabla_{y} g\left(x_{k+1}, y_{k+1}; \phi_{y}^{k+1}\right) - \nabla_{y} g\left(x_{k+1}, y_{k+1}\right) + \nabla_{y} g\left(x_{k}, y_{k}\right) - \nabla_{y} g\left(x_{k}, y_{k}; \phi_{y}^{k+1}\right)\right) \|^{2} \right] \\ \leq 12 \left(l_{f,1}^{2} + l_{g,1}^{2} \lambda_{k}^{2}\right) \left(\left\| x_{k+1} - x_{k} \right\|^{2} + \left\| y_{k+1} - y_{k} \right\|^{2}\right) + 12 \delta_{k}^{2} \sigma_{g}^{2} \\ \leq 24 (l_{f,1}^{2} \alpha_{k}^{2} + l_{g,1}^{2} \beta_{k}^{2}) (\xi^{2} \| q_{k}^{x} \|^{2} + \xi^{2} \| \tilde{e}_{k}^{x} \|^{2} + \| q_{k}^{y} \|^{2} + \| \tilde{e}_{k}^{y} \|^{2}) + 12 \delta_{k}^{2} \sigma_{g}^{2}. \\ \end{array}
$$

We note that we set $\lambda_{k} \geq 2l_{f,1} / \mu_{g}$; since $\mu_g \leq l_{g,1}$, this gives $l_{f,1} \leq \lambda_k \mu_g / 2 \leq \lambda_k l_{g,1}$. In total, we get

$$
\begin{array}{l} \mathbb{E} [ \| \tilde{e}_{k+1}^{y} \|^{2} ] \leq (1 - \eta_{k+1})^{2} (1 + 96 l_{g,1}^{2} \beta_{k}^{2}) \mathbb{E} [ \| \tilde{e}_{k}^{y} \|^{2} ] + 2 \eta_{k+1}^{2} (\sigma_{f}^{2} + \lambda_{k+1}^{2} \sigma_{g}^{2}) + 24 \delta_{k}^{2} \sigma_{g}^{2} \\ + 96 (1 - \eta_{k+1})^{2} l_{g,1}^{2} \beta_{k}^{2} (\xi^{2} \| q_{k}^{x} \|^{2} + \xi^{2} \| \tilde{e}_{k}^{x} \|^{2} + \| q_{k}^{y} \|^{2}). \\ \end{array}
$$

# D.3.4. PROOF OF LEMMA C.4

Similarly to the case for $\| \tilde{e}_{k+1}^y\|^2$, let us define $q_{k}^{x}(\zeta,\phi) \coloneqq \nabla_{x}f(x_{k},y_{k+1};\zeta) + \lambda_{k}(\nabla_{x}g(x_{k},y_{k+1};\phi) - \nabla_{x}g(x_{k},z_{k+1};\phi))$. We note that $\zeta_x^k,\phi_x^k$ are sampled after $y_{k+1},z_{k+1}$ are updated but before $x_{k}$ is updated.
Hence,

$$
\mathbb{E} \left[ \mathbb{E} \left[ \langle \tilde{e}_{k}^{x}, q_{k+1}^{x}\left(\zeta_{x}^{k+1}, \phi_{x}^{k+1}\right) - q_{k+1}^{x} \rangle \mid \mathcal{F}_{k+1}^{\prime} \right] \right] = 0,
$$

$$
\mathbb{E} [ \mathbb{E} [ \langle \tilde{e}_{k}^{x}, q_{k}^{x}(\zeta_{x}^{k+1}, \phi_{x}^{k+1}) - q_{k}^{x} \rangle \mid \mathcal{F}_{k+1}^{\prime} ] ] = 0.
$$

Thus, following a similar procedure, we have

$$
\mathbb{E} [ \| \tilde{e}_{k+1}^{x} \|^{2} ] = \mathbb{E} [ \| q_{k+1}^{x}(\zeta_{x}^{k+1}, \phi_{x}^{k+1}) + (1 - \eta_{k+1}) (q_{k}^{x} + \tilde{e}_{k}^{x} - q_{k}^{x}(\zeta_{x}^{k+1}, \phi_{x}^{k+1})) - q_{k+1}^{x} \|^{2} ]
$$

$$
\begin{array}{l} \leq \left(1 - \eta_{k+1}\right)^{2} \mathbb{E} \left[ \| \tilde{e}_{k}^{x} \|^{2} \right] + 2 \eta_{k+1}^{2} \mathbb{E} \left[ \| q_{k+1}^{x}\left(\zeta_{x}^{k+1}, \phi_{x}^{k+1}\right) - q_{k+1}^{x} \|^{2} \right] \\ + 2 \left(1 - \eta_{k+1}\right)^{2} \mathbb{E} \left[ \left\| \left(q_{k+1}^{x}\left(\zeta_{x}^{k+1}, \phi_{x}^{k+1}\right) - q_{k}^{x}\left(\zeta_{x}^{k+1}, \phi_{x}^{k+1}\right)\right) + \left(q_{k}^{x} - q_{k+1}^{x}\right) \right\|^{2} \right]. \\ \end{array}
$$

Note that

$$
\begin{array}{l} \mathbb{E} \left[ \| q_{k+1}^{x}\left(\zeta_{x}^{k+1}, \phi_{x}^{k+1}\right) - q_{k+1}^{x} \|^{2} \right] \\ = \mathbb{E} [ \| (\nabla_{x} f(x_{k+1}, y_{k+2}; \zeta_{x}^{k+1}) - \nabla_{x} f(x_{k+1}, y_{k+2})) \\ \left.
+ \lambda_{k} \left(\nabla_{x} g\left(x_{k+1}, y_{k+2}; \phi_{x}^{k+1}\right) - \nabla_{x} g\left(x_{k+1}, y_{k+2}\right)\right) + \lambda_{k} \left(\nabla_{x} g\left(x_{k+1}, z_{k+2}; \phi_{x}^{k+1}\right) - \nabla_{x} g\left(x_{k+1}, z_{k+2}\right)\right) \|^{2} \right] \\ \leq 3 \left(\sigma_{f}^{2} + \lambda_{k}^{2} \sigma_{g}^{2}\right). \\ \end{array}
$$

Finally, we have

$$
\begin{array}{l} \mathbb{E} [ \| (q_{k+1}^{x}(\zeta_{x}^{k+1}, \phi_{x}^{k+1}) - q_{k}^{x}(\zeta_{x}^{k+1}, \phi_{x}^{k+1})) + (q_{k}^{x} - q_{k+1}^{x}) \|^{2} ] \\ = \mathbb{E} [ \| (\nabla_{x} f(x_{k+1}, y_{k+2}; \zeta_{x}^{k+1}) - \nabla_{x} f(x_{k}, y_{k+1}; \zeta_{x}^{k+1}) + \nabla_{x} f(x_{k}, y_{k+1}) - \nabla_{x} f(x_{k+1}, y_{k+2})) \\ + \lambda_{k} \left(\nabla_{x} g\left(x_{k+1}, y_{k+2}; \phi_{x}^{k+1}\right) - \nabla_{x} g\left(x_{k}, y_{k+1}; \phi_{x}^{k+1}\right) + \nabla_{x} g\left(x_{k}, y_{k+1}\right) - \nabla_{x} g\left(x_{k+1}, y_{k+2}\right)\right) \\ + \lambda_{k} \left(\nabla_{x} g\left(x_{k+1}, z_{k+2}; \phi_{x}^{k+1}\right) - \nabla_{x} g\left(x_{k}, z_{k+1}; \phi_{x}^{k+1}\right) + \nabla_{x} g\left(x_{k}, z_{k+1}\right) - \nabla_{x} g\left(x_{k+1}, z_{k+2}\right)\right) \\ + \delta_{k} \left(\nabla_{x} g\left(x_{k+1}, y_{k+2}; \phi_{x}^{k+1}\right) - \nabla_{x} g\left(x_{k+1}, y_{k+2}\right) + \nabla_{x} g\left(x_{k}, y_{k+1}\right) - \nabla_{x} g\left(x_{k}, y_{k+1}; \phi_{x}^{k+1}\right)\right) \\ + \delta_{k} \left(\nabla_{x} g\left(x_{k+1}, z_{k+2}; \phi_{x}^{k+1}\right) - \nabla_{x} g\left(x_{k+1}, z_{k+2}\right) + \nabla_{x} g\left(x_{k}, z_{k+1}\right) - \nabla_{x} g\left(x_{k}, z_{k+1}; \phi_{x}^{k+1}\right)\right) \|^{2} ]. \\ \end{array}
$$

Using the Cauchy-Schwarz inequality, we get

$$
\begin{array}{l} \mathbb{E} [ \| (q_{k+1}^{x}(\zeta_{x}^{k+1}, \phi_{x}^{k+1}) - q_{k}^{x}(\zeta_{x}^{k+1}, \phi_{x}^{k+1})) + (q_{k}^{x} - q_{k+1}^{x}) \|^{2} ] \\ \leq 30 \left(l_{f,1}^{2} + l_{g,1}^{2} \lambda_{k}^{2}\right) \left(\| x_{k+1} - x_{k} \|^{2} + \| y_{k+2} - y_{k+1} \|^{2} + \| z_{k+2} - z_{k+1} \|^{2}\right) + 40 \delta_{k}^{2} \sigma_{g}^{2} \\ \leq 120 l_{g,1}^{2} \lambda_{k}^{2} (\xi^{2} \alpha_{k}^{2} (\| q_{k}^{x} \|^{2} + \| \tilde{e}_{k}^{x} \|^{2}) + \alpha_{k+1}^{2} (\| q_{k}^{y} \|^{2} + \| \tilde{e}_{k}^{y} \|^{2}) + \gamma_{k+1}^{2} (\| q_{k}^{z} \|^{2} + \| \tilde{e}_{k}^{z} \|^{2})) + 40 \delta_{k}^{2} \sigma_{g}^{2}. \\ \end{array}
$$

Combining all of the above, we obtain the result. \ No newline at end of file diff --git a/afullyfirstordermethodforstochasticbileveloptimization/images.zip b/afullyfirstordermethodforstochasticbileveloptimization/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..5e842b433d04f295b8b08173bfe21790224ea3a4 --- /dev/null +++ b/afullyfirstordermethodforstochasticbileveloptimization/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:e9ebf9c2c80dfa60875323781482748d2b60354c478ed20e8ca8ae5121ba97b2 +size 2735428 diff --git a/afullyfirstordermethodforstochasticbileveloptimization/layout.json b/afullyfirstordermethodforstochasticbileveloptimization/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..51397c1d5bfe4960eb75cda1c040dca7744b949b --- /dev/null +++ b/afullyfirstordermethodforstochasticbileveloptimization/layout.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:240f18d7527eda33cfc70b62fc9eedb50e9d21f6087a9494975aff1073434b42 +size 1710638
diff --git a/agametheoreticframeworkformanagingriskinmultiagentsystems/06be05ec-2afb-4656-b040-2e3aa81213aa_content_list.json b/agametheoreticframeworkformanagingriskinmultiagentsystems/06be05ec-2afb-4656-b040-2e3aa81213aa_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..2e31623ffc0e33f54ecb16239737a8c24594d344 --- /dev/null +++ b/agametheoreticframeworkformanagingriskinmultiagentsystems/06be05ec-2afb-4656-b040-2e3aa81213aa_content_list.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:64f6ccbc3ac9f407d75d1d18ab30acf99fc83d88802605530df2563087a7583c +size 156746 diff --git a/agametheoreticframeworkformanagingriskinmultiagentsystems/06be05ec-2afb-4656-b040-2e3aa81213aa_model.json b/agametheoreticframeworkformanagingriskinmultiagentsystems/06be05ec-2afb-4656-b040-2e3aa81213aa_model.json new file mode 100644 index 0000000000000000000000000000000000000000..29accdc320f2faa2d32ac8c8c8a4942f8711c462 --- /dev/null +++ b/agametheoreticframeworkformanagingriskinmultiagentsystems/06be05ec-2afb-4656-b040-2e3aa81213aa_model.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:4d53a7a1e0da2f444accca72736c7ede3564a546f1bc915235e6db8331a0aa45 +size 194573 diff --git a/agametheoreticframeworkformanagingriskinmultiagentsystems/06be05ec-2afb-4656-b040-2e3aa81213aa_origin.pdf b/agametheoreticframeworkformanagingriskinmultiagentsystems/06be05ec-2afb-4656-b040-2e3aa81213aa_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..5f25f7a53664f595651d60bee16634c9c8fa2ec9 --- /dev/null +++ b/agametheoreticframeworkformanagingriskinmultiagentsystems/06be05ec-2afb-4656-b040-2e3aa81213aa_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:266f8d13f219c28dfb7369654b8dc1ac68d1224fa0c4f9a713c2c24cb90e3df5 +size 2437047 diff --git a/agametheoreticframeworkformanagingriskinmultiagentsystems/full.md 
b/agametheoreticframeworkformanagingriskinmultiagentsystems/full.md new file mode 100644 index 0000000000000000000000000000000000000000..f5a60368887ddc614a31355fa99f31bacf59dd6a --- /dev/null +++ b/agametheoreticframeworkformanagingriskinmultiagentsystems/full.md @@ -0,0 +1,773 @@ +# A Game-Theoretic Framework for Managing Risk in Multi-Agent Systems + +Oliver Slumbers $^{1,2}$ David Henry Mguni $^{2}$ Stefano B. Blumberg $^{1}$ Stephen McAleer $^{3}$ Yaodong Yang $^{4}$ Jun Wang $^{1}$ + +# Abstract + +In order for agents in multi-agent systems (MAS) to be safe, they need to take into account the risks posed by the actions of other agents. However, the dominant paradigm in game theory (GT) assumes that agents are not affected by risk from other agents and only strive to maximise their expected utility. For example, in hybrid human-AI driving systems, it is necessary to limit large deviations in reward resulting from car crashes. Although there are equilibrium concepts in game theory that take into account risk aversion, they either assume that agents are risk-neutral with respect to the uncertainty caused by the actions of other agents, or they are not guaranteed to exist. We introduce a new GT-based Risk-Averse Equilibrium (RAE) that always produces a solution that minimises the potential variance in reward accounting for the strategy of other agents. Theoretically and empirically, we show RAE shares many properties with a Nash Equilibrium (NE), establishing convergence properties and generalising to risk-dominant NE in certain cases. To tackle large-scale problems, we extend RAE to the PSRO multi-agent reinforcement learning (MARL) framework. We empirically demonstrate the minimum reward variance benefits of RAE in matrix games with high-risk outcomes. Results on MARL experiments show RAE generalises to risk-dominant NE in a trust dilemma game and that it reduces instances of crashing by 7x in an autonomous driving setting versus the best performing baseline. 
# 1. Introduction

Game Theory (GT) is a fundamental tool for resolving problems within multi-agent systems (MAS) (Wellman, 2006). Formalising scenarios in MAS as games allows practitioners to model stable outcomes in real-world settings by computing equilibrium solutions. A core challenge for MAS research is building systems that perform effectively while mitigating the risk of adverse events for the agents within the system. In particular, as the level of integration between humans and AI agents increases, it becomes increasingly important to avoid dangerous events for humans, e.g. car crashes (Gal & Grosz, 2022). In AI-only systems, risk can come in the form of agents taking low-probability yet dangerous strategies (e.g. $\epsilon$-greedy policies outside of training (Mnih et al., 2015)). Human-AI MAS differ markedly from simulated systems that involve computerised agents only; however, the problem remains the same: humans are prone to execution errors, a trait that seldom applies to computerised agents (Gal & Grosz, 2022). Tackling this challenge requires equilibrium concepts that can account for all possible rewards dependent on other agents' actions and enable agents to adopt risk-averse strategies in response.

There are two prominent approaches in GT that propose equilibrium concepts that address risk: Trembling Hand Perfect Equilibrium (THPE) (Bielefeld, 1988) and Quantal Response Equilibrium (QRE) (McKelvey & Palfrey, 1995). However, due to the linearity of expected utility (EU), these concepts can undervalue comparatively large costs with low probability given an agent's tolerance for risk, which undermines their ability to successfully resolve notions of safety and risk in various practical MAS (e.g. crashing has a large negative utility). For example, as demonstrated in Fig. 1, a probability of $0.01\%$ of Player 2 Overtaking only impacts Player 1's EU by 0.5.
Therefore, based on EU, Overtaking is preferable for Player 1, despite exposing it to the possibility of congestion in the middle of the road. Furthermore, it is difficult to control the level of risk-aversion in these frameworks as they are defined only in terms of EU.

![](images/a2b3be6b52c211b

c4438f9702cfb7d6aa42be60dc4f1745447c61c29c287a3a7.jpg)
Figure 1. Two cars are rewarded for reaching their destination quickly. They are stuck behind slow tractors but can stay in their lanes and arrive slowly. They can also overtake the tractors to arrive quickly, but if both overtake they will cause large delays, leading to large negative payoffs.

|              | Stay in Lane | Overtake |
|--------------|--------------|----------|
| Stay in Lane | 5, 5         | 0, 20    |
| Overtake     | 20, 0        | -50, -50 |

Other approaches characterise risk in terms of concave/convex utility functions and study convergence to classical GT equilibria. In Fiat & Papadimitriou 2010, different attitudes to risk are theoretically considered, in particular a risk-averse variant based on including variance in the utility function, similar to our proposal. However, they find that almost all of their risk-adjusted games will not have a mixed Nash equilibrium. Even if the risk-adjusted game does have a mixed Nash equilibrium, small mistakes made by players in interpreting the payoffs of the game will likely cause the equilibrium to be unstable. In contrast, our approach guarantees the existence of a risk-adjusted equilibrium. Furthermore, Fiat & Papadimitriou 2010 only provide theoretical analyses, whereas we are interested in practical solvers to attain a solution.

In this paper we propose a novel equilibrium concept called Risk-Averse Equilibrium (RAE). RAE distinguishes itself from THPE and QRE by including the second moment (variance) of the utility function with respect to the strategies of other agents. Unlike these approaches, RAE has a risk-aversion parameter explicitly controlling the amount of risk an agent is willing to accept. This places larger emphasis on deviations from expected utility: the variance term in RAE values the $0.01\%$ Overtake probability in Fig. 1 at $47.6\gamma$ (vs. 0.5), where $\gamma > 0$ and generally at least 0.1; see Appendix G. We show that RAE can be computed using numerical and iterative methods, unlike the theoretical approaches in (Fiat & Papadimitriou, 2010), which unlocks its ability to model large-scale MARL games simulating real-world settings. We also show that, for any desired expected utility, RAE minimises the corresponding variance.
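The mean-variance trade-off in the Figure 1 game can be made concrete with a short script. This is our illustration, not the paper's code: the payoffs follow Figure 1, $\mathrm{EU} - \gamma \cdot \mathrm{UVar}$ is an illustrative stand-in for RAE's risk adjustment, and the quoted 0.5 and $47.6\gamma$ figures come from the paper's own parameterisation in Appendix G.

```python
import numpy as np

# Illustration (ours): expected utility (EU) and utility variance (UVar) for
# Player 1 in the Figure 1 game when Player 2 Overtakes with probability p.
U1 = np.array([[5.0, 0.0],      # Player 1 Stay in Lane: (vs Stay, vs Overtake)
               [20.0, -50.0]])  # Player 1 Overtake

def eu_and_uvar(row, p):
    probs = np.array([1.0 - p, p])          # Player 2: (Stay, Overtake)
    mean = U1[row] @ probs
    var = ((U1[row] - mean) ** 2) @ probs   # variance induced by Player 2's mix
    return mean, var

p, gamma = 0.01, 1.0   # illustrative overtake probability and risk weight
eu_stay, var_stay = eu_and_uvar(0, p)
eu_over, var_over = eu_and_uvar(1, p)
# Overtake wins on EU alone, but its variance p*(1-p)*70**2 makes a
# mean-variance objective EU - gamma * UVar prefer Stay in Lane.
assert eu_over > eu_stay
assert eu_over - gamma * var_over < eu_stay - gamma * var_stay
```

Because the crash outcome sits 70 reward units below the good Overtake outcome, the variance term grows with the square of that gap, while the EU impact is only linear in $p$.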
In other words, RAE reduces highly adverse outcomes caused by the actions of other agents in the system.

To demonstrate the benefits of RAE, we perform a series of experiments in risky matrix games, and two larger-scale MARL problems which are closer to real-world scenarios: a grid-world Stag-Hunt trust dilemma, and an autonomous driving scenario. Our results validate our theoretical advances on risky matrix games by showing that the RAE solutions provide the same EU as our baselines whilst being safer in providing lower variance. In the Stag-Hunt MARL setting, RAE outperforms the baselines and arrives at the safe solution, which the other methods do not. In the autonomous driving setting, RAE reduces crashes 7x in test episodes as compared with the best performing baseline equilibrium.

# Our contributions are:

1. We propose RAE, a new solution concept that accounts for expected utility (EU) and utility variance (UVar) in determining a strategy (Sec. 3).
2. We prove that RAE's strategy is the minimum variance solution (Prop. 3.2) and we prove RAE's existence (Thrm. 3.3).
3. We introduce two solvers to compute RAE in small action spaces (Sec. 4.1) and large action spaces (Sec. 4.2).
4. We validate RAE on risky matrix games (Sec. 5.1) and in experiments on two MARL settings: a grid-world Stag-Hunt (Sec. 5.2) and an autonomous driving scenario (Sec. 5.3), where RAE outperforms state-of-the-art GT approaches THPE, QRE and simple NE.

# 2. Related Work

Risk in GT: Baselines Harsanyi et al. 1988 introduced risk-dominant Nash equilibria (NE) (Nash, 1951), which, when the strategies of other agents are unknown, leads to the NE with the lowest losses if deviated from. However, if none of the NE are robust to risk initially, then the risk-dominant strategy amongst NEs also will not be. THPE (Bielefeld, 1988) models risk by having agents 'tremble' through all actions having positive probability in a mixed strategy.
However, iterated deletion of weakly dominated strategies can lead to the THPE strategy being removed from consideration. McKelvey & Palfrey 1995 introduce QRE, which accounts for potential errors in strategy selection to more accurately represent observed human strategies. Note that THPE and QRE utilise EU as their risk measure, which is insensitive to low-probability, high-value deviations as long as they are offset by high-probability mean values (Royset, 2022). THPE, QRE and NE are our baselines.

Risk in GT: Other Approaches Yekkehkhany et al. 2020 propose a mean-variance equilibrium where the variance relates to reward probabilities, rather than to variance caused by the strategies of others as in RAE. Notably, in the model-free machine learning setting reward probabilities are generally not known, so this approach is outside the scope of this work. Fiat & Papadimitriou 2010 consider how NE existence is impacted by non-EU-maximising agents, and derive results for multiple risk categorisations; however, they limit their work to theoretical propositions that cannot serve as baselines for this work. Risk in competitive network games (Wardrop, 1952) is widely studied, and is based on a generalisation of the classical selfish routing 'game' (Beckmann et al., 1956) to incorporate uncertain delays. The impact of mean-risk utility frameworks on Wardrop equilibria (WE) is particularly well studied (Ordóñez & Stier-Moses, 2010; Lianeas et al., 2019; Cherukuri, 2019); however, WE is not applicable to the non-network games studied in this work. In terms of NE, the major distinction in risk analysis is between non-atomic (i.e. infinite agents) and atomic (i.e. finite agents) settings. In non-atomic settings (Meir & Parkes, 2015; Nikolova & Stier-Moses, 2012) the marginal impact in terms of risk of each agent on each other is infinitesimal, and this leads to a distinctly different analysis than required in the atomic setting of this work (Nikolova & Stier-Moses, 2014).
In the atomic setting, which parallels ours more closely, Nikolova & Stier-Moses 2014 study a mean-standard deviation model of travel time along a network path; whilst they are able to show the existence of pure-strategy NE in exogenous risk settings, they are unable to do so in the endogenous risk settings that concern our work. Piliouras et al. 2016 also study a setting where the risk is largely exogenous, as it is defined by randomised schedulers which control the ordering through congestion edges in the network; this is not a concept in our setting and therefore cannot act as a baseline.

Risk in MARL: RAE fits broadly into risk-sensitive MARL solutions such as: RMIX (Qiu et al., 2021), which optimises decentralised CVaR policies in cooperative risk-sensitive settings; RAM-Q and RA3-Q (Gao et al., 2021), which utilise an adversarial approach to promote variance reduction; and risk-sensitive DAPG (Eriksson et al., 2022), which approaches risk in Bayesian games in terms of the CVaR induced by the possible combinations of types in the game. However, as we are specifically concerned with GT-based equilibrium concepts we do not directly compare to these methods.

GT and MARL GT and RL have overlapped in settings where the number of actions in a game becomes too big to write down trivially. For these games with larger action spaces, we consider Policy-Space Response Oracles (PSRO) (Lanctot et al., 2017; McAleer et al., 2020; Perez-Nieves et al., 2021; Feng et al., 2021; McAleer et al., 2022b), which generalises the Double Oracle (DO) (McMahan et al., 2003; Dinh et al., 2022; McAleer et al., 2021) framework from small action spaces to large action spaces by replacing actions with RL policies. In this work, we propose a framework at the overlap between risk-averse GT and risk-averse MARL which involves adaptations to the risk-neutral frameworks mentioned here.
In future work we will investigate applying other recent deep RL approaches for finding equilibria (Perolat et al., 2022; McAleer et al., 2022a; Sokota et al.) to our solution concept.

# 3. Risk-Averse Equilibrium Framework

We introduce RAE and detail its derivation from a mean-variance utility function. We derive key properties of RAE which characterise two important benefits: (1) RAE is a minimum risk solution; (2) an RAE exists for any finite N-player game (under standard conditions).

# 3.1. RAE Derivation

We begin by describing the underlying formalism of a normal form game (NFG). An NFG is the standard representation of strategic interaction in GT. A finite $n$-person NFG is a tuple $(N, A, u)$, where $N$ is a finite set of $n$ players, $A = A^1 \times \cdots \times A^n$ is the joint action profile, with $A^i$ being the actions available to player $i$, and $u = (u^1, \ldots, u^n)$ where $u^i: A \to \mathbb{R}$ is the real-valued utility function for each player. A player plays a mixed-strategy, $\sigma^i \in \Delta_{A^i}$, which is a probability distribution over their possible actions. In Sec. 4.2 we replace atomic actions with neural networks and will therefore re-define our notation to keep clarity between the two game schemes.

The objective of RAE is to provide an equilibrium solution that is robust to other agents taking any action. Our approach considers both the EU (mean) and the potential UVar caused by an opponent's strategy, in particular accounting for the possibility that any action may be taken by the opponent, as if mistakes (low probability actions) can happen. Low probability actions are not literally mistakes; rather, they are a technical device to mimic the idea of a mistake, as described in Bielefeld 1988.

For simplicity, we provide definitions based on playing a symmetric game, such that two players share an action set $\mathcal{A}$, and a utility function $u$.
We extend this to the non-symmetric case in Appendix K.1. Define the utility of action $a_i \in \mathcal{A}$ against action $a_j \in \mathcal{A}$ as $u(a_i, a_j)$ and the full utility matrix as $\mathbf{M}$, where the entry $\mathbf{M}_{i,j}$ refers to $u(a_i, a_j)$ and $\mathbf{M}_i$ refers to $u(a_i, a_j)\ \forall j$, i.e. the vector of utilities that action $a_i$ receives against all other actions. We now define the expected utility of the mixed-strategy $\boldsymbol{\sigma}$ for player 1 versus the mixed-strategy $\boldsymbol{\varsigma}$ for player 2 as

$$
\begin{aligned} u(\boldsymbol{\sigma}, \boldsymbol{\varsigma}, \mathbf{M}) &= \sum_{a_i \in A} \sum_{a_j \in A} \sigma(a_i)\, \varsigma(a_j)\, u(a_i, a_j) \\ &= \boldsymbol{\sigma}^T \cdot \mathbf{M} \cdot \boldsymbol{\varsigma}. \end{aligned} \tag{1}
$$

The weighted covariance matrix for $\mathbf{M}$ (i.e. the UVar values) is a $|A| \times |A|$ matrix $\boldsymbol{\Sigma}_{\mathbf{M}, \varsigma} = [c_{jk}]$ with entries

$$
c_{jk} = \sum_{a_i \in A} \varsigma(a_i) \left(u(a_j, a_i) - \bar{\mathbf{M}}_j\right) \left(u(a_k, a_i) - \bar{\mathbf{M}}_k\right), \tag{2}
$$

where $\bar{\mathbf{M}}_j = \sum_{k=1}^{|A|} \varsigma_k\, u(a_j, a_k)$ is the EU for action $a_j$ given the opponent mixed-strategy $\boldsymbol{\varsigma}$. This is a standard covariance matrix, but the entries are weighted by the likelihood of the opponent selecting them. A uniform weighting could be used; however, we hypothesise that in terms of minimising UVar it is more appropriate to hedge against the variance caused by higher likelihood actions. Due to the nature of Eq. 5, all actions will, by design, receive positive probability ($\geq \epsilon$) under our framework and will therefore always carry some weight in the variance calculation, so low likelihood, high-variance actions still have a large impact.
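As a concrete illustration, Eqs. (1)-(2) can be transcribed directly, orienting the utility matrix so that `M[j][i]` is the utility of one's own action $a_j$ against opponent action $a_i$ (consistent with the definition of $\bar{\mathbf{M}}_j$). The 2x2 payoff values below are hypothetical, not taken from the paper:

```python
# Minimal sketch of Eqs. (1)-(2) for a symmetric 2-action NFG.
# M[i][j] = u(a_i, a_j); all payoff values are illustrative.

def expected_utility(sigma, varsigma, M):
    """Eq. (1): u(sigma, varsigma, M) = sigma^T . M . varsigma."""
    n = len(M)
    return sum(sigma[i] * varsigma[j] * M[i][j]
               for i in range(n) for j in range(n))

def weighted_covariance(varsigma, M):
    """Eq. (2): covariance of each action's utilities,
    weighted by the opponent mixed-strategy varsigma."""
    n = len(M)
    # M_bar[j]: expected utility of action a_j against varsigma
    M_bar = [sum(varsigma[k] * M[j][k] for k in range(n)) for j in range(n)]
    return [[sum(varsigma[i] * (M[j][i] - M_bar[j]) * (M[k][i] - M_bar[k])
                 for i in range(n))
             for k in range(n)]
            for j in range(n)]

M = [[5.0, -10.0], [0.0, 20.0]]
eu = expected_utility([0.5, 0.5], [0.9, 0.1], M)   # ~2.75
Sigma = weighted_covariance([0.9, 0.1], M)
```

Note how the off-diagonal entries of `Sigma` are symmetric, so the quadratic form $\boldsymbol{\sigma}^T \boldsymbol{\Sigma}_{\mathbf{M},\varsigma} \boldsymbol{\sigma}$ used later is well-defined as a variance.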
This accounts for the central concept of RAE, that safe play should account for all actions. This allows us to define the UVar of mixed-strategy $\boldsymbol{\sigma}$ as:

$$
\begin{aligned} \operatorname{Var}(\boldsymbol{\sigma}, \boldsymbol{\varsigma}, \mathbf{M}) &= \sum_{k=1}^{|A|} \sum_{n=1}^{|A|} \sigma(a_k)\, \sigma(a_n)\, c_{kn} \\ &= \boldsymbol{\sigma}^T \cdot \boldsymbol{\Sigma}_{\mathbf{M}, \varsigma} \cdot \boldsymbol{\sigma}. \end{aligned} \tag{3}
$$

The total utility function $r$, which considers EU and UVar for mixed-strategy $\boldsymbol{\sigma}$, is

$$
r(\boldsymbol{\sigma}, \boldsymbol{\varsigma}, \mathbf{M}) = u(\boldsymbol{\sigma}, \boldsymbol{\varsigma}, \mathbf{M}) - \gamma \operatorname{Var}(\boldsymbol{\sigma}, \boldsymbol{\varsigma}, \mathbf{M}), \tag{4}
$$

where $\gamma \in \mathbb{R}$ is the risk-aversion parameter.

Applying Eq. (4) to Fig. 1 shows why this utility function is desirable. Consider two joint strategy profiles, $S_1 = ((1-\epsilon, 0+\epsilon), (1-\epsilon, 0+\epsilon))$ and a THPE $S_2 = ((0+\epsilon, 1-\epsilon), (1-\epsilon, 0+\epsilon))$, where $(1-\epsilon, 0+\epsilon)$ represents playing Stay in Lane with probability $(1-\epsilon)$; $\epsilon = 0.01$ for this example. Profile $S_1$ receives $u(S_1) \approx 5$ and the THPE profile receives $u(S_2) \approx 20$. However, $\operatorname{Var}(S_1) = 0.32$ and $\operatorname{Var}(S_2) = 47.6$, i.e. the THPE strategy has huge variance for Player 1. Therefore, $r(S_1) \approx 5 - 0.32\gamma$ and $r(S_2) \approx 20 - 47.6\gamma$, and for any risk-aversion parameter $\gamma > 0.32$ it is optimal to play $S_1$.

To define RAE, we first define the best-response map:

$$
\begin{aligned} \boldsymbol{\sigma}^{*}(\boldsymbol{\varsigma}) \in \underset{\boldsymbol{\sigma}}{\arg\max}\; & r(\boldsymbol{\sigma}, \boldsymbol{\varsigma}, \mathbf{M}) \\ \mathrm{s.t.}\; & \sigma(a) \geq \epsilon,\ \forall a \in A \\ & \boldsymbol{\sigma}^T \mathbf{1} = 1, \end{aligned} \tag{5}
$$

where, due to the quadratic term $\boldsymbol{\sigma}^T \cdot \boldsymbol{\Sigma}_{\mathbf{M},\varsigma} \cdot \boldsymbol{\sigma}$ and the constraints $(\epsilon > 0)$, we have a Quadratic Programming (QP) problem. The programme finds $\boldsymbol{\sigma}^{*}$ such that the total utility is maximised, whilst ensuring every action is assigned at least probability $\epsilon$ and that the probabilities sum to one. Finally, based on Eq. (5) we are able to define RAE:

Definition 3.1 (RAE). A strategy profile $\{\sigma, \varsigma\}$ is a risk-averse equilibrium if both $\sigma$ and $\varsigma$ are risk-averse best responses to each other, in that they satisfy Eq. 5.

# 3.2. RAE Properties

The first property of RAE is that, given an opponent's strategy $\varsigma$, one's own strategy $\sigma$ is the minimum UVar solution available given the desired EU $\mu_b$:

Proposition 3.2. Given $\varsigma$ and any desired EU $\mu_b$, there exists a $\gamma$ such that the solution to Eq. 5 receives $\mu_b$ with the minimum possible UVar.

The proof is deferred to Appendix A.1. The consequence of this proposition matches the goal of RAE: to provide a minimum risk solution given the level of risk tolerance of the user. $\gamma$ is used to select how much EU $\mu_b$ is desired and, given the opponent's strategy $\varsigma$, there is no other solution that can provide better risk performance than RAE. The main downside is that $\gamma$ is a hyper-parameter and it may be difficult to know prior to training exactly how it will map to $\mu_b$. We would expect that games that share similar reward distributions would share similar risk-variance trade-offs under the same values of $\gamma$. Therefore, one way to select $\gamma$ is to use restricted and unrestricted versions of environments that share similar reward distributions.
One can train on the restricted version more readily, and transfer the $\gamma$ results to the harder unrestricted version. In Appendix C we show, over a range of NFG dimension sizes with rewards drawn from the same distribution, that EU and Var remain mostly consistent across values of $\gamma$ and dimension, suggesting the proposed approach is feasible.

Secondly, a common property of most GT equilibria is that a solution exists, at least in the finite game setting. For RAE, we note the following result in mixed-strategies:

Theorem 3.3. For any finite $N$-player game where each player $i$ has a finite number $k$ of pure strategies, $A^i = \{a_1^i,\dots,a_k^i\}$, an RAE exists.

![](images/f8e88e079873703dee950b2cee7da786b0d34f90d36c8156dd49986ab564ff90.jpg)
Figure 2. PSRO-RAE. Each element points to a corresponding subsection in the text, denoted in blue boxes. Note that $u(\cdot)$ and $\mathrm{Var}(\cdot)$ are overloaded to represent utility/variance between distributions over a population or utility/variance between two policies.

We defer the proof of the result to Appendix A.2. By establishing the existence of RAE solutions we have validated the practical relevance of RAE.

# 4. Finding RAE

This section proposes two solvers to find RAE: stochastic fictitious play (SFP) for small action-spaces in Sec. 4.1 and PSRO-RAE for large action-spaces in Sec. 4.2.

# 4.1. Stochastic Fictitious Play

We start by showing that our total utility function defines a form of SFP (Fudenberg & Kreps, 1993) which can find an RAE in small NFGs. SFP has convergence guarantees in a selection of games, e.g. potential games (Monderer & Shapley, 1996a;b) and finite two-player zero-sum games (Robinson, 1951). Furthermore, SFP is also empirically robust in terms of convergence in game classes not listed (Goldberg et al., 2013; Ganzfried, 2020), and we mirror these observations in Appendix B.
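As a concrete sketch of how the SFP process described next interacts with the risk-averse best response of Eq. (5), the toy loop below approximates the QP of Eq. (5) by a grid search over 2-action mixed strategies and best responds to the opponent's time average. The payoff matrix, grid size and constants are all illustrative assumptions, not values from the paper:

```python
# Toy stochastic fictitious play driven by the risk-averse best
# response of Eq. (5); the QP is approximated by a grid search
# over mixed strategies (p, 1 - p). Hypothetical payoffs.

GAMMA, EPS = 1.0, 0.01                 # risk-aversion, tremble floor
M = [[5.0, -10.0], [0.0, 20.0]]        # M[i][j] = u(a_i, a_j)

def total_utility(sigma, varsigma):
    """Eq. (4): r = EU - GAMMA * UVar, built from Eqs. (1)-(3)."""
    n = len(M)
    eu = sum(sigma[i] * varsigma[j] * M[i][j]
             for i in range(n) for j in range(n))
    m_bar = [sum(varsigma[k] * M[j][k] for k in range(n)) for j in range(n)]
    var = sum(sigma[j] * sigma[k] *
              sum(varsigma[i] * (M[j][i] - m_bar[j]) * (M[k][i] - m_bar[k])
                  for i in range(n))
              for j in range(n) for k in range(n))
    return eu - GAMMA * var

def best_response(varsigma, grid=200):
    """Approximate Eq. (5): scan p = sigma(a_1) over [EPS, 1 - EPS]."""
    best, best_r = None, float("-inf")
    for g in range(grid + 1):
        p = EPS + (1 - 2 * EPS) * g / grid
        r = total_utility([p, 1 - p], varsigma)
        if r > best_r:
            best, best_r = [p, 1 - p], r
    return best

# SFP loop: best respond to the opponent's time-average strategy
# Z_t, then fold the observed response back into the average.
z, sigma = [0.5, 0.5], [0.5, 0.5]
for t in range(1, 51):
    sigma = best_response(z)
    z = [(t * z[i] + sigma[i]) / (t + 1) for i in range(2)]
```

Because every candidate keeps both actions above the `EPS` floor, the returned strategies always satisfy the tremble constraint of Eq. (5) by construction.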
SFP is a learning process where players choose a best response to the others' time-average strategies. In SFP, a group of $n \geq 2$ players repeatedly play an $n$-player NFG. The state variable is $Z_t \in \Delta_S$, whose components $Z_t^i$ describe the time averages of each player's behaviour,

$$
Z_t^i = \frac{1}{t} \sum_{u=1}^{t} \boldsymbol{\sigma}_u^i
$$

where $\sigma_u^i \in \Delta_{A^i}$ represents the observed strategy of player $i$ at time-step $u$. Each player best responds to the time-average strategy of their opponent, $Z_t^{-i}$, by maximising a perturbed utility function $\bar{u}$,

$$
\begin{aligned} \sigma_{t+1}^i &= \underset{\boldsymbol{\sigma}}{\arg\max}\; \bar{u} \qquad &(6) \\ &= \underset{\boldsymbol{\sigma}}{\arg\max}\; u^i(\boldsymbol{\sigma}, Z_t^{-i}, \mathbf{M}) - \lambda v^i(\boldsymbol{\sigma}) \qquad &(7) \end{aligned}
$$

where $v^i(\boldsymbol{\sigma}): \Delta_A \to \mathbb{R}$ perturbs $u^i$ such that it is strictly concave (unique global maximum) whilst assigning greater than zero probability to all actions.

Proposition 4.1. Replacing the best-response Eq. (6) with the best-response map Eq. (5) satisfies the conditions on $\bar{u}^i$ for an SFP process.

Note, SFP does not necessarily converge in all game classes (but is robust empirically, see Appendix B). Therefore, we show that if the SFP process does converge to a strategy then that strategy is an RAE.

Proposition 4.2. Suppose the SFP sequence $\{Z_t\}$ converges to $\sigma$ in observed strategies $^1$, then $\sigma$ is a risk-averse equilibrium.

Note that for SFP we require the stronger notion of convergence in observed strategies $\sigma_t^i$ rather than in beliefs $Z_t^i$, but a converged final $\sigma_t^i$ is a risk-averse equilibrium.

# 4.2. PSRO-RAE

For games that cannot be displayed in normal-form, we extend the iterative solution framework PSRO (Lanctot et al., 2017) to RAE, using RL policies as proxies for actions. PSRO-RAE approximates equilibria in large games by finding a small representative collection of risk-averse RL policies which are sampled by RAE. Whilst PSRO does not have any practical convergence guarantees (in the limit all policies may be found and displayed in normal-form, allowing for an exact solution), PSRO generally finds strong solutions without requiring all potential policies (Perez-Nieves et al., 2021; Feng et al., 2021; McAleer et al., 2022c). We provide a visualisation of the PSRO-RAE process in Fig. (2), and provide an algorithm in Appendix F.

![](images/68937e9c9b925d1e6a345342121135f296cd7a89f464ec5b750d676767ca55fb.jpg)
![](images/5612b5b798c2dc8487968f3bfd86822ec6fe6373f2c3acda2e1af9fb3597d5a3.jpg)
Figure 3. a) SFP on NFGs with 100 actions, b) PSRO on NFGs with 500 actions. Final EU vs. UVar results. RAE values for multiple $\gamma$ form an efficient frontier (Markowitz, 1991) and show that, whilst the baselines achieve similar EU, they have large UVar solutions. In Fig. a) we exclude the payoff-dominant NE result as its UVar was too large, whilst in Fig. b) we exclude the THPE result for the same reason.

Consider a two-player stochastic game defined by the tuple $(\mathcal{S}, \mathcal{A}, \mathcal{P}, G)$, where $\mathcal{S}$ is the set of states, $\mathcal{A} = \mathcal{A}^1 \times \mathcal{A}^2$ is the joint action space, $\mathcal{P}: \mathcal{S} \times \mathcal{A} \times \mathcal{S} \to [0,1]$ is the state-transition function and $G = \{g^1, g^2\}$ is the set of rewards, where $g^i: \mathcal{S} \times \mathcal{A} \to \mathbb{R}$ is the reward function for player $i$ (we use reward for MARL settings, and utility for NFG settings).
An agent is a policy $\phi$, where a policy is a mapping $\phi: \mathcal{S} \times \mathcal{A} \to [0,1]$ which can be described either in tabular form or as a neural network. The expected reward between two agents is defined to be $M(\phi_i, \phi_j)$ (i.e., in the same manner as defined for NFGs in Sec. 3.1), and represents the expected reward to agent $\phi_i$ against opponent $\phi_j$.

# 4.2.1. PSRO OUTER LOOP

PSRO performs $T \in \mathbb{N}^{+}$ updates on a meta-game $\mathbf{M}$ (an NFG whose actions are RL policies). At every iteration $t \leq T$, a player is defined by a population of fixed policies $\Phi_t = \Phi_0 \cup \{\phi_1, \phi_2, \dots, \phi_t\}$, where $\Phi_0$ is the initial random policy. For notational convenience, we consider the single-population case where players share the same $\Phi_t$, and refer the reader to Appendix K.2 for the multi-population formulation.

# 4.2.2. PSRO INNER LOOP

a, d) Meta-Game & Covariance Matrix At the start of the inner loop at iteration $t$, each population has a meta-game $\mathbf{M}_t$, a reward matrix between all the policies in the population, with individual entries $M(\phi_i, \phi_j)\ \forall \phi_i, \phi_j \in \Phi_t$. In addition, each population also generates a covariance matrix $\boldsymbol{\Sigma}_{\mathbf{M}_t}$ defined by Eq. 2. At the end of the inner loop at iteration $t$, both $\mathbf{M}_t$ and $\boldsymbol{\Sigma}_{\mathbf{M}_t}$ are updated to include a new policy.

b) Meta Distribution We require a way to select which $\phi_t \in \Phi_t$ will be used for training opponents. The function $f$ is a mapping $f: \mathbf{M}_t \to [0,1]^t$ which takes as input a meta-game $\mathbf{M}_t$ and outputs a meta-distribution $\sigma_t = f(\mathbf{M}_t)$. The output $\sigma_t$ is a probability assignment to each policy in the population $\Phi_t$, the equivalent of a mixed-strategy in an NFG, except actions are now RL policies. We apply RAE (Def. 3.1) as the meta-solver.
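The meta-game bookkeeping of steps (a) and (b) can be sketched as follows. The `evaluate` callback and the "safe"/"risky" policy names are illustrative stand-ins, not the paper's implementation; in practice each entry $M(\phi_i, \phi_j)$ is estimated from environment rollouts:

```python
import random

def build_meta_game(population, evaluate):
    """Step (a): M_t[i][j] = M(phi_i, phi_j), the expected reward
    of policy phi_i against phi_j."""
    n = len(population)
    return [[evaluate(population[i], population[j]) for j in range(n)]
            for i in range(n)]

def sample_opponent(population, sigma_t, rng=None):
    """Step (b): draw a training opponent with the probability the
    meta-distribution sigma_t assigns to each policy."""
    rng = rng or random.Random(0)
    return rng.choices(population, weights=sigma_t, k=1)[0]

# Hypothetical 2-policy population with made-up pairwise rewards.
payoffs = {("safe", "safe"): 1.0, ("safe", "risky"): 0.8,
           ("risky", "safe"): 2.0, ("risky", "risky"): -3.0}
meta_game = build_meta_game(["safe", "risky"],
                            lambda a, b: payoffs[(a, b)])
```

The covariance matrix of step (d) is then obtained from `meta_game` exactly as in Eq. 2, with $\sigma_t$ taking the role of the opponent weighting.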
As the $\phi$ are RL policies, policies are sampled according to their respective probability in $\sigma_t$.

c) Best Response Oracle At each epoch $\Phi_t$ is augmented with a new policy that is a best-response (BR) to the meta-distribution $\sigma_t$. The BR oracle aims to optimise the same objective function as that optimised by the meta-distribution. For example, in Vanilla PSRO, the Nash meta-distribution optimises for environment reward and the BR oracle also optimises a new agent in terms of EU. This can be found with any optimisation process, such as RL or an evolutionary algorithm. In our setting the meta-distribution optimises two metrics, the EU and the UVar, and therefore we need a BR oracle that optimises the same dual objective.

In terms of RL quantities, this translates to maximising the expected total reward (i.e. the total of the per-step rewards) whilst minimising the variance of the total reward caused by the sampling of different RL agents from $\sigma_t$.

![](images/77f441101122301c0792f06f4070bd765f5977a84ae27e68abc0fd7615925cc9.jpg)
Figure 4. Stag-hunt environment results. A) The environment B) Intra-distribution results, e.g. both agents are controlled by a PSRO-RAE population C) Cumulative results over 1000 test episodes for a PSRO-RAE population vs. PSRO-Nash population.

To achieve this, we follow the approach of Zhang et al. (2021), which optimises both the total reward and the per-step reward variance by solving an augmented MDP where the per-step reward $g_t^i$ is replaced by:

$$
\hat{g}_t^i = g_t^i - \lambda \left(g_t^i\right)^2 + 2 \lambda g_t^i y_i
$$

where $y_i = \frac{1}{T} \sum_{t=1}^{T} g_t^i$ is the average of the per-step rewards during the data collection phase.
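The reward substitution can be transcribed directly; a minimal sketch with a hypothetical reward trace, where `lam` plays the role of $\lambda$:

```python
def augment_rewards(rewards, lam):
    """Per-step reward augmentation of Zhang et al. (2021):
    g_t -> g_t - lam * g_t**2 + 2 * lam * g_t * y, where y is the
    mean per-step reward of the collected trace."""
    y = sum(rewards) / len(rewards)
    return [g - lam * g ** 2 + 2 * lam * g * y for g in rewards]

# With lam = 0 the MDP is unchanged; larger lam penalises per-step
# rewards that deviate from the trace mean y.
augmented = augment_rewards([1.0, 3.0], lam=0.5)   # [2.5, 4.5]
```

The augmented rewards then feed into any standard RL optimiser unchanged, which is what makes this dual-objective oracle a drop-in replacement for the usual EU-maximising BR.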
We choose the per-step reward as it is an upper bound of the total reward variance (Bisi et al., 2020); therefore reducing per-step variance will also reduce total variance, and it is more effective computationally (Zhang et al., 2021). Additionally, as this variance is also with respect to the sampling probability defined by $\sigma_t$, this optimises the correct covariance matrix, which is also weighted by $\sigma_t$.

# 5. Experiments and Results

We validate RAE through three questions: 1) Does RAE find lower UVar solutions than baselines? 2) Do RAE strategies overlap with risk-dominant NE in some scenarios? 3) Are RAE strategies safe in a safety-critical driving scenario? Full experimental details are in Appendix H.

# 5.1. Does RAE find lower UVar solutions than baselines?

Motivation/Overview Prop. 3.2 shows RAE can find a spectrum of solutions encompassing many EU values, whilst minimising the corresponding UVar. Therefore, if RAE can match the baselines' EU, whilst achieving lower UVar, then RAE is a better solution if safety is of concern.

Baselines NE (including risk/payoff dominant), THPE and QRE, introduced in Sec. 2.

Experiment: Matrix Coordination Games NFGs where some actions provide a high EU if other agents select them, but incur large costs if not. Other actions have lower coordinated EU but lower costs. These games are designed to highlight the issues of focusing on EU and ignoring UVar.

Results Results are shown in Fig. (3), where (A) represents games with 100 actions solved using SFP (Sec. 4.1), and (B) represents games with 500 actions solved using PSRO (Sec. 4.2). We plot RAE solutions for multiple values of $\gamma$ to generate a theoretical efficient frontier. An efficient frontier shows, for each value of EU, the minimum possible UVar. Our results show that, whilst the baselines achieve a diverse range of EU values, they are unable to find the minimum UVar solution which RAE finds. This shows the strong flexibility of our approach: it is able to attain any EU that the baselines can achieve, whilst finding a lower UVar solution. This suggests that, if safety is of concern, then RAE is a better choice, as any desired EU can be achieved with a lower UVar than any of our baselines.

# 5.2. Are RAE strategies risk-dominant?

Motivation In some scenarios, it is likely that the only viable solutions overlap with the set of NE (for example, in trust dilemmas). Therefore, in these scenarios, if RAE finds safe solutions, we would expect the RAE solutions to overlap with the risk-dominant NE solutions.

Experiment: MDP Stag-Hunt: We use an MDP-based adaptation of a classic trust dilemma game from

A) Intra-Distribution Results
Table (Fig. 5A): Eqm Reward, Eqm Variance, Worst-Case reward, Num. Crashes and Num. Arrivals for Self-Play, PSRO-Uniform, PSRO-Nash, PSRO-THPE, PSRO-QRE and PSRO-RAE (Ours); PSRO-RAE (4.36 ± 2.07 reward, 0.33 variance) attains the highest reward and lowest variance of all methods.
(Peysakhovich & Lerer, 2018), where there is a payoff-dominant equilibrium (chasing the stag) and a risk-dominant equilibrium (gathering plants).

Baselines As this is an MDP task we integrate Nash, Uniform, Self-Play, THPE and QRE, described in Sec. 2 and Appendix I, as the meta-distributions in the PSRO framework, denoted as e.g. PSRO-{Nash}. In this setting we limit our baselines to PSRO-{Variant} algorithms, and do not consider non-population risk-aversion algorithms (standard in PSRO literature). Full details are in Appendix I.

Results Fig. (4B) shows all baselines arrive at the payoff-dominant stag catching strategy, whereas RAE arrives at the 'risk-dominant' plant gathering strategy. The baselines' solution is risky due to its susceptibility to coordination failure, i.e. when only one agent hunts the stag. This could occur frequently, for example, in situations where agents are not able to communicate with each other, or do not know each other's strategies. We show this by placing a PSRO-RAE population and a PSRO-Nash population into the environment together as co-players, shown in Fig. (4C). The Nash population still attempts to hunt the stag (unable to communicate, it has a fixed strategy), but in this case the RAE population is still gathering plants, leading to the Nash population being caught by the stag many times while the RAE population remains safe.

![](images/3e4a0faca0359400911fe1da96d5c24209073e5edf4c4a4fc069f2f98e51ba1a.jpg)
B) PSRO-RAE Average Position Heatmap

![](images/99003fe87c510c5a5a1765aecc8bceba5b444b068ae53ef9cb7bea4af12c3130.jpg)
C) PSRO-Nash Average Position Heatmap
Figure 5. A) Results on 50 episodes over 5 seeds for intra-distribution testing, e.g. both agents controlled by PSRO-RAE B) Average position heatmap for PSRO-RAE solution over 200 episodes C) Average position heatmap for PSRO-Nash solution over 200 episodes.

# 5.3. Are RAE strategies safe?
Motivation RAE is designed to avoid strategies that are overly susceptible to other agents' strategies by limiting UVar. We examine how RAE acts in a larger-scale MARL autonomous driving setting where avoiding any large negative outcome (e.g. crashing) is critical, in particular in the presence of strategies that look overly advantageous unless coordination between agents fails.

Experiment: Autonomous Driving Scenario (Leurent, 2018): an MDP recreation of the Fig. (1) scenario; two-way traffic with slow-moving vehicles and faster-moving agents behind them that may be interested in overtaking.

Baselines MDP task; refer to Sec. 5.2.

Results Results are in Fig. (5), with results for a larger-scale representation of the environment (increasing the number of vehicles from 6 to 25) in Appendix E. In Table (5A) we provide a collection of environment metrics where the average value is based on 50 environment episodes and the standard deviation is from 5 training seeds. In terms of EU and UVar, RAE outperforms the baselines considerably, whilst maintaining strong worst-case performance. Notably, RAE arrives at a strategy that very rarely crashes, and nearly always arrives at the final destination. The same conclusion cannot be drawn for the baselines, which often crash and fail to reach the destination. To understand this better, in Fig. (5B, C) we provide position heat-maps of a PSRO-RAE and a PSRO-Nash car respectively. RAE executes the safe strategy, i.e. it follows behind until all vehicles in the on-coming lane have passed and then proceeds to overtake. This strategy remains sensitive to the risk-element of the environment, overtaking and crashing, which is our desired outcome. On the other hand, the Nash strategy overtakes straight away and nearly always ends up in a crash due to congestion in the middle of the road.

# 6. Conclusion

We introduce a new risk-averse equilibrium, RAE, based on agents considering the variance of the utility function alongside its expected value. Theoretically, we prove the existence and solvability of RAE and provide methods for arriving at an RAE in both small and large-scale game settings. Empirically, we show that RAE is able to locate minimum variance solutions for any EU, act as an NE selection method in the presence of risk-dominant NE, and is effective at finding a safe equilibrium in a safety-sensitive autonomous driving environment. Avenues for future work should focus on the limitations of the current RAE approach, namely the lack of convergence guarantees in certain classes of games and the fact that RAE minimises both upside and downside variance, where minimising only downside variance would be desirable.

# References

Beckmann, M., McGuire, C. B., and Winsten, C. B. Studies in the economics of transportation. 1956.
Bielefeld, R. S. Reexamination of the perfectness concept for equilibrium points in extensive games. In Models of Strategic Rationality, pp. 1-31. Springer, 1988.
Bisi, L., Sabbioni, L., Vittori, E., Papini, M., Restelli, M., et al. Risk-averse trust region optimization for reward-volatility reduction. In Proceedings of the Twenty-Ninth International Joint Conference on Artificial Intelligence, pp. 4583-4589, 2020.
Cherukuri, A. Sample average approximation of CVaR-based Wardrop equilibrium in routing under uncertain costs. In 2019 IEEE 58th Conference on Decision and Control (CDC), pp. 3164-3169. IEEE, 2019.
Dinh, L. C., McAleer, S., Tian, Z., Perez-Nieves, N., Slumbers, O., Mguni, D. H., Wang, J., Bou Ammar, H., and Yang, Y. Online double oracle. TMLR: Transactions on Machine Learning Research, 2022.
Eriksson, H., Basu, D., Alibeigi, M., and Dimitrakakis, C. Risk-sensitive Bayesian games for multi-agent reinforcement learning under policy uncertainty. arXiv preprint arXiv:2203.10045, 2022.
+Feng, X., Slumbers, O., Wan, Z., Liu, B., McAleer, S., Wen, Y., Wang, J., and Yang, Y. Neural auto-curricula in two-player zero-sum games. Advances in Neural Information Processing Systems, 34, 2021. +Fiat, A. and Papadimitriou, C. When the players are not expectation maximizers. In International Symposium on Algorithmic Game Theory, pp. 1-14. Springer, 2010. +Fudenberg, D. and Kreps, D. M. Learning mixed equilibria. Games and economic behavior, 5(3):320-367, 1993. +Gal, K. and Grosz, B. J. Multi-agent systems: Technical & ethical challenges of functioning in a mixed group. Daedalus, 151(2):114-126, 2022. +Ganzfried, S. Fictitious play outperforms counterfactual regret minimization. arXiv preprint arXiv:2001.11165, 2020. +Gao, Y., Lui, K. Y. C., and Hernandez-Leal, P. Robust risk-sensitive reinforcement learning agents for trading markets. arXiv preprint arXiv:2107.08083, 2021. +Goldberg, P. W., Savani, R., Sørensen, T. B., and Ventre, C. On the approximation performance of fictitious play in finite games. International Journal of Game Theory, 42 (4):1059-1083, 2013. + +Harsanyi, J. C., Selten, R., et al. A general theory of equilibrium selection in games. MIT Press Books, 1, 1988. +Lanctot, M., Zambaldi, V., Gruslys, A., Lazaridou, A., Tuyls, K., Pérolat, J., Silver, D., and Graepel, T. A unified game-theoretic approach to multiagent reinforcement learning. In Proceedings of the 31st International Conference on Neural Information Processing Systems, pp. 4193-4206, 2017. +Leurent, E. An environment for autonomous driving decision-making. https://github.com/eleurent/highway-env, 2018. +Lianeas, T., Nikolova, E., and Stier-Moses, N. E. Risk-averse selfish routing. Mathematics of Operations Research, 44(1):38-57, 2019. +Markowitz, H. M. Foundations of portfolio theory. The journal of finance, 46(2):469-477, 1991. +McAleer, S., Lanier, J., Fox, R., and Baldi, P. Pipeline PSRO: A scalable approach for finding approximate nash equilibria in large games. 
In Advances in Neural Information Processing Systems (NeurIPS), 2020. +McAleer, S., Lanier, J., Wang, K., Baldi, P., and Fox, R. XDO: A double oracle algorithm for extensive-form games. Advances in Neural Information Processing Systems (NeurIPS), 2021. +McAleer, S., Farina, G., Lanctot, M., and Sandholm, T. Escher: Eschewing importance sampling in games by computing a history value function to estimate regret. arXiv preprint arXiv:2206.04122, 2022a. +McAleer, S., Lanier, J., Wang, K., Baldi, P., Fox, R., and Sandholm, T. Self-play psro: Toward optimal populations in two-player zero-sum games. arXiv preprint arXiv:2207.06541, 2022b. +McAleer, S., Wang, K., Lanctot, M., Lanier, J., Baldi, P., and Fox, R. Anytime optimal psro for two-player zero-sum games. arXiv preprint arXiv:2201.07700, 2022c. +McKelvey, R. D. and Palfrey, T. R. Quantal response equilibria for normal form games. Games and economic behavior, 10(1):6-38, 1995. +McMahan, H. B., Gordon, G. J., and Blum, A. Planning in the presence of cost functions controlled by an adversary. In Proceedings of the 20th International Conference on Machine Learning (ICML-03), pp. 536-543, 2003. +Meir, R. and Parkes, D. Congestion games with distance-based strict uncertainty. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 29, 2015. + +Merton, R. C. An analytic derivation of the efficient portfolio frontier. Journal of Financial and Quantitative Analysis, 7(4):1851-1872, 1972. doi: 10.2307/2329621. +Mnih, V., Kavukcuoglu, K., Silver, D., Rusu, A. A., Veness, J., Bellemare, M. G., Graves, A., Riedmiller, M., Fidjeland, A. K., Ostrovski, G., et al. Human-level control through deep reinforcement learning. nature, 518(7540): 529-533, 2015. +Monderer, D. and Shapley, L. S. Fictitious play property for games with identical interests. Journal of economic theory, 68(1):258-265, 1996a. +Monderer, D. and Shapley, L. S. Potential games. Games and economic behavior, 14(1):124-143, 1996b. +Nash, J. 
Non-cooperative games. Annals of Mathematics, pp. 286-295, 1951.
Nikolova, E. and Stier-Moses, N. Stochastic selfish routing. ACM SIGecom Exchanges, 11(1):21-25, 2012.
Nikolova, E. and Stier-Moses, N. E. A mean-risk model for the traffic assignment problem with stochastic travel times. Operations Research, 62(2):366-382, 2014.
Ordonez, F. and Stier-Moses, N. E. Wardrop equilibria with risk-averse users. Transportation Science, 44(1):63-86, 2010.
Perez-Nieves, N., Yang, Y., Slumbers, O., Mguni, D. H., Wen, Y., and Wang, J. Modelling behavioural diversity for learning in open-ended games. In Meila, M. and Zhang, T. (eds.), Proceedings of the 38th International Conference on Machine Learning, volume 139 of Proceedings of Machine Learning Research, pp. 8514-8524. PMLR, 18-24 Jul 2021. URL https://proceedings.mlr.press/v139/perez-nieves21a.html.
Perolat, J., De Vylder, B., Hennes, D., Tarassov, E., Strub, F., de Boer, V., Muller, P., Connor, J. T., Burch, N., Anthony, T., et al. Mastering the game of Stratego with model-free multiagent reinforcement learning. Science, 378(6623):990-996, 2022.
Peysakhovich, A. and Lerer, A. Prosocial learning agents solve generalized stag hunts better than selfish ones. In Proceedings of the 17th International Conference on Autonomous Agents and MultiAgent Systems, pp. 2043-2044, 2018.
Piliouras, G., Nikolova, E., and Shamma, J. S. Risk sensitivity of price of anarchy under uncertainty. ACM Transactions on Economics and Computation (TEAC), 5(1):1-27, 2016.

Qiu, W., Wang, X., Yu, R., Wang, R., He, X., An, B., Obraztsova, S., and Rabinovich, Z. Rmix: Learning risk-sensitive policies for cooperative reinforcement learning agents. Advances in Neural Information Processing Systems, 34:23049-23062, 2021.
Raffin, A., Hill, A., Gleave, A., Kanervisto, A., Ernestus, M., and Dormann, N. Stable-baselines3: Reliable reinforcement learning implementations. Journal of Machine Learning Research, 22(268):1-8, 2021.
URL http://jmlr.org/papers/v22/20-1364.html.
Robinson, J. An iterative method of solving a game. Annals of Mathematics, 54(2):296-301, 1951. ISSN 0003486X. URL http://www.jstor.org/stable/1969530.
Royset, J. O. Risk-adaptive approaches to learning and decision making: A survey. arXiv preprint arXiv:2212.00856, 2022.
Sokota, S., D'Orazio, R., Kolter, J. Z., Loizou, N., Lanctot, M., Mitliagkas, I., Brown, N., and Kroer, C. A unified approach to reinforcement learning, quantal response equilibria, and two-player zero-sum games. In Deep Reinforcement Learning Workshop NeurIPS 2022.
Wardrop, J. G. Road paper. Some theoretical aspects of road traffic research. Proceedings of the Institution of Civil Engineers, 1(3):325-362, 1952.
Wellman, M. P. Methods for empirical game-theoretic analysis. In AAAI, pp. 1552-1556, 2006.
Yekkehkhany, A., Murray, T., and Nagi, R. Risk-averse equilibrium for games, 2020.
Zhang, S., Liu, B., and Whiteson, S. Mean-variance policy iteration for risk-averse reinforcement learning. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 35, pp. 10905-10913, 2021.

# A. Full Proofs

# A.1. Proposition 3.2 [Minimum Variance Solution]

The solution to Eq. 5 coincides with the solution of the following problem:

$$
\boldsymbol{\sigma}^{*} \in \underset{\boldsymbol{\sigma}}{\arg\min}\ \boldsymbol{\sigma}^{T} \cdot \Sigma_{\mathbf{M}} \cdot \boldsymbol{\sigma}
$$

$$
\text{s.t.}\quad \boldsymbol{\sigma}^{T} \cdot \mathbf{M} \cdot \boldsymbol{\sigma} \geq \mu_{\mathrm{b}} \tag{8}
$$

$$
\sigma(a) \geq 0 \quad \forall a \in A
$$

$$
\boldsymbol{\sigma}^{T}\mathbf{1} = 1
$$

where $\mu_{\mathrm{b}} \in \mathbb{R}$ is the lowest level of expected return that the actor is willing to accept.

Proof.
Merton (1972) shows, by a Lagrange multiplier argument, that the optimisation problem

$$
\boldsymbol{\sigma}^{*} \in \underset{\boldsymbol{\sigma}}{\arg\min}\ \boldsymbol{\sigma}^{T} \cdot \Sigma_{\mathbf{M}} \cdot \boldsymbol{\sigma}
$$

$$
\text{s.t.}\quad \boldsymbol{\sigma}^{T} \cdot \mathbf{M} \cdot \boldsymbol{\sigma} \geq \mu_{\mathrm{b}} \tag{9}
$$

$$
\sigma(a) \geq 0 \quad \forall a \in A
$$

$$
\boldsymbol{\sigma}^{T}\mathbf{1} = 1
$$

can be rewritten as

$$
\boldsymbol{\sigma}^{*} \in \underset{\boldsymbol{\sigma}}{\arg\min}\ \boldsymbol{\sigma}^{T} \cdot \Sigma_{\mathbf{M}} \cdot \boldsymbol{\sigma} - \tau\left(\boldsymbol{\sigma}^{T} \cdot \mathbf{M} \cdot \boldsymbol{\sigma}\right)
$$

$$
\text{s.t.}\quad \sigma(a) \geq 0 \quad \forall a \in A \tag{10}
$$

$$
\boldsymbol{\sigma}^{T}\mathbf{1} = 1
$$

which can be equivalently expressed as

$$
\boldsymbol{\sigma}^{*} \in \underset{\boldsymbol{\sigma}}{\arg\min}\ -\left(\boldsymbol{\sigma}^{T} \cdot \mathbf{M} \cdot \boldsymbol{\sigma} - \lambda\, \boldsymbol{\sigma}^{T} \cdot \Sigma_{\mathbf{M}} \cdot \boldsymbol{\sigma}\right)
$$

$$
\text{s.t.}\quad \sigma(a) \geq 0 \quad \forall a \in A \tag{11}
$$

$$
\boldsymbol{\sigma}^{T}\mathbf{1} = 1
$$

where $\lambda = \frac{1}{\tau}$.

# A.2. Theorem 3.3 [RAE Existence]

For any finite N-player game where each player $i$ has a finite number $k$ of pure strategies, $A^i = \{a_1^i,\dots,a_k^i\}$, an RAE exists.

Proof. We base our proof on Kakutani's Fixed Point Theorem.

Lemma (Kakutani Fixed Point Theorem). Let $A$ be a non-empty subset of a finite-dimensional Euclidean space.

Let $f: A \Rightarrow A$ be a correspondence, with $x \in A \longmapsto f(x) \subseteq A$, satisfying the following conditions:

1. $A$ is a compact and convex set.
2. $f(x)$ is non-empty for all $x\in A$.
3.
$f(x)$ is a convex-valued correspondence: for all $x \in A$, $f(x)$ is a convex set.
4. $f(x)$ has a closed graph: that is, if $\{x^n, y^n\} \to \{x, y\}$ with $y^n \in f(x^n)$, then $y \in f(x)$.

Then, $f$ has a fixed point, that is, there exists some $x \in A$ such that $x \in f(x)$.

We define our best-response function as $B_{i}(\pmb{\sigma}_{-i}) = \arg \max_{a\in \Delta_{i}} r^{i}(a,\pmb{\sigma}_{-i})$, where $r^{i}$ is defined as in Eq. (5); by definition, any such best response must satisfy all of the properties of a proper mixed-strategy. The best-response correspondence is $B:\Delta \Rightarrow \Delta$ such that for all $\pmb{\sigma}\in \Delta$ we have:

$$
B(\boldsymbol{\sigma}) = \left[ B_{i}\left(\boldsymbol{\sigma}_{-i}\right) \right]_{i \in N} \tag{12}
$$

We show that $B(\sigma)$ satisfies the conditions of Kakutani's Fixed Point Theorem.

1. $\Delta$ is compact, convex and non-empty.

By definition

$$
\Delta = \Pi_{i \in N} \Delta_{i} \tag{13}
$$

where each $\Delta_i = \{a \mid a_j \geq 0, \sum_j a_j = 1\}$ is a simplex of dimension $|A^i| - 1$. Each $\Delta_i$ is closed and bounded, and thus compact, as well as convex and non-empty; these properties carry over to the product set $\Delta$.

2. $B(\sigma)$ is non-empty.

By the definition of $B_{i}(\sigma_{-i})$, $\Delta_{i}$ is non-empty and compact, and $r^i$ is quadratic, and hence polynomial, in $a$. Since all polynomial functions are continuous, we can invoke Weierstrass's Extreme Value Theorem, which states:

Lemma. If a real-valued function $f$ is continuous on the closed interval $[a, b]$, then $f$ must attain a maximum and a minimum, each at least once. That is, there exist numbers $c$ and $d$ in $[a, b]$ such that:

$$
f(c) \geq f(x) \geq f(d) \quad \forall x \in [a, b]
$$

Therefore, as $\Delta_{i}$ is non-empty and compact and $r^i$ is continuous in $a$, $B_{i}(\sigma_{-i})$ is non-empty, and therefore $B(\sigma)$ is also non-empty.

3.
$B(\sigma)$ is a convex-valued correspondence.

Equivalently, $B(\sigma) \subset \Delta$ is convex if and only if $B_{i}(\sigma_{-i})$ is convex for all $i$.

In order to show that $B_{i}(\sigma_{-i})$ is convex for all $i$, we instead show that the Quadratic Programme defined by Eq. (6) is a special case of convex optimisation under certain conditions, and therefore by definition has a feasible set which is a convex set.

A convex optimisation problem is one of the form

$$
\text{minimize} \quad f_{0}(x)
$$

$$
\text{s.t.}\quad f_{i}(x) \leq 0, \quad i = 1, \dots, m \tag{14}
$$

$$
a_{i}^{T} x = b_{i}, \quad i = 1, \dots, p
$$

where $f_0, \dots, f_m$ are convex functions. The requirements for a problem to be a convex optimisation problem are:

(a) the objective function must be convex
(b) the inequality constraint functions must be convex
(c) the equality constraint functions $h_i(x) = a_i^T x - b_i$ must be affine

We note that a quadratic form $\mathbf{x}^T\mathbf{A}\mathbf{x}$ is convex if $\mathbf{A}$ is positive semi-definite, and strictly convex if $\mathbf{A}$ is positive definite. In our constrained optimisation, the quadratic term $\sigma^T\Sigma \sigma$ is always guaranteed to be at least convex, as $\Sigma$, the covariance matrix, is always at least PSD. Therefore, our objective function is convex. Additionally, it is easy to see that our inequality constraint functions are also convex and that our equality constraint function is affine. Therefore, our Quadratic Programme is an instance of a convex optimisation problem.

Importantly, the feasible set of a convex optimisation problem is convex, since it is the intersection of the domain of the problem

$$
\mathcal{D} = \bigcap_{i = 0}^{m} \mathbf{dom}\, f_{i}, \tag{15}
$$

which is itself a convex set, with the (convex) sublevel sets of the inequality constraints and the affine equality set.
Therefore, for all members of the feasible set $x, y \in B_i(\sigma_{-i})$ and all $\theta \in [0,1]$, we have that $\theta x + (1 - \theta)y \in B_i(\sigma_{-i})$, and so $B(\sigma)$ is a convex-valued correspondence.

4. $B(\sigma)$ has a closed graph.

To obtain a contradiction, suppose that $B(\sigma)$ does not have a closed graph. Then there exists a sequence $(\sigma^n, \hat{\sigma}^n) \to (\sigma, \hat{\sigma})$ with $\hat{\sigma}^n \in B(\sigma^n)$, but $\hat{\sigma} \notin B(\sigma)$, i.e. there exists some $i$ such that $\hat{\sigma}_i \notin B_i(\sigma_{-i})$. This implies that there exists some $\sigma_i' \in \Delta_i$ and some $\epsilon > 0$ such that

$$
r_{i}\left(\boldsymbol{\sigma}_{i}^{\prime}, \boldsymbol{\sigma}_{-i}\right) > r_{i}\left(\hat{\boldsymbol{\sigma}}_{i}, \boldsymbol{\sigma}_{-i}\right) + 3\epsilon. \tag{16}
$$

By the continuity of $r_i$ and the fact that $\sigma_{-i}^n\rightarrow \sigma_{-i}$, we have for sufficiently large $n$,

$$
r_{i}\left(\boldsymbol{\sigma}_{i}^{\prime}, \boldsymbol{\sigma}_{-i}^{n}\right) \geq r_{i}\left(\boldsymbol{\sigma}_{i}^{\prime}, \boldsymbol{\sigma}_{-i}\right) - \epsilon. \tag{17}
$$

Combining the preceding two relations, we obtain

$$
r_{i}\left(\boldsymbol{\sigma}_{i}^{\prime}, \boldsymbol{\sigma}_{-i}^{n}\right) > r_{i}\left(\hat{\boldsymbol{\sigma}}_{i}, \boldsymbol{\sigma}_{-i}\right) + 2\epsilon \geq r_{i}\left(\hat{\boldsymbol{\sigma}}_{i}^{n}, \boldsymbol{\sigma}_{-i}^{n}\right) + \epsilon \tag{18}
$$

where the second relation follows from the continuity of $r_i$. This contradicts the assumption that $\hat{\sigma}_i^n \in B_i(\sigma_{-i}^n)$ and completes the argument.

Therefore, $B(\sigma)$ satisfies the conditions of Kakutani's Fixed Point Theorem, so there exists a fixed point $\sigma^{*} \in B(\sigma^{*})$, and any such $\sigma^{*}$ is an RAE.

# A.3.
Proposition 4.1 [SFP Convergence]

Replacing the best-response Eq. (6) with the best-response map Eq. (5) satisfies the conditions on $\bar{u}^i$ for an SFP process.

Proof. For the perturbed utility function $\bar{u}^i$ to be permissible in an SFP process, two conditions must hold:

1. There exists a unique global solution to $\bar{u}^i$.
2. The arg max assigns strictly positive probability to all pure strategies.

We replace $\bar{u}^i$ with the best-response map Eq. 5 and show that both conditions are met.

Condition 1 - To show that there exists a unique global solution to $\bar{u}^i$, we need to show that $\bar{u}^i$ is strictly concave, which guarantees a unique global maximum.

As the EU term $u^{i}(\pmb{\sigma}, Z_{t}^{-i}, \mathbf{M})$ is linear, we require that the perturbation term $v^{i}(\pmb{\sigma})$ is strictly convex, so that the perturbed utility function $\bar{u}^{i}(\pmb{\sigma}) = u^{i}(\pmb{\sigma}, Z_{t}^{-i}, \mathbf{M}) - \lambda v^{i}(\pmb{\sigma})$ is strictly concave. In A.2 we have already shown that, as long as the covariance matrix $\pmb{\Sigma}_{\mathbf{M}}$ is positive definite, the quadratic term $\pmb{\sigma}^T \cdot \pmb{\Sigma}_{\mathbf{M}} \cdot \pmb{\sigma}$ is strictly convex. By design, $\pmb{\Sigma}_{\mathbf{M}}$ is positive definite, and therefore $\bar{u}^{i}$ is strictly concave and has a unique global maximum.

Condition 2 - By design, the QP that solves Eq. 5 is constrained such that all pure strategies receive strictly positive probability (this is intended to induce mistakes in agents). Therefore, the output of the best-response map is always within $\mathrm{int}(\Delta^i)$, and Condition 2 is satisfied. This shows that our utility measure can be embedded as a version of stochastic fictitious play, and can therefore be used to find equilibria in two-player zero-sum games and potential games.

# A.4.
Proposition 4.2 [SFP is RAE]

Suppose the SFP sequence $\{Z_t\}$ converges to $\sigma$ in the observed strategy sense$^2$; then $\sigma$ is a risk-averse equilibrium.

Proof. Assume the observed strategy has converged to $\sigma = (\sigma^1, \sigma^2)$ and that the strategy is not an RAE. This implies there exists some $\sigma^{i,\prime}$ such that:

$$
r^{i}\left(\boldsymbol{\sigma}^{i, \prime}, \boldsymbol{\sigma}^{-i}\right) > r^{i}\left(\boldsymbol{\sigma}^{i}, \boldsymbol{\sigma}^{-i}\right) \tag{19}
$$

However, because $\sigma$ has converged, the SFP sequence $\{Z_t\}$ also converges, with $\lim_{t \to \infty} Z_t = \sigma$, and because we are in an SFP process it must be the case that:

$$
r^{i}\left(\boldsymbol{\sigma}^{i}, \boldsymbol{\sigma}^{-i}\right) \geq r^{i}\left(\boldsymbol{\sigma}^{i, \prime}, \boldsymbol{\sigma}^{-i}\right) \quad \forall \boldsymbol{\sigma}^{i, \prime} \in \Delta^{i} \tag{20}
$$

which contradicts Eq. 19: no $\sigma^{i,\prime}$ can strictly improve on $\sigma^{i}$ against $\sigma^{-i}$, so $\sigma$ is an RAE.

![](images/28916662e843e1b9992de9b78588ff53563d81eb7a44818564e00d432e35f963.jpg)

# B. SFP Robustness

![](images/69cc7c7ed6d48a6d60e6979737ce0bab39b958656c03cdab7c1e36b44ac4b89b.jpg)

![](images/fac901b78e8405a2d51a2de9e7c63103c7d144ccb6d10211d3dbc8acd5620781.jpg)
Figure 6. Euclidean distance between observed actions after each iteration on randomly generated anti-coordination games. A distance of 0 implies that the process has converged.

![](images/403d1f5f1f09977f7030df9606277351c45d06767884492065d92317acf7f0b6.jpg)
Figure 7. Euclidean distance between observed actions after each iteration on randomly generated coordination games. A distance of 0 implies that the process has converged.
Figure 8. Euclidean distance between observed actions after each iteration on randomly generated games. A distance of 0 implies that the process has converged.

# C.
$\gamma$ Generalisation

![](images/55d422818cfeb5a5d207c14ba7ab0db7e944335f3f607b4d37b640fb1626a36b.jpg)
Figure 9. Expected utility values for different $\gamma$ values. Each line represents a different game dimension, showing that for a given $\gamma$ the expected utility is largely similar across dimensions.

![](images/79d37262297a373f525eddf8ce17aaf76bdb95721171679f4a1d2491222e60bb.jpg)
Figure 10. Variance utility values for different $\gamma$ values. Each line represents a different game dimension, showing that for a given $\gamma$ the variance utility is largely similar across dimensions.

# D. Figure 3 Training Curves

![](images/78b34145f3d46a3e71dcb7ba9332a0de6b787c183688442661ab5ad09058b96a.jpg)
Figure 11. Training curves over multiple seeds for Fig. 3.

![](images/5492fc8be3a78db5541164a03371734e741bbd5822f10939aaf6a555e0ae4854.jpg)
Figure 12. Training curves over multiple seeds for Fig. 3.

# E. Large-Scale Driving Results

A) Intra-Distribution Results
| Method | Eqm Reward | Eqm Variance | Worst-Case | Num. Crashes | Num. Arrivals |
| --- | --- | --- | --- | --- | --- |
| Self-Play | 3.47 ± 0.08 | 1.20 ± 0.21 | -2.05 ± 0.88 | 50.0 ± 0.00 | 0.00 ± 0.00 |
| PSRO-Uniform | 3.60 ± 0.01 | 1.14 ± 0.06 | 1.49 ± 0.17 | 50.0 ± 0.00 | 0.00 ± 0.00 |
| PSRO-THPE | 3.61 ± 0.13 | 1.11 ± 0.18 | 0.99 ± 0.83 | 50.0 ± 0.00 | 0.00 ± 0.00 |
| PSRO-QRE | 3.65 ± 0.22 | 0.96 ± 0.23 | 1.78 ± 0.50 | 50.0 ± 0.00 | 0.00 ± 0.00 |
| PSRO-Nash | 3.76 ± 0.07 | 1.46 ± 0.08 | 1.93 ± 0.03 | 50.0 ± 0.00 | 0.00 ± 0.00 |
| PSRO-RAE | 7.14 ± 0.39 | 3.27 ± 0.54 | -3.48 ± 2.35 | 7.00 ± 2.00 | 43.00 ± 2.00 |
Figure 13. Results on the large autonomous driving scenario, over 50 episodes and 5 seeds for intra-distribution testing, e.g. both agents controlled by PSRO-RAE.

# F. Pseudo-code

# Algorithm 1 SFP

1: Initialise: Payoff matrix $\mathbf{M}$, uniform initial time-average strategy $Z_{0}$.
2: for iteration $t \in \{1, 2, \ldots\}$ do:
3: Find best-response $\sigma_t$ to $Z_{t}$ via Eq. 5.
4: Update time-average strategy $Z_{t}$ with respect to $\sigma_{t}$.
5: Return: Final time-average strategy $Z_{T}$.

# Algorithm 2 PSRO-RAE

1: Initialise: the policy set $\Phi = \prod_{i\in \mathcal{N}}\Phi^i$, meta-game $\mathbf{M}_0$, co-variance matrix $\Sigma_{\mathbf{M}_0}$.
2: for iteration $t \in \{1, 2, \ldots\}$ do:
3: for each player $i\in \mathcal{N}$ do:
4: Compute meta-policy $\sigma_{t}$ by SFP (Alg. 1).
5: Find new policy by Oracle: $\phi_t^i = \mathcal{O}^i(\pmb{\sigma}_t)$.
6: Expand $\Phi_{t + 1}^{i}\gets \Phi_{t}^{i}\cup \{\phi_{t}^{i}\}$.
7: Update meta-payoff $\mathbf{M}_{t + 1}$ and co-variance matrix $\Sigma_{\mathbf{M}_{t + 1}}$.
8: Return: $\sigma_T$ and $\Phi_T$.

# G. Hyperparameter Settings

Table 1. Hyper-parameter settings for our experiments.
| Settings | Value | Description |
| --- | --- | --- |
| **SFP coordination games** | | |
| Action dimension | 100 | Number of pure strategies available |
| FP iterations | 100 | Number of FP belief updates |
| Tremble probability | 0.001 | Probability of trembling to another strategy |
| Quantal type | Softmax | Type of quantal response equilibrium |
| # of seeds | 50 | # of trials |
| **PSRO NFG coordination games** | | |
| Oracle method | REINFORCE | Subroutine for obtaining oracles |
| PSRO iterations | 15 | Number of PSRO iterations |
| Action dimension | 500 | Number of pure strategies available |
| Learning rate | 0.005 | Oracle learning rate |
| Oracle epochs | 2000 | Oracle total epochs |
| Oracle epoch timesteps | 100 | Timesteps per oracle epoch |
| RAE gamma | 0.1, 0.5 | Variance aversion parameter |
| Metasolver | RAE SFP | Metasolver method |
| Metasolver iterations | 100 | Metasolver iterations |
| # of seeds | 20 | # of trials |
| **Stag-Hunt grid-world** | | |
| Oracle method | MV-PPO (Zhang et al., 2021) | Subroutine for obtaining oracles |
| PSRO iterations | 10 | Number of PSRO iterations |
| Gore cost | 2 | Cost for getting caught by stag |
| PPO hyperparams | Default SB3 (Raffin et al., 2021) | PPO hyperparameter values |
| MV-PPO variance aversion | 0.15 | PPO variance aversion parameter |
| RAE gamma | 0.15 | Variance aversion parameter |
| Metasolver | RAE SFP | Metasolver method |
| Metasolver iterations | 100 | Metasolver iterations |
| # of seeds | 5 | # of trials |
| **Two-Way environment** | | |
| Oracle method | MV-PPO (Zhang et al., 2021) | Subroutine for obtaining oracles |
| PSRO iterations | 7 | Number of PSRO iterations |
| PPO hyperparams | Default SB3 (Raffin et al., 2021) | PPO hyperparameter values |
| MV-PPO variance aversion | 0.5 | PPO variance aversion parameter |
| RAE gamma | 0.5 | Variance aversion parameter |
| Metasolver | RAE SFP | Metasolver method |
| Metasolver iterations | 100 | Metasolver iterations |
| # of seeds | 5 | # of trials |
# H. Environments

# H.1. Randomly Generated NFGs

![](images/dd38700393359b86485577682e289950547bdbb3104254096533b9277e4b63bb.jpg)
Figure 14. An example of a 3-action game with low-risk, low-reward and high-risk, high-reward actions. Action $S_{3}$ (dotted outline) provides a high return assuming successful coordination, but high variance in case the opponent does not coordinate correctly.

# H.1.1. CHARACTERISTICS

1. High-risk, high-reward actions. If both players play the same such action, they receive a high payoff; however, if one player takes a different action, a large negative payoff is received. For example, in Fig. 14, $S_{3}$ is the high-risk, high-reward strategy.
2. Low-risk, low-reward actions. If both players play the same such action, they receive a lower payoff; however, if one player takes a different action, only a much smaller penalty is received. For example, in Fig. 14, $S_{1}$ and $S_{2}$ are the low-risk, low-reward strategies.

# H.1.2. GENERATION

We randomly generate coordination games with $N$ actions in the following way:

Algorithm 3 Iterative RAE Generator
1: Initialise: Empty $N\times N$ payoff matrix $P$
2: for each action $i$ do:
3: Sample coordination element $p_{ii}\sim \mathcal{U}(5,15)$
4: Set payoff matrix element $P_{ii} = |p_{ii}|$
5: if $P(X\leq p_{ii}) > 0.9$ then:
6: for all other actions $j$ do:
7: Sample anti-coordination element $p_{ij}\sim \mathcal{U}(-10,15)$
8: Set payoff matrix elements $P_{ij} = P_{ji} = p_{ij}$
9: else:
10: for all other actions $j$ do:
11: Sample anti-coordination element $p_{ij}\sim \mathcal{U}(0,10)$
12: Set payoff matrix elements $P_{ij} = P_{ji} = p_{ij}$
13: Return: $P$

# H.2. Stag Hunt Grid World

Our stag-hunt environment is taken from (Peysakhovich & Lerer, 2018) with minor adjustments to the parameters of the game.

# H.2.1. CHARACTERISTICS

Details - 2 Players, 1 Stag, 2 Plants. All spawned in random positions dependent on the seed.
State Space - $5 \times 5$ grid-world. Grid spots are marked as: 0 if nothing is on them, 1 if a Player is on them, 2 if the Stag is on them, 3 if a Plant is on them. Players see the full grid-world, with the state space being $S \in \{0,1,2,3\}^{5 \times 5}$.

Action Space - Actions involve movement in the grid-world in the four cardinal directions, $\mathcal{A} = \{\text{left, right, up, down}\}$.

Reward Space - There are 4 different reward signals in the game:

1. If a Player moves over a Plant, they get $r = 2$, and the Plant respawns elsewhere.
2. If both Players move over the Stag at the same time, both receive $r = 5$ and the Stag respawns elsewhere.
3. If a Player moves over the Stag on their own, or the Stag moves over a lone Player, that Player receives $r = -2$ and the Stag respawns elsewhere on the grid.
4. Otherwise, $r = 0$.

The Stag - At each time-step $t$, the Stag takes one grid-step in one of the four cardinal directions towards the Player that is closest to it.

# H.3. Autonomous Driving Environment

Our driving environment is based on the two-way environment from (Leurent, 2018), where we make modifications to the reward function to introduce a larger factor of risk-aversion into the game. The goal of the controlled drivers is to reach the end of the road (the destination) whilst avoiding crashing and coming into too close contact with other vehicles. Slow-moving drivers populate the roads, moving at a constant speed of 20.

# H.3.1. CHARACTERISTICS

Details - 2 Players heading in opposite directions, 6 other cars on the road with an even split heading in each direction. All spawned in random positions dependent on the seed.

State Space - For the state space we use the KinematicObservation in (Leurent, 2018). The KinematicObservation is a $V \times F$ array that describes a list of $V$ nearby vehicles by a set of features of size $F$.
We use the default feature set in (Leurent, 2018), which is the $x$ and $y$ coordinates of the $V$ nearby vehicles and their velocities in the $x$ and $y$ directions.

Action Space - For the action space we use the DiscreteAction in (Leurent, 2018). The DiscreteAction is a uniform quantization of the ContinuousAction, which allows the agent to directly set the vehicle kinematics by controlling the throttle $a$ and the steering angle $\delta$.

Reward Space - There are 5 reward signals in the environment:

1. If the car crashes, $r = -2$.
2. If the car arrives at the destination, $r = 2$.
3. If the car is travelling at a good speed ([25, 30]), $r = 0.2$.
4. If the car comes very close to another car, $r = -0.1$.
5. Otherwise, $r = 0$.

Other Cars - Non-Player cars on the road travel at a constant speed of $v = 20$ and do not change direction. On each road there are 3 spawned NPC cars ahead of the Player cars.

# I. Baselines

For NFG tasks we use the baselines NE (including risk/dominant payoff NE), THPE and QRE, introduced in Sec. 2. For MDP tasks we use the baselines PSRO-{Nash, Uniform, Self-Play, THPE, QRE}, where the braces refer to the meta-solver used. In the PSRO setting we limit our baselines to algorithms that operate within this framework, and do not consider non-population risk-aversion algorithms (as is standard in the PSRO literature).

THPE and QRE - For SFP settings, we solve for these equilibria using Fictitious Play by replacing the best responses with a trembling-hand best response and a logit quantal best response. Empirically, we verify that the time-average strategy does converge under FP, and therefore we do obtain a THPE and QRE in the games where we used FP (i.e. the smaller games).

For PSRO settings, we treated the new-agent best response for THPE and QRE as vanilla PSRO optimisation agents, i.e. in terms of expected reward.
Therefore, it is likely that these baselines are only approximations of the true THPE and QRE; however, the lack of an established solver for these equilibria in such games is an obvious downside of these concepts.

Nash - We use vanilla Fictitious Play for any Nash equilibrium solving on NFGs.

Uniform - All actions / policies receive equal mixed-strategy probability.

Self-Play - Only the most recent policy in a population receives probability in the mixed-strategy, i.e. it is a pure strategy.

# J. Compute Architecture

All experiments run on one machine with:

- AMD Ryzen Threadripper 3960X 24-Core
- 1 x NVIDIA GeForce RTX 3090

# K. Asymmetric Formulations

In the following section we show the formulation of Sec. 3 for the asymmetric case.

# K.1. RAE Derivation

Define the utility for player $i$ of action $a_{k}^{i}\in \mathcal{A}^{i}$ against action $a_{k^{\prime}}^{j}\in \mathcal{A}^{j}$ as $u^{i}(a_{k}^{i},a_{k^{\prime}}^{j})$ and the full utility matrix as $\mathbf{M}^i$, where the entry $\mathbf{M}_{k,k^{\prime}}^{i}$ refers to $u^{i}(a_{k}^{i},a_{k^{\prime}}^{j})$ and $\mathbf{M}_k^i$ refers to $u^{i}(a_{k}^{i},a_{k^{\prime}}^{j})\ \forall k^{\prime}$, i.e. the vector of utilities that action $a_{k}^{i}$ receives against all other actions. We now define the expected utility of the mixed-strategy $\boldsymbol{\sigma}^i$ for player $i$ versus the mixed-strategy $\boldsymbol{\varsigma}^{j}$ for player $j$ as

$$
\begin{array}{l} u^{i}\left(\boldsymbol{\sigma}^{i}, \boldsymbol{\varsigma}^{j}, \mathbf{M}^{i}\right) = \sum_{a_{k}^{i} \in A^{i}} \sum_{a_{k^{\prime}}^{j} \in A^{j}} \sigma\left(a_{k}^{i}\right) \varsigma\left(a_{k^{\prime}}^{j}\right) u^{i}\left(a_{k}^{i}, a_{k^{\prime}}^{j}\right) \tag{21} \\ = \boldsymbol{\sigma}^{i, T} \cdot \mathbf{M}^{i} \cdot \boldsymbol{\varsigma}^{j}. \\ \end{array}
$$

The weighted co-variance matrix for $\mathbf{M}^i$ (i.e.
the variance of utility values) is a $|A^i| \times |A^i|$ matrix $\Sigma_{\mathbf{M}^i, \varsigma^j} = [c_{k,k'}^i]$ with entries

$$
c_{k, k^{\prime}}^{i} = \sum_{a_{d}^{j} \in A^{j}} \varsigma^{j}\left(a_{d}^{j}\right) \left(u^{i}\left(a_{k}^{i}, a_{d}^{j}\right) - \bar{\mathbf{M}}_{k}^{i}\right) \left(u^{i}\left(a_{k^{\prime}}^{i}, a_{d}^{j}\right) - \bar{\mathbf{M}}_{k^{\prime}}^{i}\right), \tag{22}
$$

where $\bar{\mathbf{M}}_k^i = \sum_{k' = 1}^{|A^j|}\varsigma (a_{k'}^j)u^{i}(a_k^i,a_{k'}^j)$ is the EU for Player $i$'s action $k$ given the opponent mixed-strategy $\varsigma^j$. This allows us to define the mixed-strategy $\sigma^i$ variance utility as:

$$
\begin{array}{l} \operatorname{Var}\left(\boldsymbol{\sigma}^{i}, \boldsymbol{\varsigma}^{j}, \mathbf{M}^{i}\right) = \sum_{k = 1}^{|A^{i}|} \sum_{k^{\prime} = 1}^{|A^{i}|} \sigma\left(a_{k}^{i}\right) \sigma\left(a_{k^{\prime}}^{i}\right) c_{k, k^{\prime}}^{i} \tag{23} \\ = \boldsymbol{\sigma}^{i, T} \cdot \Sigma_{\mathbf{M}^{i}, \varsigma^{j}} \cdot \boldsymbol{\sigma}^{i}. \\ \end{array}
$$

The final utility function $r^i$, which considers expected and variance utility for mixed-strategy $\sigma^i$, is

$$
r^{i}\left(\boldsymbol{\sigma}^{i}, \boldsymbol{\varsigma}^{j}, \mathbf{M}^{i}\right) = u^{i}\left(\boldsymbol{\sigma}^{i}, \boldsymbol{\varsigma}^{j}, \mathbf{M}^{i}\right) - \gamma^{i} \operatorname{Var}\left(\boldsymbol{\sigma}^{i}, \boldsymbol{\varsigma}^{j}, \mathbf{M}^{i}\right), \tag{24}
$$

where $\gamma^i\in \mathbb{R}$ is the risk-aversion parameter.
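As a concreteness check, the expected utility, weighted covariance, and risk-adjusted utility above can be sketched in a few lines of NumPy. The payoff matrix and strategies below are illustrative values, not taken from the paper:

```python
import numpy as np

def risk_adjusted_utility(M, sigma_i, varsigma_j, gamma):
    """Sketch of the asymmetric risk-adjusted utility r^i (Eqs. 21-24).

    M          : |A^i| x |A^j| payoff matrix for player i
    sigma_i    : mixed strategy of player i
    varsigma_j : mixed strategy of the opponent j
    gamma      : risk-aversion parameter gamma^i
    """
    mean_per_action = M @ varsigma_j            # \bar{M}_k^i: EU of each pure action
    eu = sigma_i @ mean_per_action              # expected utility (Eq. 21)
    centred = M - mean_per_action[:, None]      # u^i(a_k, a_d) - \bar{M}_k^i
    cov = (centred * varsigma_j) @ centred.T    # weighted covariance matrix (Eq. 22)
    var = sigma_i @ cov @ sigma_i               # variance utility (Eq. 23)
    return eu - gamma * var                     # risk-adjusted utility (Eq. 24)

# Illustrative 2x2 game: action 2 is high-risk, high-reward.
M = np.array([[3.0, -1.0], [0.0, 10.0]])
sigma = np.array([0.5, 0.5])
varsig = np.array([0.8, 0.2])
print(risk_adjusted_utility(M, sigma, varsig, gamma=0.0))  # gamma = 0 recovers plain EU
```

Increasing `gamma` penalises the high-variance action more heavily, which is exactly the mechanism the risk-averse best response exploits.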
In order to define RAE, we first define the best-response map:

$$
\boldsymbol{\sigma}^{i,*}(\boldsymbol{\varsigma}^{j}) \in \underset{\boldsymbol{\sigma}^{i}}{\arg\max}\ r^{i}(\boldsymbol{\sigma}^{i}, \boldsymbol{\varsigma}^{j}, \mathbf{M}^{i})
$$

$$
\text{s.t.}\quad \sigma(a) \geq \epsilon, \ \forall a \in A^{i} \tag{25}
$$

$$
\boldsymbol{\sigma}^{i, T}\mathbf{1} = 1.
$$

Definition K.1 (RAE). A strategy profile $\{\sigma^i, \varsigma^j\}$ is a risk-averse equilibrium if both $\sigma^i$ and $\varsigma^j$ are risk-averse best responses, in that they satisfy Eq. 25, to each other.

# K.2. Multi-population PSRO-RAE

# K.2.1. PSRO OUTER LOOP

At every iteration $t \leq T$, a player $i$ is defined by a population of fixed agents $\Phi_t^i = \Phi_0^i \cup \{\phi_1^i, \phi_2^i, \dots, \phi_t^i\}$, where $\Phi_0^i$ is the initial random agent.

# K.2.2. PSRO INNER LOOP

a, d) Meta-Game & Covariance Matrix - At the start of the iteration-$t$ inner loop, each player $i$ has a population with a meta-game $\mathbf{M}_t^i$, an EU matrix between all the agents in its own population and those of the opponent $j$, with individual entries $M^{i}(\phi_{k}^{i},\phi_{k^{\prime}}^{j})\ \forall \phi_{k}^{i}\in \Phi_{t}^{i},\phi_{k^{\prime}}^{j}\in \Phi_{t}^{j}$. In addition, each player $i$'s population also generates a covariance matrix $\Sigma_{\mathbf{M}_t^i}$, defined by Eq. 2. At the end of the iteration-$t$ inner loop, both $\mathbf{M}_t^i$ and $\Sigma_{\mathbf{M}_t^i}$ are updated to include a new agent.

b) Meta Distribution - To use $\Phi_t^i$ we require a way to select which $\phi_t^i \in \Phi_t^i$ will be used as training opponents. The function $f^i$ is a mapping $f^i: \mathbf{M}_t^i \to [0,1]^t$ which takes as input a meta-game $\mathbf{M}_t^i$ and outputs a meta-distribution $\sigma_t^i = f^i(\mathbf{M}_t^i)$.
The output $\sigma_t^i$ is a probability assignment to each agent in the population $\Phi_t^i$, which is the equivalent of a mixed-strategy in a NFG, except that actions are now RL policies. We apply RAE (Def. 3.1) as the meta-solver. As the $\phi^i$ are RL policies, policies are sampled according to their respective probability in $\sigma_t^i$.

c) Best Response Oracle At each epoch $\Phi_t^i$ is augmented with a new agent that is a best-response (BR) to the meta-distribution $\sigma_t^i$. When selecting the BR oracle, one aims to optimise the same objective function as that optimised by the meta-distribution. For example, in Vanilla PSRO the Nash meta-distribution optimises for environment reward and the BR oracle also optimises a new agent in terms of environment reward. This can be done with any optimisation process such as RL or an evolutionary algorithm. In our setting the meta-distribution optimises two metrics, the EU and the variance utility, and therefore we similarly need a BR oracle that optimises the same dual objective.

In terms of RL quantities, this translates to maximising the expected total RL-reward (i.e. the total of the per-step rewards) whilst minimising the variance of the total RL-reward caused by the sampling of different RL agents from $\sigma_t^i$.

To achieve this, we follow the approach of Zhang et al. (2021) that optimises both the total RL-reward and the per-step RL-reward variance by solving an augmented MDP where the per-step reward $g_t^i$ is replaced by:

$$
\hat{g}_t^i = g_t^i - \lambda (g_t^i)^2 + 2\lambda g_t^i y_i,
$$

where $y_i = \frac{1}{T}\sum_{t = 1}^{T}g_t^i$ is the average of the per-step rewards during the data collection phase.
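A minimal sketch of this reward substitution (the function name is ours; Zhang et al. (2021) apply the transform inside the RL update rather than as a standalone post-processing step):

```python
import numpy as np

def augment_rewards(rewards, lam):
    """Variance-penalised per-step rewards of the augmented MDP:
    g_hat = g - lam * g^2 + 2 * lam * g * y, where y is the mean
    per-step reward over the collected trajectory (the y_i above)."""
    g = np.asarray(rewards, dtype=float)
    y = g.mean()
    return g - lam * g ** 2 + 2 * lam * g * y

# Two trajectories with the same mean per-step reward (2.0) but different variance:
steady = augment_rewards([2.0, 2.0], lam=0.1)  # -> [2.4, 2.4]
noisy = augment_rewards([1.0, 3.0], lam=0.1)   # -> [1.3, 3.3]
# The noisier trajectory receives a lower augmented return: 4.6 < 4.8,
# so maximising the augmented reward trades expected return against variance.
```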
We choose the per-step RL-reward variance as it is an upper bound of the total RL-reward variance (Bisi et al., 2020), therefore reducing per-step variance will also reduce total variance, and it is more effective computationally (Zhang et al., 2021). Additionally, as this variance is also taken with respect to the sampling probability defined by $\sigma_t^i$, this optimises the correct covariance matrix, which is also weighted by $\sigma_t^i$.

# L. QRE Failure Case

In the following section we present results on the two-action driving game described in Sec. 1 of the main article and displayed in Fig. 15.

![](images/c66651a83b5902681c74cef0b22a2bb1123c6f81285abaf9722db962622cb101.jpg)
Figure 15. Two-action driving risk game.

We specifically utilise this game to show a failure case of QRE as a risk-sensitive solution. Ideally, a risk-sensitive solution concept would only play the Stay in Lane strategy, as the Overtake strategy has far too high a potential downside risk.

![](images/a43e415143fb67194b6c769727b3c584ed69be7472b0d5538b68a2a4a87d56fe.jpg)
Figure 16. QRE and RAE results on two-action driving game.

![](images/694cc8ed05a581a75c4938c2a62e43f81cd0f40d010e4de087e21233ed77478f.jpg)

![](images/4e9254f94979ff31a961c86a251913dc353bd975af427c787042c8bcb129dde0.jpg)

As can be seen from the results in Fig. 16, for a large sample of QRE hyperparameters the equilibrium found is high-variance with potentially poor downside performance. We believe this is because the very large costs of the errors are easily picked up by variance analysis, but not so easily by the setup of QRE.

# M. Variance vs. Standard Deviation

It is worth noting that variance is not scale-invariant with respect to the utilities of the game, and one might suggest using standard deviation (STD) instead as it is scale-invariant.
However, we choose to stick with variance due to its mathematical properties: because variance is quadratic with respect to the strategy probabilities $\sigma$, QP can be used, whereas STD is not quadratic w.r.t. $\sigma$ and QP cannot be used. \ No newline at end of file diff --git a/agametheoreticframeworkformanagingriskinmultiagentsystems/images.zip b/agametheoreticframeworkformanagingriskinmultiagentsystems/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..5c8fed929aaef374c22ef36ec052b9ab187410eb --- /dev/null +++ b/agametheoreticframeworkformanagingriskinmultiagentsystems/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:0da84afef653be34821d25cc800c75fb14d1d7b732df5390e3c97b895db71e55 +size 1140290 diff --git a/agametheoreticframeworkformanagingriskinmultiagentsystems/layout.json b/agametheoreticframeworkformanagingriskinmultiagentsystems/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..d46a23121d929e9b8748dd8ce0de54b8d778117f --- /dev/null +++ b/agametheoreticframeworkformanagingriskinmultiagentsystems/layout.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:d9df682f04bd6704c369e1c047e57cd17e17b89b275be73afda154e01f6ea895 +size 1023430 diff --git a/ageneralizationofvitmlpmixertographs/70245573-75be-4669-a241-87e08499ff69_content_list.json b/ageneralizationofvitmlpmixertographs/70245573-75be-4669-a241-87e08499ff69_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..0dac07747942d934ce1617cd5eb40de7e6db2820 --- /dev/null +++ b/ageneralizationofvitmlpmixertographs/70245573-75be-4669-a241-87e08499ff69_content_list.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:ce4ae065bbcb5462d95d67b0abb33c668b363b69d6d6906c5ba9740452a098cb +size 157634 diff --git a/ageneralizationofvitmlpmixertographs/70245573-75be-4669-a241-87e08499ff69_model.json
b/ageneralizationofvitmlpmixertographs/70245573-75be-4669-a241-87e08499ff69_model.json new file mode 100644 index 0000000000000000000000000000000000000000..1adc86bc7e601d8b8d64e68baf8d5cabd4f4d928 --- /dev/null +++ b/ageneralizationofvitmlpmixertographs/70245573-75be-4669-a241-87e08499ff69_model.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:5954573b1df52178ad217759256fd6368a8412eca608cf39e7ca9b2eca22ffd3 +size 193647 diff --git a/ageneralizationofvitmlpmixertographs/70245573-75be-4669-a241-87e08499ff69_origin.pdf b/ageneralizationofvitmlpmixertographs/70245573-75be-4669-a241-87e08499ff69_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..6f017d5b0556c80ec1c10a6af6f202ae93b5bcfa --- /dev/null +++ b/ageneralizationofvitmlpmixertographs/70245573-75be-4669-a241-87e08499ff69_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:a1f1958d2d284e93266b37cb3a9064ed9b9d822b56f20738daa61a9ee2bdb589 +size 1013404 diff --git a/ageneralizationofvitmlpmixertographs/full.md b/ageneralizationofvitmlpmixertographs/full.md new file mode 100644 index 0000000000000000000000000000000000000000..66fab5993d20b17ffebe0d74b9ee366a39f102b8 --- /dev/null +++ b/ageneralizationofvitmlpmixertographs/full.md @@ -0,0 +1,594 @@ +# A Generalization of ViT/MLP-Mixer to Graphs + +Xiaoxin He $^{1}$ Bryan Hooi $^{1,2}$ Thomas Laurent $^{3}$ Adam Perold $^{4}$ Yann LeCun $^{5,6}$ Xavier Bresson $^{1}$ + +# Abstract + +Graph Neural Networks (GNNs) have shown great potential in the field of graph representation learning. Standard GNNs define a local message-passing mechanism which propagates information over the whole graph domain by stacking multiple layers. This paradigm suffers from two major limitations, over-squashing and poor long-range dependencies, that can be solved using global attention but significantly increases the computational cost to quadratic complexity. 
In this work, we propose an alternative approach to overcome these structural limitations by leveraging the ViT/MLP-Mixer architectures introduced in computer vision. We introduce a new class of GNNs, called Graph ViT/MLP-Mixer, that holds three key properties. First, they capture long-range dependency and mitigate the issue of over-squashing, as demonstrated on the Long Range Graph Benchmark and TreeNeighbourMatch datasets. Second, they offer better speed and memory efficiency, with a complexity linear in the number of nodes and edges, surpassing the related Graph Transformer and expressive GNN models. Third, they show high expressivity in terms of graph isomorphism as they can distinguish at least 3-WL non-isomorphic graphs. We test our architecture on 4 simulated datasets and 7 real-world benchmarks, and show highly competitive results on all of them. The source code is available for reproducibility at: https://github.com/XiaoxinHe/Graph-ViT-MLPMixer.

# 1. Message-Passing GNNs and the Limitations

In this first section, we present the background of the project by introducing the standard Message-Passing (MP) GNNs and their two major limitations: low representation expressivity and poor long-range dependency. We also present the current techniques that address these issues, i.e. Weisfeiler-Leman GNNs, graph positional encoding and Graph Transformers, as well as their shortcomings.

$^{1}$ School of Computing, University of Singapore $^{2}$ Institute of Data Science, National University of Singapore $^{3}$ Loyola Marymount University $^{4}$ Element, Inc. $^{5}$ New York University $^{6}$ Meta AI. Correspondence to: Xiaoxin He .

Proceedings of the $40^{th}$ International Conference on Machine Learning, Honolulu, Hawaii, USA. PMLR 202, 2023. Copyright 2023 by the author(s).

Message-Passing GNNs (MP-GNNs). GNNs have become the standard learning architectures for graphs owing to their flexibility to work with complex data domains s.a.
recommendation (Monti et al., 2017; Berg et al., 2017), chemistry (Duvenaud et al., 2015; Gilmer et al., 2017), physics (Cranmer et al., 2019; Bapat et al., 2020), transportation (Derrow-Pinion et al., 2021), vision (Han et al., 2022a), natural language processing (NLP) (Wu et al., 2021a), knowledge graphs (Schlichtkrull et al., 2018), drug design (Stokes et al., 2020; Gaudelet et al., 2020) and medical domain (Li et al., 2020b; 2021). Most GNNs are designed to have two core components. First, a structural message-passing mechanism s.a. Defferrard et al. (2016); Kipf & Welling (2017); Hamilton et al. (2017); Monti et al. (2017); Bresson & Laurent (2017); Gilmer et al. (2017); Velicković et al. (2018) that computes node representations by aggregating the local 1-hop neighborhood information. Second, a stack of $L$ layers that aggregates $L$ -hop neighborhood nodes to increase the expressivity of the network and transmit information between nodes that are $L$ hops apart. + +Weisfeiler-Leman GNNs (WL-GNNs). One of the major limitations of MP-GNNs is their inability to distinguish (simple) non-isomorphic graphs. This limited expressivity can be formally analyzed with the Weisfeiler-Leman graph isomorphism test (Weisfeiler & Leman, 1968), as first proposed in Xu et al. (2019); Morris et al. (2019). Later on, Maron et al. (2018) introduced a general class of $k$ -order WL-GNNs that can be proved to universally represent any class of $k$ -WL graphs (Maron et al., 2019; Chen et al., 2019). But to achieve such expressivity, this class of GNNs requires using $k$ -tuples of nodes with memory and speed complexities of $O(N^{k})$ , with $N$ being the number of nodes and $k \geq 3$ . 
Although the complexity can be reduced to $O(N^{2})$ and $O(N^{3})$ respectively (Maron et al., 2019; Chen et al., 2019; Azizian & Lelarge, 2020), it is still computationally costly compared to the linear complexity $O(E)$ of MP-GNNs, with $E$ being the number of edges, which often reduces to $O(N)$ for real-world graphs that exhibit sparse structures. In order to reduce the memory and speed complexities of WL-GNNs while keeping high expressivity, several works have focused on designing graph networks from their sub-structures s.a. sub-graph isomorphism (Bouritsas et al., 2022), sub-graph routing mechanism (Alsentzer et al., 2020), cellular WL sub-graphs (Bodnar et al., 2021), and k-hop egonet sub-graphs (Xu et al., 2019; Zhang & Li, 2021; Chen et al., 2019; Zhao et al., 2021; Frasca et al., 2022).

Graph Positional Encoding (PE). Another aspect of the limited expressivity of GNNs is their inability to recognize simple graph structures s.a. cycles or cliques, which are often present in molecules and social graphs (Chen et al., 2020). We can consider $k$-order WL-GNNs with value $k$ equal to the length of the cycle/clique, but with high complexity $O(N^{k})$. An alternative approach is to add positional encoding to the graph nodes. It was proved in Murphy et al. (2019); Loukas (2020) that unique and equivariant PE increases the representation power of any MP-GNN while keeping the linear complexity. This theoretical result was applied with great empirical success with index PE (Murphy et al., 2019), Laplacian eigenvectors (Dwivedi et al., 2020; Dwivedi & Bresson, 2021; Kreuzer et al., 2021; Lim et al., 2022) and k-step Random Walk (Li et al., 2020a; Dwivedi et al., 2021). All these graph PEs lead to GNNs strictly more powerful than the 1-WL test, which seems to be enough expressivity in practice (Zopf, 2022). However, none of the PEs proposed for graphs can provide a global position of the nodes that is unique, equivariant and distance sensitive.
This is due to the fact that a canonical positioning of nodes does not exist for arbitrary graphs, as there is no notion of up, down, left and right on graphs. For example, any embedding coordinate system like graph Laplacian eigenvectors (Belkin & Niyogi, 2003) can flip up-down directions, right-left directions, and would still be a valid PE. This introduces ambiguities for the GNNs, which must (learn to) be invariant with respect to the graph or PE symmetries. A well-known example is given by the eigenvectors: there exist $2^{k}$ possible sign flips for $k$ eigenvectors that must be learned by the network.

Issue of long-range dependencies. Another major limitation of MP-GNNs is the well-known issue of long-range dependencies. Standard MP-GNNs require $L$ layers to propagate the information from one node to its $L$-hop neighborhood. This implies that the receptive field size for GNNs can grow exponentially, for example with $O(2^{L})$ for binary tree graphs. This causes over-squashing: exponentially growing information is compressed into a fixed-length vector by the aggregation mechanism (Alon & Yahav, 2020; Topping et al., 2022). It is worth noting that the poor long-range modeling ability of deep GNNs can be caused by the combined effect of multiple factors, such as over-squashing, vanishing gradients, poor isomorphism expressivity, etc., but, in this work, we focus our effort on alleviating over-squashing s.a. Deac et al. (2022); Arnaiz-Rodriguez et al. (2022). Over-squashing has been well-known since recurrent neural networks (Hochreiter & Schmidhuber, 1997), which led to the development of the (self- and cross-)attention mechanisms, first for the translation task (Bahdanau et al., 2014; Vaswani et al., 2017), and then for more general NLP tasks (Devlin et al., 2018; Brown et al., 2020). Transformer architectures are the most elaborate networks that leverage attention, and have gained great success in NLP and computer vision (CV).
Several works have generalized the Transformer architecture to graphs, alleviating the issue of long-range dependencies and achieving competitive or superior performance against standard MP-GNNs. We highlight the most recent research works in the next paragraph.

Graph Transformers. Dwivedi & Bresson (2021) generalize Transformers to graphs, with graph Laplacian eigenvectors as node PE, and incorporate graph structure into the permutation-invariant attention function. SAN and LSPE (Kreuzer et al., 2021; Dwivedi et al., 2021) further improve with PE learned from Laplacian and random walk operators. GraphiT (Mialon et al., 2021) encodes relative PE derived from diffusion kernels into the attention mechanism. GraphTrans (Wu et al., 2021b) adds Transformers on top of standard GNN layers. SAT (Chen et al., 2022a) proposes a novel self-attention mechanism that incorporates structural information into the standard self-attention module by using a GNN to compute subgraph representations. Graphormer (Ying et al., 2021) introduces three structural encodings, with great success on large molecular benchmarks. GPS (Rampášek et al., 2022) categorizes the different types of PE and puts forward a hybrid MPNN+Transformer architecture. We refer to Min et al. (2022) for an overview of graph-structured Transformers. Generally, most Graph Transformer architectures address the problems of over-squashing and limited long-range dependencies in GNNs, but they also significantly increase the complexity from $O(E)$ to $O(N^2)$, resulting in a computational bottleneck. A detailed description of related literature can be found in Appendix A.

# 2. Generalizing ViT/MLP-Mixer to Graphs

In the following, we explain the importance of generalizing the ViT/MLP-Mixer architectures from CV to graphs.

ViT and MLP-Mixer in computer vision.
Transformers have gained remarkable success in CV and NLP, most notably with architectures like ViT (Dosovitskiy et al., 2020) and BERT (Devlin et al., 2018). The success of transformers has long been attributed to the attention mechanism (Vaswani et al., 2017), which is able to model long-range dependency by making "everything connected to everything". But recently, this prominent line of networks has been challenged by more cost-efficient alternatives. A novel family of models based on the MLP-Mixer introduced by Tolstikhin et al. (2021) has emerged and gained recognition for its simplicity and its efficient implementation.

Table 1. Differences between ViT/MLP-Mixer components for images and graphs.

| | Images | Graphs |
| --- | --- | --- |
| Input | Regular grid; same data resolution (Height, Width) | Irregular domain; variable data structure (# Nodes and # Edges) |
| Patch Extraction | Via pixel reordering; non-overlapping patches; same patches at each epoch | Via graph clustering algorithm; overlapping patches; different patches at each epoch |
| Patch Encoder | Same patch resolution (Patch Height, Patch Width); MLP (equivalently CNN) | Variable patch structure (# Nodes and # Edges); GNN (e.g. GCN, GAT, GT) |
| Positional Information | Implicitly ordered (no need for explicit PE) | No universal ordering; node PE for patch encoder; patch PE for token mixer |
| ViT / MLP-Mixer | MLP / Channel mixer; MHA / Token mixer | MLP / Channel mixer; gMHA / Token mixer |

Overall, MLP-Mixer replaces the attention module with multi-layer perceptrons (MLPs), which are also not affected by over-squashing and poor long-range interaction. The original architecture is simple (Tolstikhin et al., 2021); it takes image patches (or tokens) as inputs, encodes them with a linear layer (equivalent to a convolutional layer over the image patches), and updates their representations with a series of feed-forward layers applied alternatively to image patches (or tokens) and features. These plain networks (Tolstikhin et al., 2021; Touvron et al., 2021; Liu et al., 2021; Wang et al., 2022) can perform competitively with state-of-the-art (SOTA) vision Transformers, which tends to indicate that attention is not the only important inductive bias, but that other elements like the general architecture of Transformers with patch embedding, residual connection and layer normalization, and carefully-curated data augmentation techniques seem to play essential roles as well (Yu et al., 2022).

The benefits of generalizing ViT/MLP-Mixer from grids to graphs. Standard MP-GNNs have linear learning/inference complexities but low representation power and poor long-range dependency. Graph Transformers address these two problems but lose the computational efficiency, with a quadratic complexity price. A generalization of ViT/MLP-Mixer to graphs overcomes the computational bottleneck of Graph Transformers and solves the issue of long-distance dependency, hence going beyond standard MP-GNNs. However, achieving such a successful generalization is challenging given the irregular and variable nature of graphs. See Section 3 for a detailed presentation of these challenges.

Contribution. Our contributions are listed as follows.
+ +- We identify the key challenges to generalize ViT/MLP-Mixer from images to graphs and design a new efficient class of GNNs, namely Graph ViT/MLP-Mixer, that simultaneously captures long-range dependency, keeps linear speed/memory complexity, and achieves high graph isomorphic expressivity. +- We show competitive results on multiple benchmarks from Benchmarking GNNs (Dwivedi et al., 2020) and the Open Graph Benchmark (OGB) (Hu et al., 2020); specifically, with 0.073 MAE on ZINC and 0.7997 ROCAUC on MolHIV. +- We demonstrate the capacity of the proposed model to capture long-range dependencies with SOTA + +performance on Long Range Graph Benchmark (LRGB) (Dwivedi et al., 2022) while keeping low complexity, to mitigate the over-squashing issue on the TreeNeighbourMatch dataset (Alon & Yahav, 2020), and to reach the 3-WL expressive power on the SR25 dataset (Balcilar et al., 2021). + +- Our approach forms a bridge between CV, NLP and graphs under a unified architecture, that can potentially benefit cross-over domain collaborations to design better networks. + +# 3. Generalization Challenges + +We list the main questions when adapting MLP-Mixer from images to graphs in the following and in Table 1. + +(1) How to define and extract graph patches/tokens? One notable geometrical property that distinguishes graph-structured data from regular structured data, such as images and sequences, is that there does not exist in general a canonical grid to embed graphs. As shown in Table 1, images are supported by a regular lattice, which can be easily split into multiple grid-like patches (also referred to as tokens) of the same size via fast pixel reordering. However, graph data is irregular: the number of nodes and edges in different graphs is typically different. Hence, graphs cannot be uniformly divided into similar patches across all examples in the dataset. 
Finally, the extraction process for graph patches cannot be uniquely defined given the lack of a canonical graph embedding. This raises the questions of how we identify meaningful graph tokens, and how we quickly extract them.

(2) How to encode graph patches into a vectorial representation? Since images can be reshaped into patches of the same size, they can be linearly encoded with an MLP, or equivalently with a convolutional layer with kernel size and stride values equal to the patch size. However, graph patches are not all the same size: they have variable topology with different numbers of nodes, edges and connectivity. Another important difference is the absence of a unique node ordering for graphs, which constrains the process to be invariant to node re-indexing for generalization purposes. In summary, we need a process that can transform graph patches into a fixed-length vectorial representation for arbitrary subgraph structures while being permutation invariant.

(3) How to preserve positional information for nodes and graph patches? As shown in Table 1, image patches in the sequence have implicit positions since image data is always ordered the same way due to its unique embedding in the Euclidean space. For instance, the image patch at the upper-left corner is always the first one in the sequence and the image patch at the bottom-right corner is the last one. On this basis, the token mixing operation of the MLP-Mixer is able to fuse the same patch information. However, graphs are naturally unaligned and the set of graph patches is therefore unordered. We face a similar issue when we consider the positions of nodes within each graph patch. In images, the pixels in each patch are always ordered the same way; in contrast, nodes in graph tokens are naturally unordered. Thus, how do we preserve local and global positional consistency for nodes and graph patches?

(4) How to reduce over-fitting for Graph ViT/MLP-Mixer?
ViT/MLP-Mixer architectures are known to be strong over-fitters (Liu et al., 2021). Most MLP-variants (Tolstikhin et al., 2021; Touvron et al., 2021; Wang et al., 2022) first pre-train on large-scale datasets, and then fine-tune on downstream tasks, coupled with a rich set of data augmentation and regularization techniques, e.g. cropping, random horizontal flipping, RandAugment (Cubuk et al., 2020), mixup (Zhang et al., 2017), etc. While data augmentation has drawn much attention in CV and NLP, graph data augmentation methods are not yet as effective, despite interest and work on this topic (Zhao et al., 2020). The variable numbers of nodes, edges and connectivity patterns make graph augmentation challenging. Thus, how do we augment graph-structured data given this nature of graphs?

# 4. Proposed Architecture

# 4.1. Overview

The basic architecture of Graph MLP-Mixer is illustrated in Figure 1. The goal of this section is to detail the choices we made to implement each component of the architecture. On the whole, these choices lead to a simple framework that provides speed and quality results.

Notation. Let $G = (\mathcal{V}, \mathcal{E})$ be a graph with $\mathcal{V}$ being the set of nodes and $\mathcal{E}$ the set of edges. The graph has $N = |\mathcal{V}|$ nodes and $E = |\mathcal{E}|$ edges. The connectivity of the graph is represented by the adjacency matrix $A \in \mathbb{R}^{N \times N}$. The node features of node $i$ are denoted by $h_i$, while the features for an edge between nodes $i$ and $j$ are indicated by $e_{ij}$. Let $\{\mathcal{V}_1, \dots, \mathcal{V}_P\}$ be the node partition, $P$ be the pre-defined number of patches, and $G_i = (\mathcal{V}_i, \mathcal{E}_i)$ be the induced subgraph of $G$ with all the nodes in $\mathcal{V}_i$ and all the edges whose endpoints belong to $\mathcal{V}_i$. Let $h_G$ be the graph-level representation and $y_G$ be the graph-level target.

# 4.2. Patch Extraction

![](images/40c253e7863935cf961944eaa960e025c0ae8992d29ba3dcf073570bafdcf555.jpg)
Figure 1. The basic architecture of the proposed Graph ViT/MLP-Mixer. It consists of a patch extraction module, a patch embedding module, a sequence of mixer layers, a global average pooling, and a classifier head. The patch extraction module partitions graphs into overlapping patches. The patch embedding module transforms these graph patches into corresponding token representations, which are fed into a sequence of mixer layers to generate the output tokens. A global average pooling layer followed by a fully-connected layer is finally used for prediction. Each Mixer Layer, MLP or graph-based multi-head attention (gMHA), is a residual network that alternates between a Token Mixer applied to all patches, and a Channel Mixer applied to each patch independently (see right side).

![](images/01d87eed8ea646d6b29ca3ceea1d554f38f153b52c52cba7fe0bddcdda550f1b.jpg)

When generalizing MLP-Mixer to graphs, the first step is to extract patches. This extraction is straightforward for images. Indeed, all image data $x \in \mathbb{R}^{H \times W \times C}$ are defined on a regular grid with the same fixed resolution $(H, W)$, where $H$ and $W$ are respectively the height and the width, and $C$ is the number of channels. Hence, all images can be easily reshaped into a sequence of flattened patches $x_{p} \in \mathbb{R}^{P \times (R^2 C)}$, where $(R, R)$ is the resolution of each image patch, and $P = HW / R^2$ is the resulting number of patches.

Unlike images with fixed resolution, extracting graph patches is more challenging, see Table 1. Generally, graphs have different sizes, i.e. numbers of nodes, and therefore cannot be uniformly divided like image data. Additionally, meaningful sub-graphs must be identified, in the sense that the nodes and edges composing a patch must share similar semantics or information, s.a.
a community of friends sharing biking interest in a social network. As such, a graph patch extraction process must satisfy the following conditions: 1) the same extraction algorithm can be applied to any arbitrary graph, 2) the nodes in a sub-graph patch must be more closely connected to each other than to those outside the patch, and 3) the extraction must be fast, that is at most linear w.r.t. the number of edges, i.e. $O(E)$.

METIS (Karypis & Kumar, 1998) is a graph clustering algorithm with one of the best trade-offs between accuracy and speed. It partitions a graph into a pre-defined number of clusters such that the number of within-cluster links is much higher than the number of between-cluster links, in order to better capture good community structure. Given these properties, we select it as our patch extraction algorithm. However, METIS is limited to finding non-overlapping clusters, as visualized in Figure 1. In this example, METIS partitions the graph into four non-overlapping parts, i.e. $\{1,2,3\}$, $\{4,5,6\}$, $\{7,8,9\}$ and $\{10,11,12\}$, resulting in 5 edge cuts. Unlike images, extracting non-overlapping patches could imply losing important edge information, i.e. the cut edges, and thus decreasing the predictive performance, as we will observe experimentally. To overcome this issue and to retain all original edges, we allow graph patches to overlap with each other. For example in Figure 1, if the source and destination nodes of an edge are not in the same patch, we assign both nodes to the patches they belong to. As such, node 3 and node 4 are in two different patches, here the blue and red one, but are connected with each other. After our overlapping adjustment, these two nodes belong to both the blue and red patches. This practice is equivalent to expanding the graph patches to the one-hop neighbourhood of all nodes in that patch.
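The overlap adjustment can be sketched in a few lines (using networkx; the hard-coded partition stands in for METIS output, and the helper name and toy graph are ours):

```python
import networkx as nx

def expand_patches(G, partition):
    """Expand each patch of a node partition to its one-hop neighbourhood,
    so that every edge cut by the partition lies inside at least one patch."""
    expanded = []
    for part in partition:
        nodes = set(part)
        for v in part:
            nodes.update(G.neighbors(v))  # add the 1-hop neighbours of each patch node
        expanded.append(nodes)
    return expanded

# Toy graph: two triangles {1,2,3} and {4,5,6} joined by the edge (3, 4),
# mimicking the blue/red patches of Figure 1.
G = nx.Graph([(1, 2), (2, 3), (1, 3), (4, 5), (5, 6), (4, 6), (3, 4)])
patches = expand_patches(G, [[1, 2, 3], [4, 5, 6]])
# Nodes 3 and 4 now belong to both patches, so the cut edge (3, 4) is preserved.
```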
Formally, METIS is first applied to partition a graph into $P$ non-overlapping patches $\{\mathcal{V}_1,\dots,\mathcal{V}_P\}$ such that $\mathcal{V} = \mathcal{V}_1\cup \ldots \cup \mathcal{V}_P$ and $\mathcal{V}_i\cap \mathcal{V}_j = \emptyset, \forall i\neq j$. Then, patches are expanded to their one-hop neighbourhood in order to preserve the information of between-patch links and make use of all graph edges: $\mathcal{V}_i \gets \mathcal{V}_i\cup \{\mathcal{N}_1(j)\mid j\in \mathcal{V}_i\}$, where $\mathcal{N}_k(j)$ denotes the $k$-hop neighbourhood of node $j$.

# 4.3. Patch Encoder

For images, patch encoding can be done with a simple linear transformation given the fixed resolution of all image patches. This operation is fast and well-defined. For graphs, the patch encoder network must be able to handle complex data structures, i.e. invariance to index permutation, heterogeneous neighborhoods, variable patch sizes and convolution on graphs, while being expressive enough to differentiate graph isomorphisms. As a result, the graph patch encoder is a GNN, whose architecture is designed to best transform a graph
We apply a series of $L$ convolution layers, where the node and edge representations are updated with a MP-GNN applied to each graph patch $G_{p} = (\mathcal{V}_{p},\mathcal{E}_{p})$ as follows: + +$$ +\begin{array}{l} h _ {i, p} ^ {\ell + 1} = f _ {\text {n o d e}} \left(h _ {i, p} ^ {\ell}, \left\{h _ {j, p} ^ {\ell} \mid j \in \mathcal {N} (i) \right\}, e _ {i j, p} ^ {\ell}\right) + g _ {\text {p a t c h - n o d e}} \left(h _ {p} ^ {\ell}\right), \\ e _ {i j, p} ^ {\ell + 1} = f _ {\text {e d g e}} \left(h _ {i, p} ^ {\ell}, h _ {i, p} ^ {\ell}, e _ {i j, p} ^ {\ell}\right) + g _ {\text {p a t c h - e d g e}} \left(e _ {p} ^ {\ell}\right), \tag {2} \\ \end{array} +$$ + +where $h_{i,p}^{\ell +1}, h_{i,p}^{\ell}, h_{p}^{\ell}, e_{ij,p}^{\ell +1}, e_{ij,p}^{\ell}, e_{p}^{\ell} \in \mathbb{R}^{d}, \ell$ is the layer index, $p$ is the patch index, $i, j$ denotes the nodes, $\mathcal{N}(i)$ is the neighborhood of the node $i$ and functions $f_{\mathrm{node}}$ and $f_{\mathrm{edge}}$ (with learnable parameters) define any arbitrary MP-GNN architecture s.a. (Kipf & Welling, 2017; Bresson & Laurent, 2017; Hu et al., 2019; Dwivedi & Bresson, 2021), $h_p^\ell = \frac{1}{|\mathcal{V}_p|}\sum_{i\in \mathcal{V}_p}h_{i,p}^l \in \mathbb{R}^d$ , $e_p^\ell = \frac{1}{|\mathcal{E}_p|}\sum_{ij\in \mathcal{E}_p}e_{ij,p}^l \in \mathbb{R}^d$ are respectively the mean representations of the patch nodes and patch edges, and $g_{\mathrm{patch - node}}$ , $g_{\mathrm{patch - edge}}$ are MLP-based functions that act on $h_p^\ell$ and $e_p^\ell$ . 
Nodes and edges can be covered by more than one patch, since patches overlap after the one-hop expansion that recovers the edges cut by METIS. For each such node and edge, we update the representation by averaging over the overlapping patches:

$$
h_{i,p}^{\ell+1} \leftarrow \operatorname*{Mean}_{\{k \mid i \in \mathcal{V}_k\}} h_{i,k}^{\ell+1}; \quad e_{ij,p}^{\ell+1} \leftarrow \operatorname*{Mean}_{\{k \mid ij \in \mathcal{E}_k\}} e_{ij,k}^{\ell+1}, \tag{3}
$$

where $\{k\mid i\in \mathcal{V}_k\}$ and $\{k\mid ij\in \mathcal{E}_k\}$ are the sets of all patches that cover node $i$ and edge $ij$, respectively.

Step 3. Pooling and readout. The final step mean-pools all node vectors in $G_{p}$, $h_{p} = \frac{1}{|\mathcal{V}_{p}|}\sum_{i\in \mathcal{V}_{p}}h_{i,p}^{\ell = L}\in \mathbb{R}^{d}$, and applies a small MLP to get the fixed-size patch embedding $x_{G_p}\in \mathbb{R}^d$.

Observe that the patch encoder is an MP-GNN, and thus inherits its poor long-range dependency. Does this prevent the Graph MLP-Mixer from capturing long-range interactions? No: the problem only arises on large graphs, and for small patch graphs it is negligible. To give a few numbers, the mean number of nodes and the mean diameter of the graph patches are around 3.15 and 1.82 respectively for molecular datasets, and around 17.20 and 3.07 for image datasets; see Table 7.

# 4.4. Positional Information

Regular grids offer a natural implicit arrangement for the sequence of image patches and for the pixels inside each image patch. However, such an ordering of nodes and patches does not exist for general graphs. This lack of positional information reduces the expressivity of the network. Hence, we use two explicit positional encodings (PE): an absolute PE for the patch nodes and a relative PE for the graph patches.

Node PE.
Input node features in Eq. (1) are augmented with a positional encoding $p_i \in \mathbb{R}^K$ through a learnable matrix $T^0 \in \mathbb{R}^{d \times K}$:

$$
h_i^0 = T^0 p_i + U^0 \alpha_i + u^0 \in \mathbb{R}^d. \tag{4}
$$

The benefits of different PEs are dataset dependent. We follow the strategy in (Rampášek et al., 2022) that uses the random-walk structural encoding (RWSE) (Dwivedi et al., 2021) for molecular data and Laplacian eigenvector encodings (Dwivedi et al., 2020) for image superpixels. Since Laplacian eigenvectors are defined up to sign flips, the sign of the eigenvectors is randomly flipped during training.

Patch PE. Relative positional information between the graph patches can be computed from the original graph adjacency matrix $A \in \mathbb{R}^{N \times N}$ and the clusters $\{\mathcal{V}_1, \dots, \mathcal{V}_P\}$ extracted by METIS in Section 4.2. Specifically, we capture relative positional information via the coarsened adjacency matrix $A^P \in \mathbb{R}^{P \times P}$ over the patch graphs:

$$
A_{ij}^P = \operatorname{Cut}\left(\mathcal{V}_i, \mathcal{V}_j\right), \tag{5}
$$

where $\operatorname{Cut}(\mathcal{V}_i,\mathcal{V}_j) = \sum_{k\in \mathcal{V}_i}\sum_{l\in \mathcal{V}_j}A_{kl}$ is the standard graph cut operator, which counts the number of edges connecting cluster $\mathcal{V}_i$ and cluster $\mathcal{V}_j$.

We extract the positional encoding $\hat{p}_i\in \mathbb{R}^{\tilde{K}}$ at the patch level, similarly to the node level; it is injected (after a linear transformation) into the first layer of the mixer block:

$$
x_i^0 = \hat{T}^0 \hat{p}_i + \hat{U}^0 x_i + \hat{u}^0 \in \mathbb{R}^d, \tag{6}
$$

where $x_{i}$ is the patch embedding.

# 4.5. Mixer Layer

For images, the original mixer layer (Tolstikhin et al., 2021) is a simple network that alternates token-mixing and channel-mixing steps. The token-mixing step is performed over the token dimension, while the channel-mixing step is carried out over the channel dimension. These two interleaved steps enable information fusion among tokens and channels. The simplicity of the mixer layer yields a significant reduction in computational cost with little or no sacrifice in performance: the self-attention mechanism in ViT requires $O(P^{2})$ memory and $O(P^{2})$ computation, while the mixer layer in MLP-Mixer needs only $O(P)$ memory and $O(P)$ computation.

Table 2. Comparison with base MP-GNNs. Results are averaged over 4 runs with 4 different seeds.
| Model | ZINC (MAE ↓) | MNIST (Accuracy ↑) | CIFAR10 (Accuracy ↑) | MolTOX21 (ROCAUC ↑) | MolHIV (ROCAUC ↑) | Peptides-func (AP ↑) | Peptides-struct (MAE ↓) |
| --- | --- | --- | --- | --- | --- | --- | --- |
| GCN | 0.1952 ± 0.0057 | 0.9269 ± 0.0023 | 0.5423 ± 0.0056 | 0.7525 ± 0.0031 | 0.7813 ± 0.0081 | 0.6328 ± 0.0086 | 0.2758 ± 0.0012 |
| GCN-MLP-Mixer | 0.1347 ± 0.0020 | 0.9516 ± 0.0027 | 0.6111 ± 0.0017 | 0.7816 ± 0.0075 | 0.7929 ± 0.0111 | 0.6832 ± 0.0061 | 0.2486 ± 0.0041 |
| GCN-ViT | 0.1688 ± 0.0095 | 0.9600 ± 0.0015 | 0.6367 ± 0.0027 | 0.7820 ± 0.0096 | 0.7780 ± 0.0120 | 0.6855 ± 0.0049 | 0.2468 ± 0.0015 |
| GatedGCN | 0.1577 ± 0.0046 | 0.9776 ± 0.0017 | 0.6628 ± 0.0017 | 0.7641 ± 0.0057 | 0.7874 ± 0.0119 | 0.6300 ± 0.0029 | 0.2778 ± 0.0017 |
| GatedGCN-MLP-Mixer | 0.1244 ± 0.0053 | 0.9832 ± 0.0004 | 0.7060 ± 0.0022 | 0.7910 ± 0.0040 | 0.7976 ± 0.0136 | 0.6932 ± 0.0017 | 0.2508 ± 0.0007 |
| GatedGCN-ViT | 0.1421 ± 0.0031 | 0.9846 ± 0.0009 | 0.7158 ± 0.0009 | 0.7857 ± 0.0028 | 0.7734 ± 0.0114 | 0.6942 ± 0.0075 | 0.2465 ± 0.0015 |
| GINE | 0.1072 ± 0.0037 | 0.9705 ± 0.0023 | 0.6131 ± 0.0035 | 0.7730 ± 0.0064 | 0.7885 ± 0.0034 | 0.6405 ± 0.0077 | 0.2780 ± 0.0021 |
| GINE-MLP-Mixer | 0.0733 ± 0.0014 | 0.9809 ± 0.0004 | 0.6833 ± 0.0022 | 0.7868 ± 0.0043 | 0.7997 ± 0.0102 | 0.6970 ± 0.0080 | 0.2494 ± 0.0007 |
| GINE-ViT | 0.0849 ± 0.0047 | 0.9820 ± 0.0005 | 0.6967 ± 0.0040 | 0.7851 ± 0.0077 | 0.7792 ± 0.0149 | 0.6919 ± 0.0085 | 0.2449 ± 0.0016 |
| GraphTrans | 0.1230 ± 0.0018 | 0.9782 ± 0.0012 | 0.6809 ± 0.0020 | 0.7646 ± 0.0055 | 0.7884 ± 0.0104 | 0.6313 ± 0.0039 | 0.2777 ± 0.0025 |
| GraphTrans-MLP-Mixer | 0.0773 ± 0.0030 | 0.9742 ± 0.0011 | 0.7396 ± 0.0033 | 0.7817 ± 0.0040 | 0.7969 ± 0.0061 | 0.6858 ± 0.0062 | 0.2480 ± 0.0013 |
| GraphTrans-ViT | 0.0960 ± 0.0073 | 0.9725 ± 0.0023 | 0.7211 ± 0.0055 | 0.7835 ± 0.0032 | 0.7755 ± 0.0208 | 0.6876 ± 0.0059 | 0.2455 ± 0.0027 |
Let $X\in \mathbb{R}^{P\times d}$ be the patch embeddings $\{x_{G_1},\dots,x_{G_P}\}$. The graph mixer layer can be expressed as

$$
U = X + W_2\, \sigma\left(W_1 \operatorname{LayerNorm}(X)\right) \in \mathbb{R}^{P \times d},
$$

$$
Y = U + \left(W_4\, \sigma\left(W_3 \operatorname{LayerNorm}(U)^T\right)\right)^T \in \mathbb{R}^{P \times d}, \tag{7}
$$

where $\sigma$ is a GELU nonlinearity (Hendrycks & Gimpel, 2016), $\operatorname{LayerNorm}(\cdot)$ is layer normalization (Ba et al., 2016), and $W_{1} \in \mathbb{R}^{d_{s} \times P}$, $W_{2} \in \mathbb{R}^{P \times d_{s}}$, $W_{3} \in \mathbb{R}^{d_{c} \times d}$, $W_{4} \in \mathbb{R}^{d \times d_{c}}$ are learnable matrices, with $d_{s}$ and $d_{c}$ the tunable hidden widths of the token-mixing and channel-mixing MLPs.

Alternatively, we can formulate a graph transformer layer that incorporates the self-attention mechanism as in ViT:

$$
U = X + \operatorname{gMHA}(\operatorname{LayerNorm}(X)) \in \mathbb{R}^{P \times d}, \tag{8}
$$

$$
Y = U + \operatorname{MLP}(\operatorname{LayerNorm}(U)) \in \mathbb{R}^{P \times d},
$$

where gMHA (graph-based multi-head attention) is designed to capture token dependencies based on the given graph topology. In Eq. (8), gMHA is defined as $\left(A^{P} \odot \operatorname{softmax}\left(\frac{Q K^{T}}{\sqrt{d}}\right)\right)V$, but other options are possible to characterize the gMHA mechanism, as studied in Appendix F.

Then we generate the final graph-level representation by mean pooling over all the non-empty patches:

$$
h_G = \frac{\sum_p m_p \cdot x_{G_p}}{\sum_p m_p} \in \mathbb{R}^d, \tag{9}
$$

where $m_p$ is a binary variable equal to 1 for non-empty patches and 0 for empty ones (since graphs have variable sizes, small graphs can produce empty patches).
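To make the shapes concrete, here is a minimal NumPy sketch of the mixer layer and the masked mean readout of Eq. (9). The random weights, the tanh-approximated GELU, and the helper names are illustrative, not the trained model:

```python
import numpy as np

def layer_norm(x, eps=1e-5):
    # normalize each row over the channel dimension
    return (x - x.mean(-1, keepdims=True)) / np.sqrt(x.var(-1, keepdims=True) + eps)

def gelu(x):
    # tanh approximation of GELU (Hendrycks & Gimpel, 2016)
    return 0.5 * x * (1.0 + np.tanh(np.sqrt(2.0 / np.pi) * (x + 0.044715 * x**3)))

def graph_mixer_layer(X, W1, W2, W3, W4):
    """X: (P, d) patches; W1: (d_s, P), W2: (P, d_s), W3: (d_c, d), W4: (d, d_c)."""
    U = X + W2 @ gelu(W1 @ layer_norm(X))        # token mixing across the P patches
    Y = U + (W4 @ gelu(W3 @ layer_norm(U).T)).T  # channel mixing across the d channels
    return Y

def readout(X, mask):
    """Eq. (9): mean of patch embeddings over non-empty patches (mask[p] = 1)."""
    m = np.asarray(mask, dtype=float)[:, None]
    return (m * X).sum(axis=0) / m.sum()
```

Note how the token-mixing matrices act on the patch dimension $P$ while the channel-mixing matrices act on the feature dimension $d$, so the layer's cost grows linearly in $P$ rather than quadratically as in full self-attention.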
Finally, we apply a small MLP to get the graph-level target:

$$
y_G = \operatorname{MLP}(h_G) \in \mathbb{R} \ \text{(regression)} \ \text{or} \ \mathbb{R}^{n_c} \ \text{(classification)}. \tag{10}
$$

# 4.6. Data Augmentation

MLP-Mixer architectures are known to be strong overfitters (Liu et al., 2021). To reduce this effect, we propose to introduce perturbations into METIS as follows.

Let $G = (\mathcal{V}, \mathcal{E})$ be the original graph and $G' = (\mathcal{V}, \mathcal{E}')$ the graph obtained by randomly dropping a small set of edges. At each epoch, we apply the METIS graph partitioning algorithm to $G'$ to get slightly different node partitions $\{\mathcal{V}_1, \dots, \mathcal{V}_P\}$. Then, we extract the graph patches $\{G_1, \dots, G_P\}$, where $G_i = (\mathcal{V}_i, \mathcal{E}_i)$ is the induced subgraph of the original graph $G$, not of the modified $G'$. This way, we produce distinct graph patches at each epoch while retaining all the nodes and edges of the original graph.

# 5. Experiments

Graph Benchmark Datasets. We evaluate our Graph ViT/MLP-Mixer on a wide range of graph benchmarks: 1) simulated datasets: CSL, EXP, SR25 and the TreeNeighbourMatch dataset; 2) small real-world datasets: ZINC, MNIST and CIFAR10 from Benchmarking GNNs (Dwivedi et al., 2020), and MolTOX21 and MolHIV from OGB (Hu et al., 2020); and 3) large real-world datasets: Peptides-func and Peptides-struct from LRGB (Dwivedi et al., 2022). Details are provided in Appendix B and Appendix C.

# 5.1. Comparison with MP-GNNs

We show in Table 2 that Graph ViT/MLP-Mixer lifts the performance of all base MP-GNNs across various datasets, including GCN (Kipf & Welling, 2017), GatedGCN (Bresson & Laurent, 2017), GINE (Hu et al., 2019) and Graph Transformer (Dwivedi & Bresson, 2021). We augmented all the base models with the same type of PE as Graph MLP-Mixer to ensure a fair comparison.
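The patch-perturbation augmentation of Section 4.6 can be sketched as follows. This is a toy stand-in, not the paper's code: `perturbed_patches` is a hypothetical helper, and the random balanced split merely substitutes for METIS (which would instead minimize edge cuts on G'):

```python
import random

def perturbed_patches(nodes, edges, P, p_drop=0.1, seed=0):
    """Section 4.6 sketch: partition the edge-dropped graph G',
    then induce patches from the ORIGINAL edges of G."""
    rng = random.Random(seed)
    e_prime = [uv for uv in edges if rng.random() > p_drop]  # G' = (V, E')

    # stand-in partitioner on G' (METIS would minimize the edge cut)
    shuffled = list(nodes)
    rng.shuffle(shuffled)
    parts = [set(shuffled[i::P]) for i in range(P)]
    _ = e_prime  # a real partitioner would use the perturbed edge set

    patches = []
    for V_i in parts:
        # one-hop expansion (Section 4.2) so every cut edge lands in some patch
        V_i = V_i | {w for (u, v) in edges if u in V_i or v in V_i for w in (u, v)}
        # induce the subgraph from the original graph G, not from G'
        E_i = [(u, v) for (u, v) in edges if u in V_i and v in V_i]
        patches.append((V_i, E_i))
    return patches
```

Because the patches are induced from the original edge set after the one-hop expansion, every edge of G appears in at least one patch even though the partition itself changes from epoch to epoch.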
These promising results demonstrate the generic nature of our proposed architecture, which can be applied to any MP-GNN in practice. Remarkably, Graph ViT/MLP-Mixer outperforms the base MP-GNNs by large margins on the two LRGB (Dwivedi et al., 2022) datasets: we observe an average 0.056 Average Precision improvement on Peptides-func and an average 0.028 MAE decrease on Peptides-struct, which verifies its superiority over MP-GNNs in capturing

Table 3. Comparison of our best results from Table 2 with state-of-the-art models (missing values from the literature are indicated with '-'). Results are averaged over 4 runs with 4 different seeds.
| Model | ZINC (MAE ↓) | MolHIV (ROCAUC ↑) | Peptides-func (AP ↑) | Time | Mem. | Peptides-struct (MAE ↓) | Time | Mem. |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| GT (Dwivedi et al., 2020) | 0.226 ± 0.014 | - | - | - | - | - | - | - |
| GraphiT (Mialon et al., 2021) | 0.202 ± 0.011 | - | - | - | - | - | - | - |
| Graphormer (Ying et al., 2021) | 0.122 ± 0.006 | - | - | - | - | - | - | - |
| GPS (Rampášek et al., 2022) | 0.070 ± 0.004 | 0.7880 ± 0.0101 | 0.6562 ± 0.0115 | 1.4× | 6.8× | 0.2515 ± 0.0012 | 1.3× | 8.3× |
| SAN+LapPE (Kreuzer et al., 2021) | 0.139 ± 0.006 | 0.7775 ± 0.0061 | 0.6384 ± 0.0121 | 9.4× | 12.4× | 0.2683 ± 0.0043 | 8.8× | 14.7× |
| SAN+RWSE (Kreuzer et al., 2021) | - | - | 0.6439 ± 0.0075 | 8.0× | 19.5× | 0.2545 ± 0.0012 | 7.9× | 14.5× |
| GNN-AK+ (Zhao et al., 2021) | 0.080 ± 0.001 | 0.7961 ± 0.0119 | 0.6480 ± 0.0089 | 2.6× | 7.8× | 0.2736 ± 0.0007 | 2.5× | 9.2× |
| SUN (Frasca et al., 2022) | 0.084 ± 0.002 | 0.8003 ± 0.0055 | 0.6730 ± 0.0078 | 43.8× | 18.8× | 0.2498 ± 0.0008 | 42.7× | 20.7× |
| CIN (Bodnar et al., 2021) | 0.079 ± 0.006 | 0.8094 ± 0.0057 | - | - | - | - | - | - |
| Graph MLP-Mixer | 0.073 ± 0.001 | 0.7997 ± 0.0102 | 0.6970 ± 0.0080 | 1.0× | 1.0× | 0.2475 ± 0.0015 | 1.0× | 1.2× |
| Graph ViT | 0.085 ± 0.005 | 0.7792 ± 0.0149 | 0.6942 ± 0.0075 | 1.1× | 0.8× | 0.2449 ± 0.0016 | 1.0× | 1.0× |
long-range interactions.

# 5.2. Comparison with SOTAs

Next, we compare Graph ViT/MLP-Mixer against popular GNN models with SOTA results, including Graph Transformers (GraphiT, GPS, SAN, etc.) and expressive GNNs (GNN-AK+ and SUN), as shown in Table 3. For small molecular graphs, our model achieves 0.073 MAE on ZINC and 0.7997 ROCAUC on MolHIV. For larger molecular graphs, our model sets new SOTA performance with the best scores of 0.6970 AP on Peptides-func and 0.2449 MAE on Peptides-struct.

Besides accuracy, Graph ViT/MLP-Mixer offers better space-time complexity and scalability. Theoretically, most Graph Transformer models and expressive GNNs may be computationally infeasible for large graphs, as they need to compute full attention or to run an inner GNN on every node of the graph, respectively. Experimentally, we observed that, when training on datasets with hundreds of nodes, SAN+LapPE (Kreuzer et al., 2021) and SUN (Frasca et al., 2022) require $9.4 \times$ and $43.8 \times$ the training time per epoch, and $12.4 \times$ and $18.8 \times$ the memory, respectively, compared to our model (Table 3 and Table 17).

# 5.3. Graph ViT/MLP-Mixer can mitigate over-squashing

![](images/77d9239874b959d2b75220b3a6103a65db468280f5ce21ce38fb932cf96828ff.jpg)
Figure 2. Test accuracy across problem radius (tree depth) in the TreeNeighbourMatch problem.

TreeNeighbourMatch is a synthetic dataset proposed by Alon & Yahav (2020) to provide an intuition into over-squashing. Each example is a binary tree of depth $r$. The goal is to predict an alphabetical label for the target node, where the correct answer is the label of the leaf node that has the same degree as the target node. Figure 2 shows that standard MP-GNNs (i.e., GCN, GGCN, GAT and GIN) fail to generalize on this dataset from $r = 4$ onwards, whereas our model mitigates over-squashing and generalizes well up to $r = 7$.
As for why this happens, Alon & Yahav (2020) show that GNNs fail to solve larger TreeNeighbourMatch instances because they 'squash' information about the graph into the target node's embedding, which can hold only a limited amount of information. In contrast, Graph ViT/MLP-Mixer avoids this problem as it transmits long-range information directly, without 'squashing'. Concretely, Appendix I shows a simple construction illustrating that our model can solve TreeNeighbourMatch instances while avoiding the inherent limitations of MP-GNNs discussed by Alon & Yahav (2020).

# 5.4. Graph ViT/MLP-Mixer can achieve empirically high expressivity

Table 4. Empirical evaluation of expressive power on simulated datasets, averaged over 4 runs with 4 different seeds.
| Model | CSL (ACC) | EXP (ACC) | SR25 (ACC) |
| --- | --- | --- | --- |
| GCN | 10.00 ± 0.00 | 51.90 ± 1.96 | 6.67 ± 0.00 |
| GatedGCN | 10.00 ± 0.00 | 51.73 ± 1.65 | 6.67 ± 0.00 |
| GINE | 10.00 ± 0.00 | 50.69 ± 1.39 | 6.67 ± 0.00 |
| GraphTrans | 10.00 ± 0.00 | 52.35 ± 2.32 | 6.67 ± 0.00 |
| GCN-MLP-Mixer | 100.00 ± 0.00 | 100.00 ± 0.00 | 100.00 ± 0.00 |
| GatedGCN-MLP-Mixer | 100.00 ± 0.00 | 100.00 ± 0.00 | 100.00 ± 0.00 |
| GINE-MLP-Mixer | 100.00 ± 0.00 | 100.00 ± 0.00 | 100.00 ± 0.00 |
| GraphTrans-MLP-Mixer | 100.00 ± 0.00 | 100.00 ± 0.00 | 100.00 ± 0.00 |
We experimentally evaluate the expressive power of Graph ViT/MLP-Mixer on three simulated datasets. CSL (Murphy et al., 2019) contains 150 4-regular graphs that cannot be distinguished by the 1-WL isomorphism test. EXP (Abboud et al., 2020) contains 600 pairs of non-isomorphic graphs that both the 1-WL and 2-WL tests fail to differentiate. Finally, SR25 (Balcilar et al., 2021) has 15 strongly regular graphs with 25 nodes each that cannot be discerned by the 3-WL test. Numerical experiments show that our model achieves perfect accuracy on all three datasets while MP-GNNs fail; see Table 4. Our results are only empirical: due to the non-local way in which information is passed from one layer to the next, a direct analytical comparison between the proposed neural network and the Weisfeiler-Lehman test is challenging.

# 5.5. Ablation Studies

In our ablation studies, we evaluate the choices made in the implementation of each component of the architecture; details are provided in the appendix. Appendix D focuses on the design of the patch extraction process, including the effects of the graph partitioning algorithm (Table 9), patch size (Figure 4), patch overlap (Figure 5), and other related aspects. Appendix E presents the effects of the two types of positional encoding, i.e., node PE and patch PE. Appendix G investigates the effect of data augmentation and explores the trade-off between performance and efficiency. In Appendix F, we delve into different designs of the gMHA mechanism in the Graph ViT. Additionally, we provide a complexity analysis in Appendix J and discuss limitations in Appendix K.

# 6. Conclusion

In this work, we proposed a novel GNN architecture that addresses key limitations of standard MP-GNNs, particularly their low expressive power and poor long-range dependency, and presented promising results on several benchmark graph datasets.
Future work will focus on further exploring graph network architectures with the inductive biases of graph tokens and vision-Transformer-like architectures in order to solve fundamental node and link prediction tasks, possibly without the need for specialized GNN libraries like PyG (Fey & Lenssen, 2019) or DGL (Zheng et al., 2020), by replacing sparse linear algebra operations on graph tokens with dense operations.

# Acknowledgments

XB is supported by NUS Grant ID R-252-000-B97-133 and BH was supported in part by NUS ODPRT Grant ID A-0008067-00-00. The authors would like to express their gratitude to the reviewers for their feedback, which has improved the clarity and contribution of the paper.

# References

Abboud, R., Ceylan, I. I., Grohe, M., and Lukasiewicz, T. The surprising power of graph neural networks with random node initialization. arXiv preprint arXiv:2010.01179, 2020.
Alon, U. and Yahav, E. On the bottleneck of graph neural networks and its practical implications. arXiv preprint arXiv:2006.05205, 2020.
Alsentzer, E., Finlayson, S., Li, M., and Zitnik, M. Subgraph neural networks. Advances in Neural Information Processing Systems, 33:8017-8029, 2020.
Arnaiz-Rodriguez, A., Begga, A., Escolano, F., and Oliver, N. M. Diffwire: Inductive graph rewiring via the lovász bound. In The First Learning on Graphs Conference, 2022.
Azizian, W. and Lelarge, M. Expressive power of invariant and equivariant graph neural networks. arXiv preprint arXiv:2006.15646, 2020.
Ba, J. L., Kiros, J. R., and Hinton, G. E. Layer normalization. arXiv preprint arXiv:1607.06450, 2016.
Bahdanau, D., Cho, K., and Bengio, Y. Neural machine translation by jointly learning to align and translate. arXiv preprint arXiv:1409.0473, 2014.
Balcilar, M., Héroux, P., Gauzere, B., Vasseur, P., Adam, S., and Honeine, P. Breaking the limits of message passing graph neural networks. In International Conference on Machine Learning, pp. 599-608. PMLR, 2021.
Bapst, V., Keck, T., Grabska-Barwińska, A., Donner, C., Cubuk, E. D., Schoenholz, S. S., Obika, A., Nelson, A. W., Back, T., Hassabis, D., et al. Unveiling the predictive power of static structure in glassy systems. Nature Physics, 16(4):448-454, 2020.
Belkin, M. and Niyogi, P. Laplacian eigenmaps for dimensionality reduction and data representation. Neural computation, 15(6):1373-1396, 2003.
Berg, R. v. d., Kipf, T. N., and Welling, M. Graph convolutional matrix completion. arXiv preprint arXiv:1706.02263, 2017.
Bodnar, C., Frasca, F., Otter, N., Wang, Y., Lio, P., Montúfar, G. F., and Bronstein, M. Weisfeiler and lehman go cellular: Cw networks. Advances in Neural Information Processing Systems, 34:2625-2640, 2021.
Bouritsas, G., Frasca, F., Zafeiriou, S. P., and Bronstein, M. Improving graph neural network expressivity via subgraph isomorphism counting. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2022.
Bresson, X. and Laurent, T. Residual gated graph convnets. arXiv preprint arXiv:1711.07553, 2017.
Brown, T., Mann, B., Ryder, N., Subbiah, M., Kaplan, J. D., Dhariwal, P., Neelakantan, A., Shyam, P., Sastry, G., Askell, A., et al. Language models are few-shot learners. Advances in neural information processing systems, 33:1877-1901, 2020.
Buluç, A., Meyerhenke, H., Safro, I., Sanders, P., and Schulz, C. Recent advances in graph partitioning. Algorithm engineering, pp. 117-158, 2016.
Chen, D., O'Bray, L., and Borgwardt, K. Structure-aware transformer for graph representation learning. In International Conference on Machine Learning, pp. 3469-3489. PMLR, 2022a.
Chen, J., Gao, K., Li, G., and He, K. Nagphormer: Neighborhood aggregation graph transformer for node classification in large graphs. arXiv preprint arXiv:2206.04910, 2022b.
Chen, Z., Chen, L., Villar, S., and Bruna, J. On the equivalence between graph isomorphism testing and function approximation with gnns. Advances in neural information processing systems, 2019.
Chen, Z., Chen, L., Villar, S., and Bruna, J. Can graph neural networks count substructures? Advances in neural information processing systems, 33:10383-10395, 2020.
Chung, F. R. Spectral graph theory, volume 92. American Mathematical Soc., 1997.
Cranmer, M. D., Xu, R., Battaglia, P., and Ho, S. Learning symbolic physics with graph networks. arXiv preprint arXiv:1909.05862, 2019.
Cubuk, E. D., Zoph, B., Shlens, J., and Le, Q. V. Randaugment: Practical automated data augmentation with a reduced search space. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition workshops, pp. 702-703, 2020.
Deac, A., Lackenby, M., and Veličković, P. Expander graph propagation. In The First Learning on Graphs Conference, 2022.
Defferrard, M., Bresson, X., and Vandergheynst, P. Convolutional neural networks on graphs with fast localized spectral filtering. In NIPS, 2016.
Derrow-Pinion, A., She, J., Wong, D., Lange, O., Hester, T., Perez, L., Nunkesser, M., Lee, S., Guo, X., Wiltshire, B., et al. Eta prediction with graph neural networks in google maps. In Proceedings of the 30th ACM International Conference on Information & Knowledge Management, pp. 3767-3776, 2021.
Devlin, J., Chang, M.-W., Lee, K., and Toutanova, K. Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805, 2018.
Dosovitskiy, A., Beyer, L., Kolesnikov, A., Weissenborn, D., Zhai, X., Unterthiner, T., Dehghani, M., Minderer, M., Heigold, G., Gelly, S., et al. An image is worth 16x16 words: Transformers for image recognition at scale. arXiv preprint arXiv:2010.11929, 2020.
Duvenaud, D. K., Maclaurin, D., Iparraguirre, J., Bombarell, R., Hirzel, T., Aspuru-Guzik, A., and Adams, R. P. Convolutional networks on graphs for learning molecular fingerprints. Advances in neural information processing systems, 28, 2015.
Dwivedi, V. P. and Bresson, X. A generalization of transformer networks to graphs.
In AAAI Workshop on Deep Learning on Graphs: Methods and Applications, 2021.
Dwivedi, V. P., Joshi, C. K., Laurent, T., Bengio, Y., and Bresson, X. Benchmarking graph neural networks. arXiv preprint arXiv:2003.00982, 2020.
Dwivedi, V. P., Luu, A. T., Laurent, T., Bengio, Y., and Bresson, X. Graph neural networks with learnable structural and positional representations. In International Conference on Learning Representations, 2021.
Dwivedi, V. P., Rampášek, L., Galkin, M., Parviz, A., Wolf, G., Luu, A. T., and Beaini, D. Long range graph benchmark. arXiv preprint arXiv:2206.08164, 2022.
Feng, J., Chen, Y., Li, F., Sarkar, A., and Zhang, M. How powerful are k-hop message passing graph neural networks. arXiv preprint arXiv:2205.13328, 2022.
Fey, M. and Lenssen, J. E. Fast graph representation learning with pytorch geometric. arXiv preprint arXiv:1903.02428, 2019.
Frasca, F., Bevilacqua, B., Bronstein, M. M., and Maron, H. Understanding and extending subgraph gnns by rethinking their symmetries. arXiv preprint arXiv:2206.11140, 2022.
Freitas, S., Dong, Y., Neil, J., and Chau, D. H. A large-scale database for graph representation learning. arXiv preprint arXiv:2011.07682, 2020.
Gaudelet, T., Day, B., Jamasb, A. R., Soman, J., Regep, C., Liu, G., Hayter, J. B., Vickers, R., Roberts, C., Tang, J., et al. Utilising graph machine learning within drug discovery and development. arXiv preprint arXiv:2012.05716, 2020.
Gilmer, J., Schoenholz, S. S., Riley, P. F., Vinyals, O., and Dahl, G. E. Neural message passing for quantum chemistry. In International Conference on Machine Learning, pp. 1263-1272. PMLR, 2017.
Hamilton, W. L., Ying, R., and Leskovec, J. Inductive representation learning on large graphs. In Proceedings of the 31st International Conference on Neural Information Processing Systems, pp. 1025-1035, 2017.
Han, K., Wang, Y., Guo, J., Tang, Y., and Wu, E. Vision gnn: An image is worth graph of nodes. arXiv preprint arXiv:2206.00272, 2022a.
+Han, X., Jiang, Z., Liu, N., and Hu, X. G-mixup: Graph data augmentation for graph classification. arXiv preprint arXiv:2202.07179, 2022b. +Hendrycks, D. and Gimpel, K. Gaussian error linear units (gelus). arXiv preprint arXiv:1606.08415, 2016. +Hochreiter, S. and Schmidhuber, J. Long short-term memory. Neural computation, 9(8):1735-1780, 1997. +Hu, W., Liu, B., Gomes, J., Zitnik, M., Liang, P., Pande, V., and Leskovec, J. Strategies for pre-training graph neural networks. arXiv preprint arXiv:1905.12265, 2019. +Hu, W., Fey, M., Zitnik, M., Dong, Y., Ren, H., Liu, B., Catasta, M., and Leskovec, J. Open graph benchmark: Datasets for machine learning on graphs. Advances in neural information processing systems, 33:22118-22133, 2020. +Irwin, J. J., Sterling, T., Mysinger, M. M., Bolstad, E. S., and Coleman, R. G. Zinc: a free tool to discover chemistry for biology. Journal of chemical information and modeling, 52(7):1757-1768, 2012. +Karypis, G. and Kumar, V. A fast and high quality multilevel scheme for partitioning irregular graphs. SIAM Journal on scientific Computing, 20(1):359-392, 1998. +Kingma, D. P. and Ba, J. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014. +Kipf, T. N. and Welling, M. Semi-supervised classification with graph convolutional networks. In International Conference on Learning Representations (ICLR), 2017. +Kreuzer, D., Beaini, D., Hamilton, W., Létourneau, V., and Tossou, P. Rethinking graph transformers with spectral attention. Advances in Neural Information Processing Systems, 34:21618-21629, 2021. +Kuang, W., WANG, Z., Li, Y., Wei, Z., and Ding, B. Coarformer: Transformer for large graph via graph coarsening, 2022. URL https://openreview.net/forum?id=fkjO_FKVzw. + +Kwon, J., Kim, J., Park, H., and Choi, I. K. Asam: Adaptive sharpness-aware minimization for scale-invariant learning of deep neural networks. arXiv preprint arXiv:2102.11600, 2021. +Landrum, G. et al. Rdkit: Open-source cheminformatics. 
2006.
Li, P., Wang, Y., Wang, H., and Leskovec, J. Distance encoding: Design provably more powerful neural networks for graph representation learning. Advances in Neural Information Processing Systems, 33, 2020a.
Li, X., Zhou, Y., Dvornek, N., Zhang, M., Gao, S., Zhuang, J., Scheinost, D., Staib, L. H., Ventola, P., and Duncan, J. S. Braingnn: Interpretable brain graph neural network for fmri analysis. Medical Image Analysis, 74:102233, 2021.
Li, Y., Qian, B., Zhang, X., and Liu, H. Graph neural network-based diagnosis prediction. Big Data, 8(5):379-390, 2020b.
Lim, D., Robinson, J., Zhao, L., Smidt, T., Sra, S., Maron, H., and Jegelka, S. Sign and basis invariant networks for spectral graph representation learning. arXiv preprint arXiv:2202.13013, 2022.
Liu, H., Dai, Z., So, D., and Le, Q. V. Pay attention to mlps. Advances in Neural Information Processing Systems, 34:9204-9215, 2021.
Loukas, A. What graph neural networks cannot learn: depth vs width. In International Conference on Learning Representations, 2020.
Maron, H., Ben-Hamu, H., Shamir, N., and Lipman, Y. Invariant and equivariant graph networks. arXiv preprint arXiv:1812.09902, 2018.
Maron, H., Ben-Hamu, H., Serviansky, H., and Lipman, Y. Provably powerful graph networks. arXiv preprint arXiv:1905.11136, 2019.
Mialon, G., Chen, D., Selosse, M., and Mairal, J. Graphit: Encoding graph structure in transformers. arXiv preprint arXiv:2106.05667, 2021.
Min, E., Chen, R., Bian, Y., Xu, T., Zhao, K., Huang, W., Zhao, P., Huang, J., Ananiadou, S., and Rong, Y. Transformer for graphs: An overview from architecture perspective. arXiv preprint arXiv:2202.08455, 2022.
Monti, F., Bronstein, M., and Bresson, X. Geometric matrix completion with recurrent multi-graph neural networks. Advances in neural information processing systems, 30, 2017.
Morris, C., Ritzert, M., Fey, M., Hamilton, W. L., Lenssen, J. E., Rattan, G., and Grohe, M.
Weisfeiler and leman go neural: Higher-order graph neural networks. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 33, pp. 4602-4609, 2019.
Murphy, R., Srinivasan, B., Rao, V., and Ribeiro, B. Relational pooling for graph representations. In International Conference on Machine Learning, pp. 4663-4673. PMLR, 2019.
Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al. Pytorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems, 32, 2019.
Rampášek, L., Galkin, M., Dwivedi, V. P., Luu, A. T., Wolf, G., and Beaini, D. Recipe for a general, powerful, scalable graph transformer. arXiv preprint arXiv:2205.12454, 2022.
Rong, Y., Huang, W., Xu, T., and Huang, J. Dropedge: Towards deep graph convolutional networks on node classification. arXiv preprint arXiv:1907.10903, 2019.
Schlichtkrull, M., Kipf, T. N., Bloem, P., Berg, R. v. d., Titov, I., and Welling, M. Modeling relational data with graph convolutional networks. In European semantic web conference, pp. 593-607. Springer, 2018.
Shirzad, H., Velingker, A., Venkatachalam, B., Sutherland, D. J., and Sinop, A. K. Exphormer: Sparse transformers for graphs. arXiv preprint arXiv:2303.06147, 2023.
Singh, S., Chaudhary, K., Dhanda, S. K., Bhalla, S., Usmani, S. S., Gautam, A., Tuknait, A., Agrawal, P., Mathur, D., and Raghava, G. P. Satpdb: a database of structurally annotated therapeutic peptides. Nucleic acids research, 44(D1):D1119-D1126, 2016.
Stokes, J. M., Yang, K., Swanson, K., Jin, W., Cubillos-Ruiz, A., Donghia, N. M., MacNair, C. R., French, S., Carfrae, L. A., Bloom-Ackermann, Z., et al. A deep learning approach to antibiotic discovery. Cell, 180(4):688-702, 2020.
Szklarczyk, D., Gable, A. L., Lyon, D., Junge, A., Wyder, S., Huerta-Cepas, J., Simonovic, M., Doncheva, N. T., Morris, J. H., Bork, P., et al.
String v11: protein-protein association networks with increased coverage, supporting functional discovery in genome-wide experimental datasets. Nucleic acids research, 47(D1):D607-D613, 2019.
Tolstikhin, I. O., Houlsby, N., Kolesnikov, A., Beyer, L., Zhai, X., Unterthiner, T., Yung, J., Steiner, A., Keysers, D., Uszkoreit, J., et al. Mlp-mixer: An all-mlp architecture for vision. Advances in Neural Information Processing Systems, 34:24261-24272, 2021.
Topping, J., Di Giovanni, F., Chamberlain, B. P., Dong, X., and Bronstein, M. M. Understanding over-squashing and bottlenecks on graphs via curvature. In The Tenth International Conference on Learning Representations, ICLR 2022, Virtual Event, April 25-29, 2022, 2022.
Touvron, H., Bojanowski, P., Caron, M., Cord, M., El-Nouby, A., Grave, E., Izacard, G., Joulin, A., Synnaeve, G., Verbeek, J., et al. Resmlp: Feedforward networks for image classification with data-efficient training. arXiv preprint arXiv:2105.03404, 2021.
Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A. N., Kaiser, L., and Polosukhin, I. Attention is all you need. Advances in neural information processing systems, 30, 2017.
Veličković, P., Cucurull, G., Casanova, A., Romero, A., Lio, P., and Bengio, Y. Graph attention networks. In International Conference on Learning Representations, 2018.
Wang, Z., Jiang, W., Zhu, Y. M., Yuan, L., Song, Y., and Liu, W. Dynamixer: a vision mlp architecture with dynamic mixing. In International Conference on Machine Learning, pp. 22691-22701. PMLR, 2022.
Weisfeiler, B. and Leman, A. The reduction of a graph to canonical form and the algebra which appears therein. NTI Series, 2(9):12-16, 1968.
Wu, L., Chen, Y., Shen, K., Guo, X., Gao, H., Li, S., Pei, J., and Long, B. Graph neural networks for natural language processing: A survey. arXiv preprint arXiv:2106.06090, 2021a.
Wu, Z., Jain, P., Wright, M., Mirhoseini, A., Gonzalez, J. E., and Stoica, I.
Representing long-range context for graph neural networks with global attention. Advances in Neural Information Processing Systems, 34:13266-13279, 2021b.

Xu, K., Hu, W., Leskovec, J., and Jegelka, S. How powerful are graph neural networks? In International Conference on Learning Representations, 2019.

Ying, C., Cai, T., Luo, S., Zheng, S., Ke, G., He, D., Shen, Y., and Liu, T.-Y. Do transformers really perform badly for graph representation? Advances in Neural Information Processing Systems, 34:28877-28888, 2021.

Yu, W., Luo, M., Zhou, P., Si, C., Zhou, Y., Wang, X., Feng, J., and Yan, S. Metaformer is actually what you need for vision. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 10819-10829, 2022.

Zhang, H., Cisse, M., Dauphin, Y. N., and Lopez-Paz, D. mixup: Beyond empirical risk minimization. arXiv preprint arXiv:1710.09412, 2017.

Zhang, M. and Li, P. Nested graph neural networks. Advances in Neural Information Processing Systems, 34:15734-15747, 2021.

Zhang, Z., Liu, Q., Hu, Q., and Lee, C.-K. Hierarchical graph transformer with adaptive node sampling. arXiv preprint arXiv:2210.03930, 2022.

Zhao, L., Jin, W., Akoglu, L., and Shah, N. From stars to subgraphs: Uplifting any GNN with local structure awareness. In International Conference on Learning Representations, 2021.

Zhao, T., Liu, Y., Neves, L., Woodford, O. J., Jiang, M., and Shah, N. Data augmentation for graph neural networks. CoRR, abs/2006.06830, 2020.

Zheng, D., Song, X., Ma, C., Tan, Z., Ye, Z., Dong, J., Xiong, H., Zhang, Z., and Karypis, G. Dgl-ke: Training knowledge graph embeddings at scale. In Proceedings of the 43rd International ACM SIGIR Conference on Research and Development in Information Retrieval, pp. 739–748, 2020.

Zopf, M. 1-wl expressiveness is (almost) all you need. arXiv preprint arXiv:2202.10156, 2022.

# A. Related Work

Table 5. Comparison of different hierarchical graph models.
| Model | GNN | Transformer | Graph Coarsening | Local Info. | Global Info. |
| --- | --- | --- | --- | --- | --- |
| Coarformer (Kuang et al., 2022) | ✓ | ✓ | ✓ (non-overlap, static) | GNN on original graph | MHA on coarsened graph |
| Exphormer (Shirzad et al., 2023) | ✓ | ✓ | ✗ | GNN on original graph | MHA on expander graph |
| ANS-GT (Zhang et al., 2022) | ✗ | ✓ | ✓ (non-overlap, static) | adaptive node sampling strategy | sampled nodes from the coarsened graph |
| NAGphormer (Chen et al., 2022b) | ✗ | ✓ | ✗ | — | MHA on multi-hop neighbourhoods |
| Graph MLP-Mixer (Ours) | ✓ | ✗ | ✓ (overlap, dynamic) | GNN on graph patches | token mixer across patches |
We briefly review the hierarchical graph models (Kuang et al., 2022; Shirzad et al., 2023; Zhang et al., 2022; Chen et al., 2022b) and highlight the main differences among them.

Coarformer (Kuang et al., 2022) combines MPNNs and Transformers, using a GNN-based module for local information and a Transformer-based module for global information. Exphormer (Shirzad et al., 2023) also employs MPNN+Transformer, applying the GNN and Transformer modules to the original graph and an expander graph, respectively. ANS-GT (Zhang et al., 2022) introduces a node-sampling-based GT with hierarchical attention and graph coarsening. NAGphormer (Chen et al., 2022b) treats each node as a token sequence and aggregates multi-hop information using an attention-based readout.

Main differences: 1) GNN/Transformer module. Coarformer, Exphormer, SAT and ours use a hybrid MPNN+Transformer architecture, while ANS-GT and NAGphormer rely solely on Transformers. However, there are notable differences between these approaches and ours: we do not use a Transformer but rather an MLP as our backbone. Besides, our MPNN operates on small graph patches instead of the entire graph, unlike Coarformer and Exphormer. Furthermore, SAT and our architecture are sequential, while Coarformer and Exphormer combine MP-GNNs and GT in parallel.

2) Graph coarsening module. Coarformer, ANS-GT and SAT use a graph coarsening mechanism. These methods perform graph coarsening as a pre-processing step and generate static, non-overlapping graph patches. In contrast, we perform graph coarsening with a stochastic version of METIS on-the-fly, generating dynamic, overlapping graph patches.

3) Graph embedding module. The ways of capturing local and global information differ, as stated in the reviews above and summarized in Table 5.
In summary, although these hierarchical graph models share similarities with ours, the major difference is that we do not use a Graph Transformer (GT) as the backbone architecture but an alternative architecture that generalizes ViT/MLP-Mixer to graphs. We believe moving from GT to Graph ViT/MLP-Mixer as a new backbone/high-level architecture has the potential to open a new line of work for GNN design (by enhancing the building blocks of the proposed architecture such as graph clustering, graph embedding, mixer layer, positional encoding, (pre-)training, etc).

# B. Datasets Description

We evaluate our Graph MLP-Mixer on a wide range of graph benchmarks. Summary statistics of the datasets are given in Table 6.

CSL (Murphy et al., 2019) is a synthetic dataset to test the expressivity of GNNs. CSL has 150 graphs divided into 10 isomorphism classes. Each CSL graph is a 4-regular graph whose edges form a cycle with additional skip-links between nodes. The task is to classify each graph into its isomorphism class.

EXP (Abboud et al., 2020) contains 600 pairs of graphs that the 1-WL and 2-WL tests fail to distinguish. The goal is to map these graphs into two classes.

SR25 (Balcilar et al., 2021) has 15 strongly regular graphs (which the 3-WL test fails to distinguish) with 25 nodes each. SR25 is translated into a 15-way classification problem with the goal of mapping each graph into a different class.

ZINC (Dwivedi et al., 2020) is a subset (12K) of molecular graphs (250K) from a free database of commercially-available compounds (Irwin et al., 2012). These molecular graphs have between 9 and 37 nodes. Each node represents a heavy atom (28 possible atom types) and each edge represents a bond (3 possible types). The task is to regress a molecular property known as the constrained solubility. The dataset comes with a predefined 10K/1K/1K train/validation/test split.
MNIST and CIFAR10 (Dwivedi et al., 2020) are derived from the classical image classification datasets by constructing an 8-nearest-neighbor graph of SLIC superpixels for each image. The resulting graphs have 40-75 nodes for MNIST and 85-150 nodes for CIFAR10. The 10-class classification tasks and standard dataset splits follow the original image

Table 6. Summary statistics of datasets used in this study.
| Dataset | #Graphs | #Nodes | Avg. #Nodes | Avg. #Edges | Task | Metric |
| --- | --- | --- | --- | --- | --- | --- |
| CSL | 150 | 41 | 41 | 164 | 10-class classif. | Accuracy |
| EXP | 1,200 | 32-64 | 44.4 | 110.2 | 2-class classif. | Accuracy |
| SR25 | 15 | 25 | 25 | 300 | 15-class classif. | Accuracy |
| ZINC | 12,000 | 9-37 | 23.2 | 24.9 | regression | MAE |
| MNIST | 70,000 | 40-75 | 70.6 | 684.4 | 10-class classif. | Accuracy |
| CIFAR10 | 60,000 | 85-150 | 117.6 | 1129.7 | 10-class classif. | Accuracy |
| MolTOX21 | 7,831 | 1-132 | 18.57 | 38.6 | 12-task classif. | ROCAUC |
| MolHIV | 41,127 | 2-222 | 25.5 | 54.9 | binary classif. | ROCAUC |
| Peptides-func | 15,535 | 8-444 | 150.9 | 307.3 | 10-class classif. | Average Precision (AP) |
| Peptides-struct | 15,535 | 8-444 | 150.9 | 307.3 | regression | MAE |
| TreeNeighbourMatch (r=2) | 96 | 7 | 7 | 6 | 4-class classif. | Accuracy |
| TreeNeighbourMatch (r=3) | 32,000 | 15 | 15 | 14 | 8-class classif. | Accuracy |
| TreeNeighbourMatch (r=4) | 64,000 | 31 | 31 | 30 | 16-class classif. | Accuracy |
| TreeNeighbourMatch (r=5) | 128,000 | 63 | 63 | 62 | 32-class classif. | Accuracy |
| TreeNeighbourMatch (r=6) | 256,000 | 127 | 127 | 126 | 64-class classif. | Accuracy |
| TreeNeighbourMatch (r=7) | 512,000 | 255 | 255 | 254 | 128-class classif. | Accuracy |
| TreeNeighbourMatch (r=8) | 640,000 | 511 | 511 | 510 | 256-class classif. | Accuracy |
classification datasets, i.e., for MNIST 55K/5K/10K and for CIFAR10 45K/5K/10K train/validation/test graphs. These datasets are sanity-checks, as we expect most GNNs to perform close to $100\%$ for MNIST and well enough for CIFAR10.

MolTOX21 and MolHIV (Hu et al., 2020) are molecular property prediction datasets adopted from MoleculeNet (Szklarczyk et al., 2019). All the molecules are pre-processed using RDKit (Landrum et al., 2006). Each graph represents a molecule, where nodes are atoms and edges are chemical bonds. Input node features are 9-dimensional, containing atomic number and chirality, as well as additional atom features such as formal charge and whether the atom is in a ring. The datasets come with predefined scaffold splits based on their two-dimensional structural frameworks, i.e. for MolTOX21 6K/0.78K/0.78K and for MolHIV 32K/4K/4K train/validation/test.

Peptides-func and Peptides-struct (Dwivedi et al., 2022) are derived from 15,535 peptides with a total of 2.3 million nodes retrieved from SATPdb (Singh et al., 2016). Both datasets use the same set of graphs but differ in their prediction tasks. These graphs are constructed in such a way that long-range interaction (LRI) reasoning is required to achieve strong performance in a given task. Concretely, the graphs are larger: 150.94 nodes per graph and a graph diameter of 56.99 on average. Thus, they are better suited to benchmarking graph Transformers or other expressive GNNs intended to capture LRI.

TreeNeighbourMatch is a synthetic dataset proposed by Alon & Yahav (2020) to highlight the inherent problem of over-squashing in GNNs. It is designed to simulate an exponentially-growing receptive field while allowing us to control the problem radius $r$, and thus the intensity of over-squashing. Specifically, each graph is a binary tree of depth $r$ (a.k.a. the problem radius).
The goal is to predict a label for the target node, where the correct answer lies in one of the leaf nodes. The TreeNeighbourMatch problem therefore requires information to be propagated from all leaf nodes to the target node before predicting the label, causing over-squashing at the target node.

Distributions of the graph sizes. We plot the distributions of the graph sizes (i.e., the number of nodes in each data sample) of these datasets in Figure 3.

Patch size and diameter. We set the number of patches to 32 by default. Summary statistics of graph patches are presented in Table 7.

# C. Experiment Details

We implement our model using PyTorch (Paszke et al., 2019) and PyG (Fey & Lenssen, 2019). We ran our experiments on NVIDIA RTX A5000 GPUs. We run each experiment with 4 different seeds, reporting the averaged results at the epoch achieving the best validation metric. For optimization, we use the Adam optimizer (Kingma & Ba, 2014) with the default settings of $\beta_{1} = 0.9$, $\beta_{2} = 0.999$, and $\epsilon = 10^{-8}$. We observe large fluctuations in the validation metric with the common

![](images/3128b94e88b749a3b6e59b43f882feefbbe77412cbc113d5b9b74ba559c0d329.jpg)
(a) ZINC

![](images/2e7b2c492f3613cd014233c28f249781bbe392dc213cc910ff3d42154a1c1b0a.jpg)
(b) MNIST

![](images/4e236d3402ff59513f9d679bed3cd6cbfdac7c074dbd6acff090074d4a0c8946.jpg)
(c) CIFAR10

![](images/f761e19897601f5ad20a4ee9262913d4f4b90b9f9a54cab7784e385a819c45ce.jpg)
(d) MolTOX21

![](images/0a737a578cf9e70c50d58b3fbc9bae1065cfb89f7c9aaf2cd73b53d18dbd5c0a.jpg)
(e) MolHIV

![](images/d29ab51576ce07a56d3b501b0e36c34d2246e2ed230009acd0698aca45e08581.jpg)
(f) Peptides-func/struct
Figure 3. Distributions of the graph sizes.

Table 7. Summary statistics of graph patches.
| Dataset | #Patch | #Node Mean | #Node Min | #Node Max | Diameter Mean | Diameter Min | Diameter Max |
| --- | --- | --- | --- | --- | --- | --- | --- |
| CSL | 32 | 5.80 | 5 | 8 | 2.28 | 2 | 3 |
| EXP | 32 | 4.07 | 2 | 11 | 2.31 | 1 | 5 |
| SR25 | 32 | 13.00 | 13 | 13 | 2.00 | 2 | 2 |
| ZINC | 32 | 3.15 | 2 | 7 | 1.82 | 1 | 3 |
| MNIST | 32 | 14.36 | 9 | 28 | 2.85 | 2 | 5 |
| CIFAR10 | 32 | 17.20 | 10 | 35 | 3.07 | 2 | 7 |
| MolTOX21 | 32 | 3.15 | 1 | 10 | 1.80 | 0 | 6 |
| MolHIV | 32 | 3.27 | 1 | 13 | 1.87 | 0 | 8 |
| Peptides-func | 32 | 7.08 | 1 | 20 | 4.15 | 0 | 14 |
| Peptides-struct | 32 | 7.08 | 1 | 20 | 4.15 | 0 | 14 |
| TreeNeighbourMatch (r=2) | 8 | 1.86 | 1 | 3 | 0.86 | 0 | 2 |
| TreeNeighbourMatch (r=3) | 32 | 1.93 | 1 | 3 | 0.93 | 0 | 2 |
| TreeNeighbourMatch (r=4) | 32 | 1.97 | 1 | 3 | 0.97 | 0 | 2 |
| TreeNeighbourMatch (r=5) | 32 | 3.28 | 1 | 5 | 2.25 | 0 | 3 |
| TreeNeighbourMatch (r=6) | 32 | 5.34 | 3 | 8 | 3.31 | 2 | 5 |
| TreeNeighbourMatch (r=7) | 32 | 9.19 | 7 | 14 | 4.33 | 4 | 5 |
| TreeNeighbourMatch (r=8) | 32 | 17.03 | 15 | 23 | 6.17 | 6 | 8 |
Adam optimizer on the OGB datasets (i.e., MolHIV and MolTOX21), as also observed in (Frasca et al., 2022; Zhang & Li, 2021; Chen et al., 2019). We therefore follow the practice of SUN (Frasca et al., 2022) and employ the ASAM optimizer (Kwon et al., 2021) to reduce such fluctuations. We use the same hyperparameters, with a batch size of 32 and a learning rate of 0.01, without further tuning.

Simulation Datasets. For CSL and EXP, we run 5-fold cross validation with stratified sampling to ensure the class distribution remains the same across splits (Dwivedi et al., 2020; Zhang & Li, 2021). For the SR25 dataset, we follow the evaluation process in (Zhao et al., 2021; Feng et al., 2022): we directly train and validate the model on the whole dataset and report the best performance.

Real-World Datasets. For the benchmarking datasets from Dwivedi et al. (2020), we followed the most commonly used parameter budgets: up to 500k parameters for ZINC. For MolTOX21 and MolHIV from OGB (Hu et al., 2020), there is no upper limit on the number of parameters. For Peptides-func and Peptides-struct from LRGB (Dwivedi et al., 2022), we followed the parameter budget of $\sim 500\mathrm{k}$. All evaluated real-world benchmarks define a standard train/validation/test split.

Baselines. We use GCN (Kipf & Welling, 2017), GatedGCN (Bresson & Laurent, 2017), GINE (Hu et al., 2019) and Graph Transformer (Dwivedi & Bresson, 2021) as our baseline models, which also serve as the base patch encoders of Graph MLP-Mixer. The hidden size is set to 128 and the number of layers to 4 by default. For the TreeNeighbourMatch datasets, we follow the experimental protocol introduced in (Alon & Yahav, 2020): for a TreeNeighbourMatch dataset with problem radius $r = \text{depth}$, we implement a network with $r + 1$ graph layers to allow an additional nonlinearity after the information from the leaves reaches the target node.

Graph MLP-Mixer.
The hidden size is set to 128, and the number of GNN layers and Mixer layers is set to 4. For the LRGB datasets, however, we reduce the number of Mixer layers to 2 to fulfill the parameter budget of $\sim 500\mathrm{k}$.

SOTA models. In Table 3 and Table 17, results are taken directly from the literature when available and otherwise reproduced using the authors' official code. To enable a fair comparison of speed/memory complexity (Table 17), we set the batch size to 128 for all the SOTA models and ours, and halve the batch size on out-of-memory errors until the model and batch data fit into memory. Besides, all experiments are run on the same machine.

Positional Encodings. As the most appropriate choice of node positional encoding (NodePE) is dataset- and task-dependent, we follow the practice of Rampášek et al. (2022); Dwivedi et al. (2022), see Table 8. We augment all the base models (GCN, GatedGCN, GINE and GraphTrans) in Table 2 with the same type of NodePE as Graph MLP-Mixer to ensure a fair comparison.

Table 8. Summary statistics of positional encoding (PE).
| PE | CSL | EXP | SR25 | ZINC | MNIST | CIFAR10 | MolTOX21 | MolHIV | Peptides-func | Peptides-struct |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| NodePE | RWSE-8 | RWSE-8 | LapPE-8 | RWSE-20 | LapPE-8 | LapPE-8 | - | - | RWSE-16 | RWSE-16 |
| PatchPE | RWSE-8 | RWSE-8 | RWSE-8 | RWSE-8 | RWSE-8 | RWSE-8 | - | - | RWSE-8 | RWSE-8 |
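The RWSE-$k$ encodings above assign each node the return probabilities of $1, \dots, k$ random-walk steps, i.e. the diagonals of powers of the random-walk matrix $M = D^{-1}A$. A minimal NumPy sketch (our own illustration, not the paper's implementation; the function name is ours):

```python
import numpy as np

def rwse(A, k=8):
    """Random-walk structural encoding: per node, the return probabilities
    diag(M^1), ..., diag(M^k) with M = D^{-1} A the random-walk matrix."""
    deg = A.sum(axis=1, keepdims=True)
    M = A / np.maximum(deg, 1)          # random-walk transition matrix
    P, out = np.eye(A.shape[0]), []
    for _ in range(k):
        P = P @ M                       # M^t after t iterations
        out.append(np.diag(P))          # return probabilities after t steps
    return np.stack(out, axis=1)        # shape (num_nodes, k)

# 4-cycle: odd-length walks never return, a 2-step walk returns w.p. 1/2
A = np.array([[0, 1, 0, 1], [1, 0, 1, 0], [0, 1, 0, 1], [1, 0, 1, 0]], float)
pe = rwse(A, k=4)
```

LapPE-$k$ would instead use the first $k$ non-trivial Laplacian eigenvectors; both yield one fixed-size vector per node that is appended to the node features.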
# D. Studies on Patch Extraction Module

# D.1. Effect of Patch Extraction

Table 9. Effect of patch extraction: $\times$ means no patch extraction and $\checkmark$ means patch extraction is used.
| Model | Patch Extraction | ZINC (MAE↓) | Peptides-func (AP↑) |
| --- | --- | --- | --- |
| GCN-MLP-Mixer | ✗ | 0.2495 ± 0.0040 | 0.6341 ± 0.0139 |
| GCN-MLP-Mixer | ✓ | 0.1347 ± 0.0020 | 0.6832 ± 0.0061 |
| GatedGCN-MLP-Mixer | ✗ | 0.2521 ± 0.0084 | 0.6230 ± 0.0110 |
| GatedGCN-MLP-Mixer | ✓ | 0.1244 ± 0.0053 | 0.6932 ± 0.0017 |
| GINE-MLP-Mixer | ✗ | 0.2558 ± 0.0059 | 0.6350 ± 0.0038 |
| GINE-MLP-Mixer | ✓ | 0.0733 ± 0.0014 | 0.6970 ± 0.0080 |
| GraphTrans-MLP-Mixer | ✗ | 0.2538 ± 0.0067 | 0.6224 ± 0.0112 |
| GraphTrans-MLP-Mixer | ✓ | 0.0773 ± 0.0030 | 0.6858 ± 0.0062 |
We conducted an experiment where we ran the Graph MLP-Mixer without the patch extraction process, treating each individual node as a patch. The results are presented in Table 9. The patch extraction process is critical: we believe that patch extraction, which includes the METIS partition and the 1-hop extension, helps to capture important local information about the graph structure.

Table 10. Comparison of METIS and random graph partitioning (MAE ↓).
| Model | ZINC: METIS | ZINC: Random | Peptides-struct: METIS | Peptides-struct: Random |
| --- | --- | --- | --- | --- |
| GCN-MLP-Mixer | 0.1347 ± 0.0020 | 0.1435 ± 0.0122 | 0.2486 ± 0.0041 | 0.2565 ± 0.0031 |
| GatedGCN-MLP-Mixer | 0.1244 ± 0.0053 | 0.1284 ± 0.0074 | 0.2508 ± 0.0007 | 0.2539 ± 0.0012 |
| GINE-MLP-Mixer | 0.0733 ± 0.0014 | 0.0708 ± 0.0020 | 0.2494 ± 0.0007 | 0.2559 ± 0.0012 |
| GraphTrans-MLP-Mixer | 0.0773 ± 0.0030 | 0.0767 ± 0.0019 | 0.2480 ± 0.0013 | 0.2574 ± 0.0025 |
# D.2. Effect of Graph Partition Algorithm

Graph partitioning algorithms have been studied for decades (Buluc et al., 2016) given their importance in identifying meaningful clusters. Mathematically, graph partitioning is known to be NP-hard (Chung, 1997), so approximations are required. A graph clustering algorithm with one of the best trade-offs between accuracy and speed is METIS (Karypis & Kumar, 1998), which partitions a graph into a pre-defined number of clusters/patches such that the number of within-cluster links is much higher than the number of between-cluster links, thereby capturing good community structure. For these properties, we select METIS as our graph patch extraction algorithm.

We provide an ablation study of how much benefit METIS provides over random graph partitioning. For random graph partitioning, nodes are randomly assigned to a pre-defined number of patches. We apply data augmentation as described in Section 4.6 to both algorithms. Table 10 shows that using METIS as the graph partition algorithm consistently gives better performance than random node partitioning, especially on graphs with more nodes and edges (such as Peptides-func), which matches our intuition that the nodes and edges composing a patch should share similar semantics. Nevertheless, it is interesting to see that random graph partitioning still achieves reasonable results, which shows that the performance of the model is not solely supported by the quality of the patches.

# D.3. Effect of Number of Patches

![](images/7bca99db3b6df31ff18592196dbb6847610dd24f49c5acab002c861e4c773afe.jpg)
Figure 4. Effect of the number of patches.

![](images/38d85eb542bd4e0145519abab4dc8fed41277c6d4146d1167d523a42f931578d.jpg)

We observe in Figure 4 that, when increasing the number of graph patches (#Patch), performance first increases and then flattens out (with small fluctuations) beyond #Patch=24. We set the number of patches to 32 by default.
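The random-partition baseline of Section D.2 simply assigns each node to one of a pre-defined number of patches uniformly at random; a minimal sketch (illustrative names of our choosing, not the paper's code):

```python
import random

def random_partition(num_nodes, num_patches, seed=0):
    """Baseline partitioner: assign each node to a patch uniformly at
    random (METIS instead maximizes within-patch over between-patch edges)."""
    rng = random.Random(seed)
    patches = [[] for _ in range(num_patches)]
    for v in range(num_nodes):
        patches[rng.randrange(num_patches)].append(v)
    return patches

patches = random_partition(num_nodes=100, num_patches=8)
```

Since this ignores the edge structure entirely, any gap to METIS in Table 10 isolates the value of structure-aware patches.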
# D.4. Effect of Patch Overlapping

In Figure 5, we observe a clear performance increase when graph patches overlap with each other (0-hop vs 1-hop), which is consistent with our intuition that extracting non-overlapping patches implies losing important edge information. We further expand graph patches to their $k$-hop neighbourhood. Performance first increases and then flattens out or begins to decrease at $k = 2$ for ZINC and $k = 3$ for Peptides-func. We set $k = 1$ by default.

![](images/e1a2889e03f078ce23bff6a3c0528714d3d5aa94781ce210ff693586268585ad.jpg)
Figure 5. Effect of patch overlapping with $k$-hop extension.

![](images/863a62583bac0274dec17d05f9e042bb8d8a6167e0197d29ccfa95578c82e56e.jpg)

Table 11. Accuracy on EXP with different patch sizes $P$, averaged over 4 runs with 4 different seeds.
| Model | P=2 | P=4 | P=8 | P=16 | P=32 |
| --- | --- | --- | --- | --- | --- |
| GCN-MLP-Mixer | 57.54 ± 3.87 | 99.44 ± 0.59 | 99.69 ± 0.98 | 100.00 ± 0.00 | 100.00 ± 0.00 |
| GatedGCN-MLP-Mixer | 67.65 ± 2.01 | 99.77 ± 0.37 | 100.00 ± 0.00 | 100.00 ± 0.00 | 100.00 ± 0.00 |
| GINE-MLP-Mixer | 57.75 ± 3.80 | 99.58 ± 0.45 | 100.00 ± 0.00 | 100.00 ± 0.00 | 100.00 ± 0.00 |
| GraphTrans-MLP-Mixer | 73.79 ± 1.52 | 96.77 ± 8.43 | 100.00 ± 0.00 | 100.00 ± 0.00 | 100.00 ± 0.00 |
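The $k$-hop patch expansion studied in Section D.4 grows each METIS patch by its $k$-hop neighbourhood so that neighbouring patches overlap; a minimal sketch (our own illustration; `adj` is an adjacency list of our choosing):

```python
def expand_patch(adj, patch, k=1):
    """Grow a node set by its k-hop neighbourhood so that neighbouring
    patches overlap (k=0 keeps the raw non-overlapping partition).
    `adj` maps each node to an iterable of its neighbours."""
    nodes = set(patch)
    frontier = set(patch)
    for _ in range(k):
        frontier = {u for v in frontier for u in adj[v]} - nodes
        nodes |= frontier
    return nodes

# path graph 0-1-2-3-4, initial patch {0, 1}
adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3]}
```

With $k = 1$ (the default) every cut edge of the partition ends up inside at least one patch, which is why the 0-hop vs 1-hop gap in Figure 5 is the largest.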
# D.5. Patch Size and WL Expressivity

We evaluated the 2-WL and 3-WL expressivity on the benchmark datasets available to us, which indeed have small graphs. As we do not have access to 2-WL/3-WL datasets with larger graph sizes, we studied the impact on performance of a smaller number of patches in Table 11. As expected, expressivity increases as the number of patches increases. Given these experimental results, we also expect that for larger graphs we would need to increase the number of patches to maintain expressivity.

# E. Studies on Positional Encoding

# E.1. Effect of Positional Encoding

Table 12. Effect of positional encoding. We study the effects of node PE and patch PE by removing one of them in turn from our model while keeping the other components unchanged.
| Dataset | Method | GCN-MLP-Mixer | GatedGCN-MLP-Mixer | GINE-MLP-Mixer | GraphTrans-MLP-Mixer |
| --- | --- | --- | --- | --- | --- |
| ZINC | Full | 0.1347 ± 0.0020 | 0.1244 ± 0.0053 | 0.0733 ± 0.0014 | 0.0773 ± 0.0030 |
| ZINC | - NodePE | 0.1944 ± 0.0061 | 0.1775 ± 0.0031 | 0.1225 ± 0.0070 | 0.1393 ± 0.0122 |
| ZINC | - PatchPE | 0.1414 ± 0.0058 | 0.1250 ± 0.0026 | 0.0746 ± 0.0010 | 0.0778 ± 0.0029 |
| ZINC | - Both | 0.2207 ± 0.0072 | 0.1883 ± 0.0096 | 0.1160 ± 0.0023 | 0.1700 ± 0.0064 |
| Peptides-func | Full | 0.6832 ± 0.0061 | 0.6932 ± 0.0017 | 0.6970 ± 0.0080 | 0.6858 ± 0.0062 |
| Peptides-func | - NodePE | 0.6688 ± 0.0039 | 0.6864 ± 0.0080 | 0.6868 ± 0.0034 | 0.6763 ± 0.0030 |
| Peptides-func | - PatchPE | 0.6871 ± 0.0055 | 0.6934 ± 0.0055 | 0.6933 ± 0.0104 | 0.6882 ± 0.0076 |
| Peptides-func | - Both | 0.6760 ± 0.0078 | 0.6847 ± 0.0034 | 0.6756 ± 0.0070 | 0.6783 ± 0.0088 |
It was proved in (Murphy et al., 2019; Loukas, 2020) that a unique and permutation-invariant positional encoding (PE) increases the representation power of any MP-GNN, i.e. PE leads to GNNs strictly more powerful than the 1-WL test. PE is thus important from a theoretical point of view but, unfortunately, theory does not provide any guidance on the choice of PE for a given graph dataset and task. Consequently, the choice of PE is so far arbitrary and is selected by trial-and-error experiments, as in (Rampášek et al., 2022; Lim et al., 2022), to cite the most recent PE-based GNNs.

Our experiments show that PE may or may not be useful, see Table 12. Thus, PE increases the expressivity of GNNs but not necessarily their generalization performance. In other words, it improves fitting but not necessarily generalization.

In conclusion, PE is certainly useful to improve the quality of GNN predictions, given the theory and the growing number of published works on this topic, but more mathematical progress is needed to identify more relevant choices and provide consistent improvements.

# E.2. Positional Encoding and Patch Size

Table 13. Ablation on the combined effects of PE and patch size on ZINC.
| Patch Size | 2 | 4 | 16 | 32 |
| --- | --- | --- | --- | --- |
| Full | 0.0983 ± 0.0042 | 0.1011 ± 0.0103 | 0.0799 ± 0.0037 | 0.0743 ± 0.0049 |
| - Node PE | 0.1589 ± 0.0056 | 0.1414 ± 0.0061 | 0.1307 ± 0.0107 | 0.1154 ± 0.0032 |
| - Patch PE | 0.1081 ± 0.0007 | 0.1076 ± 0.0110 | 0.0840 ± 0.0035 | 0.0744 ± 0.0037 |
| - Both | 0.1677 ± 0.0045 | 0.1532 ± 0.0051 | 0.1284 ± 0.0018 | 0.1187 ± 0.0050 |
Table 14. Ablation on the combined effects of PE and patch size on Peptides-func.
| Patch Size | 2 | 4 | 16 | 32 | 64 |
| --- | --- | --- | --- | --- | --- |
| Full | 0.6578 ± 0.0063 | 0.6675 ± 0.0037 | 0.6855 ± 0.0039 | 0.6939 ± 0.0034 | 0.6944 ± 0.0074 |
| - Node PE | 0.6613 ± 0.0063 | 0.6708 ± 0.0065 | 0.6864 ± 0.0069 | 0.6873 ± 0.0033 | 0.6789 ± 0.0047 |
| - Patch PE | 0.6594 ± 0.0059 | 0.6724 ± 0.0051 | 0.6937 ± 0.0068 | 0.6939 ± 0.0062 | 0.6865 ± 0.0061 |
| - Both | 0.6562 ± 0.0057 | 0.6739 ± 0.0038 | 0.6879 ± 0.0052 | 0.6825 ± 0.0074 | 0.6746 ± 0.0056 |
We run ablation experiments to study the combined effects of patch size and of the model with and without node and patch PE, see Table 13 and Table 14.

Overall, increasing the number of patches improves the results on ZINC and Peptides-func regardless of whether the PEs are used. Node PE clearly helps more than patch PE for both datasets, and using both PEs is generally more helpful for a larger number of patches.

# F. Study on Different Designs of Graph-Based MHA

Table 15. Different designs of graph-based multi-head attention (gMHA) in the transformer layer.
| gMHA | Equation | ZINC (MAE↓) | Peptides-func (AP↑) |
| --- | --- | --- | --- |
| Standard/Full attention (Vaswani et al., 2017) | $\mathrm{softmax}(QK^\top/\sqrt{d})V$ | 0.1784 ± 0.0238 | 0.6778 ± 0.0039 |
| Graph Attention (Dwivedi & Bresson, 2021) | $\mathrm{softmax}(A_P \odot QK^\top/\sqrt{d})V$ | 0.1527 ± 0.0067 | 0.6795 ± 0.0070 |
| Kernel Attention (Mialon et al., 2021) | $\mathrm{softmax}(\mathrm{RW}(A_P) \odot QK^\top/\sqrt{d})V$ | 0.1010 ± 0.0031 | 0.6844 ± 0.0102 |
| Additive Attention (Ying et al., 2021) | $\mathrm{softmax}(QK^\top/\sqrt{d} + \mathrm{LL}(A_P))V$ | 0.1632 ± 0.0063 | 0.6842 ± 0.0057 |
| Hadamard Attention | $(A_P \odot \mathrm{softmax}(QK^\top/\sqrt{d}))V$ | 0.0849 ± 0.0047 | 0.6919 ± 0.0085 |
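The Hadamard variant in the last row computes full softmax attention first and then masks it elementwise with the patch adjacency $A_P$; a minimal single-head NumPy sketch (our own illustration, not the paper's implementation):

```python
import numpy as np

def hadamard_attention(Q, K, V, A_P):
    """(A_P ⊙ softmax(QK^T/√d)) V: softmax over all patch pairs,
    then an elementwise (Hadamard) mask by the patch adjacency A_P."""
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)
    scores -= scores.max(axis=-1, keepdims=True)   # numerical stability
    attn = np.exp(scores)
    attn /= attn.sum(axis=-1, keepdims=True)       # full softmax
    return (A_P * attn) @ V                        # masked afterwards

# zero scores -> uniform softmax, so the mask alone decides the output
Q = K = np.zeros((3, 2))
V = np.arange(6, dtype=float).reshape(3, 2)
out_full = hadamard_attention(Q, K, V, np.ones((3, 3)))  # no masking
out_diag = hadamard_attention(Q, K, V, np.eye(3))        # self-loops only
```

Note that, unlike the Graph Attention row, the mask is applied after the softmax, so the normalization still runs over all patch pairs.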
We conducted experiments on the ZINC and Peptides-func datasets to explore five different versions of Graph ViT. The versions primarily differ in the attention function used. The attention functions we considered are as follows: (1) Standard/Full Attention: based on the original attention mechanism introduced by Vaswani et al. (2017). (2) Graph Attention: derived from the Graph Transformer (GT) model proposed by Dwivedi & Bresson (2021). (3) Kernel Attention: based on the kernel attention mechanism proposed by Mialon et al. (2021) in the GraphiT model. (4) Additive Attention: derived from the Graphormer model proposed by Ying et al. (2021). (5) Hadamard Attention: the default attention function in our Graph ViT model. Results are presented in Table 15.

The experiments clearly demonstrate that the choice of self-attention function is important. Hadamard attention provides the best performance among all attention functions for ZINC (0.0849) and for Peptides-func (0.6919).

Table 16. Effect of data augmentation (DA): $\times$ means no DA and $\checkmark$ means DA is used.
| Model | DA | ZINC: MAE ↓ | ZINC: Time (s/epoch) | Peptides-struct: MAE ↓ | Peptides-struct: Time (s/epoch) |
| --- | --- | --- | --- | --- | --- |
| GCN-MLP-Mixer | ✗ | 0.2537 ± 0.0139 | 5.3603 | 0.2761 ± 0.0041 | 6.8297 |
| GCN-MLP-Mixer | ✓ | 0.1347 ± 0.0020 | 5.6728 | 0.2486 ± 0.0041 | 9.2561 |
| GatedGCN-MLP-Mixer | ✗ | 0.2121 ± 0.0172 | 5.3816 | 0.2776 ± 0.0020 | 7.8609 |
| GatedGCN-MLP-Mixer | ✓ | 0.1244 ± 0.0053 | 5.7786 | 0.2508 ± 0.0007 | 9.5830 |
| GINE-MLP-Mixer | ✗ | 0.1389 ± 0.0171 | 5.3905 | 0.2792 ± 0.0043 | 7.8849 |
| GINE-MLP-Mixer | ✓ | 0.0733 ± 0.0014 | 5.6704 | 0.2494 ± 0.0007 | 8.8136 |
| GraphTrans-MLP-Mixer | ✗ | 0.1665 ± 0.0145 | 6.0039 | 0.2802 ± 0.0030 | 9.0999 |
| GraphTrans-MLP-Mixer | ✓ | 0.0773 ± 0.0030 | 6.1616 | 0.2480 ± 0.0013 | 9.7730 |
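The DA setting compared in Table 16 amounts to re-running the stochastic partitioner every epoch instead of caching its output once; schematically (illustrative names of our choosing, not the paper's code):

```python
import random

def train_patches(num_epochs, partition_fn, augment):
    """Collect the patch sets seen across epochs. With augment=True the
    partitioner is re-run every epoch (DA); otherwise it runs once and
    the same patches are reused for the whole training run."""
    cached = None if augment else partition_fn()
    seen = []
    for _ in range(num_epochs):
        seen.append(partition_fn() if augment else cached)
    return seen

# stand-in for stochastic METIS: a fresh random 2-way node split each call
rng = random.Random(0)
nodes = list(range(10))
def rand_split():
    rng.shuffle(nodes)
    return (sorted(nodes[:5]), sorted(nodes[5:]))

with_da = train_patches(3, rand_split, augment=True)   # fresh patches/epoch
no_da = train_patches(3, rand_split, augment=False)    # identical patches
```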
# G. Effect of Data Augmentation

The proposed data augmentation (DA) re-generates graph patches with METIS at each epoch, while no DA means patches are generated once at the initial epoch and then reused during training. Table 16 presents the results. First, it is clear that DA brings an increase in performance. Second, re-generating graph patches adds only a small amount of training time.

It is worth noting that the technique we use here differs from standard data augmentation techniques such as DropEdge (Rong et al., 2019) and G-Mixup (Han et al., 2022b), which either add slightly modified copies of existing data or generate synthetic data based on existing data. Our approach is different and actually specific to the Graph MLP-Mixer model.

# H. Long Range Graph Benchmark

Table 17. Comparison of our best results from Table 2 with state-of-the-art models on large real-world datasets (Dwivedi et al., 2022).
| Model | #Params | Peptides-func: AP ↑ | Time (s/epoch) | Memory (MB) | Peptides-struct: MAE ↓ | Time (s/epoch) | Memory (MB) |
| --- | --- | --- | --- | --- | --- | --- | --- |
| GCN | 508k | 0.5930 ± 0.0023 | 4.59 | 696 | 0.3496 ± 0.0013 | 4.51 | 686 |
| GINE | 476k | 0.5498 ± 0.0079 | 3.94 | 659 | 0.3547 ± 0.0045 | 3.84 | 658 |
| GatedGCN | 509k | 0.5864 ± 0.0077 | 5.48 | 1,038 | 0.3420 ± 0.0013 | 5.31 | 1,029 |
| GatedGCN + RWSE | 506k | 0.6069 ± 0.0035 | 5.75 | 1,035 | 0.3357 ± 0.0006 | 5.61 | 1,038 |
| Transformer + LapPE | 488k | 0.6326 ± 0.0126 | 9.74 (1.1×) | 6,661 (6.6×) | 0.2529 ± 0.0016 | 9.61 (1.1×) | 6,646 (8.0×) |
| SAN + LapPE (Chen et al., 2022a) | 493k | 0.6384 ± 0.0121 | 80.47 (9.4×) | 12,493 (12.4×) | 0.2683 ± 0.0043 | 79.41 (8.8×) | 12,226 (14.7×) |
| SAN + RWSE (Chen et al., 2022a) | 500k | 0.6439 ± 0.0075 | 68.44 (8.0×) | 19,691 (19.5×) | 0.2545 ± 0.0012 | 70.39 (7.8×) | 12,111 (14.5×) |
| GPS (Rampášek et al., 2022) | 504k | 0.6562 ± 0.0115 | 11.83 (1.4×) | 6,904 (6.8×) | 0.2515 ± 0.0012 | 11.74 (1.3×) | 6,878 (8.3×) |
| GNN-AK+ (Zhao et al., 2021) | 631k | 0.6480 ± 0.0089 | 22.52 (2.6×) | 7,855 (7.8×) | 0.2736 ± 0.0007 | 22.11 (2.5×) | 7,634 (9.2×) |
| SUN (Frasca et al., 2022) | 508k | 0.6730 ± 0.0078 | 376.66 (43.8×) | 18,941 (18.8×) | 0.2498 ± 0.0008 | 384.26 (42.7×) | 17,215 (20.7×) |
| GCN-MLP-Mixer | 329k | 0.6832 ± 0.0061 | 8.48 | 716 | 0.2486 ± 0.0041 | 8.12 | 679 |
| GatedGCN-MLP-Mixer | 527k | 0.6932 ± 0.0017 | 8.96 | 969 | 0.2508 ± 0.0007 | 8.44 | 887 |
| GINE-MLP-Mixer | 397k | 0.6970 ± 0.0080 | 8.59 (1.0×) | 1,010 (1.0×) | 0.2494 ± 0.0007 | 8.51 | 974 |
| GraphTrans-MLP-Mixer | 593k | 0.6858 ± 0.0062 | 9.94 | 975 | 0.2480 ± 0.0013 | 9.00 | 1,048 |
| GCN-ViT | 493k | 0.6855 ± 0.0049 | 8.90 | 628 | 0.2468 ± 0.0015 | 8.55 | 609 |
| GatedGCN-ViT | 692k | 0.6942 ± 0.0075 | 9.07 | 848 | 0.2465 ± 0.0015 | 9.00 (1.0×) | 833 (1.0×) |
| GINE-ViT | 561k | 0.6919 ± 0.0085 | 8.98 | 920 | 0.2449 ± 0.0016 | 8.77 | 902 |
| GraphTrans-ViT | 757k | 0.6876 ± 0.0059 | 9.94 | 975 | 0.2455 ± 0.0027 | 9.58 | 981 |
We have provided additional experiments with the recent Long Range Graph Benchmark (LRGB) (Dwivedi et al., 2022) to demonstrate that Graph MLP-Mixer is able to capture long-range interactions. In LRGB, Peptides-func and Peptides-struct are two graph-level prediction datasets, consisting of 15,535 graphs with a total of 2.3 million nodes. The graphs are one order of magnitude larger than ZINC, MolTOX21 and MolHIV, with 151 nodes per graph on average and a mean graph diameter of 57. As such, they are better suited to evaluating models that capture long-range dependencies, as they contain larger graphs and more data points. The performance is reported in Table 2, Table 3 and Table 17.

We summarize the main results as follows: 1) Graph MLP-Mixer sets new SOTA performance with the best scores of 0.6970 on Peptides-func and 0.2449 on Peptides-struct (Table 3), demonstrating the ability of the model to better capture long-range relationships. 2) Compared with MP-GNNs (Table 2), Graph MLP-Mixer significantly outperforms the base MP-GNNs; we observe an average improvement of 0.056 Average Precision on Peptides-func and an average decrease of 0.028 MAE on Peptides-struct, which verifies its superiority over MP-GNNs in capturing long-range interactions. 3) Graph MLP-Mixer provides significantly better speed/memory complexity compared to Graph Transformers and expressive GNN models such as SAN+LSPE (Chen et al., 2022a) and SUN (Frasca et al., 2022), especially when training with large graphs. For example, SUN gives similar performance to Graph MLP-Mixer, 0.6730 on Peptides-func and 0.2498 on Peptides-struct, but requires 19x the memory and 44x the training time (Table 17).

# I. Mitigating Oversquashing in TreeNeighbourMatch

As discussed in Section 5.3, each example of TreeNeighbourMatch is a binary tree of depth $r$.
The goal is to predict an alphabetical label for the target node, where the correct answer is the label of the leaf node that has the same degree as the target node. Figure 2 shows that standard MP-GNNs (i.e., GCN, GGCN, GAT and GIN) fail to generalize on the dataset from $r = 4$ onwards, whereas our model mitigates over-squashing and generalizes well up to $r = 7$.

To better understand these empirical observations, we first note that, as shown by Alon & Yahav (2020), MP-GNNs are fundamentally limited in their ability to solve larger TreeNeighbourMatch cases, as they 'squash' information about the graph into the target node's embedding, which can hold only a limited amount of information in its floating-point representation. Next, we consider Graph MLP-Mixer from an expressiveness point of view and provide a simple construction to illustrate that it avoids this problem by transmitting long-range information directly, without oversquashing. Concretely, consider each node as one patch. Then, Graph MLP-Mixer's Patch Encoder extracts each node's degree and alphabetical label, storing them into the resulting Patch Embeddings. The next Token Mixer layer then compares each node's degree to the target node's and outputs an indicator variable for whether these degrees are equal, which is transmitted to the next layer. Finally, by combining each node's alphabetical label and this indicator variable, the Fully Connected layer can output the alphabetical label of the node whose degree matches the target node's. In summary, Graph MLP-Mixer can solve TreeNeighbourMatch instances while only requiring each node embedding to capture information about its own patch, not the entire graph, thus avoiding the inherent limitations of MP-GNNs discussed in (Alon & Yahav, 2020).

# J. Complexity Analysis
Complexity Analysis

For each graph $G = (\mathcal{V}, \mathcal{E})$ , with $N = |\mathcal{V}|$ being the number of nodes and $E = |\mathcal{E}|$ being the number of edges, the METIS patch extraction takes $O(E)$ runtime, and outputs graph patches $\{G_1, \dots, G_P\}$ , with $P$ being the pre-defined number of patches. Accordingly, we denote each graph patch as $G_p = (\mathcal{V}_p, \mathcal{E}_p)$ , with $N_p = |\mathcal{V}_p|$ being the number of nodes and $E_p = |\mathcal{E}_p|$ being the number of edges in $G_p$ . After our one-hop overlapping adjustment, the total numbers of nodes and edges over all patches are $N' = \sum_{p} N_p \leq PN$ and $E' = \sum_{p} E_p \leq PE$ , respectively. Assuming the base GNN has $O(N + E)$ runtime and memory complexity, our patch embedding module has $O(N' + E')$ runtime and memory complexity, introducing a constant-factor overhead over the base GNN model. For the mixer layers, the complexity is $O(P)$ as discussed in Section 4.5.

# K. Limitations

The current limitations of the model are as follows.

1) Arbitrary choice of the number of clusters in METIS. The number of patches needs to be selected and differs across datasets. Besides, selecting the same number of patches for graphs of variable sizes makes the network operate at different levels of graph resolution and may affect the overall performance.
2) Empirical experiments on WL-expressivity. Our results on the expressivity of the Graph MLP-Mixer are empirical. A theoretical analysis of the expressivity of the model on graphs with higher WL degrees would be valuable, but such an analysis is non-trivial.
3) Training and pre-training on large-scale datasets of small and large graphs. More experimental results on MalNet (Freitas et al., 2020), PascalVOC-SP, COCO-SP (Dwivedi et al., 2022) and PCQM4Mv2 (Hu et al., 2020) are needed to further test the supervised ability of the model.
Besides, the pre-training capability of Graph MLP-Mixer on large-scale datasets with small graphs and large graphs was also not studied. We leave these tasks as future work.
# A General Representation Learning Framework with Generalization Performance Guarantees

Junbiao Cui $^{1}$ Jianqing Liang $^{1}$ Qin Yue $^{1}$ Jiye Liang $^{1}$

# Abstract

The generalization performance of machine learning methods depends heavily on the quality of data representation.
However, existing research rarely considers representation learning from the perspective of generalization error. In this paper, we prove that the generalization error of a representation learning function can be estimated effectively by solving two convex optimization problems. Based on this result, we propose a general representation learning framework. We then apply the proposed framework to the two most commonly used nonlinear mapping methods, i.e., the kernel based method and the deep neural network (DNN), and thus design a kernel selection method and a DNN boosting framework, respectively. Finally, extensive experiments verify the effectiveness of the proposed methods.

Proceedings of the $40^{th}$ International Conference on Machine Learning, Honolulu, Hawaii, USA. PMLR 202, 2023. Copyright 2023 by the author(s).

# 1. Introduction

Machine learning, especially supervised learning, has achieved significant success in many fields, such as computer vision (Russakovsky et al., 2015; He et al., 2016; Carion et al., 2020; He et al., 2022), speech recognition (Sainath et al., 2013; Gulati et al., 2020; Chiu et al., 2022), and natural language processing (Brown et al., 2020; Qiu et al., 2020; Cuadros et al., 2022). The nature of machine learning is to learn laws from empirical data, usually in the form of a function from input space $\mathcal{X}$ to output space $\mathcal{Y}$ , i.e., $f: \mathcal{X} \to \mathcal{Y}$ .

It is well known that the data representation has a huge effect on the performance of machine learning methods (Goodfellow et al., 2016). Better data representations can make learning tasks easier to solve. In most cases, the function $f$ is nonlinear, so we need to apply a nonlinear mapping $\varphi$ to the input space $\mathcal{X}$ . The process of learning $\varphi$ from data is called representation learning.

In machine learning, the two most commonly used nonlinear mapping methods are the kernel based method and the DNN.
The kernel based method uses a kernel mapping $\varphi_{\text{kernel}}$ to map the original data into a latent space $\mathcal{X}'$ , and then completes learning in that latent space. In general, it is not necessary to give an explicit expression for the kernel mapping $\varphi_{\text{kernel}}$ , which makes the kernel based method highly flexible and universal. Benefiting from this, a large number of kernel based methods have been proposed, such as kernelized principal component analysis (Schölkopf et al., 1998), the kernelized support vector machine (Cortes & Vapnik, 1995), and kernelized $k$ -means (Zhang & Rudnicky, 2002). Among them, the choice of kernel directly determines the generalization performance of the kernel based method. So far, many kernels have been proposed, such as the Gaussian, polynomial, and Laplacian kernels. However, selecting a proper kernel for a learning task is very challenging, because given a kernel it is difficult to estimate the generalization performance of the corresponding latent data representation.

The DNN (LeCun et al., 2015) maps the original data into a latent space $\mathcal{X}'$ through a multi-layer neural network $\varphi_{\mathrm{DNN}}$ , and then adds a linear mapping at the end of $\varphi_{\mathrm{DNN}}$ . In a DNN, the network architecture and the loss are the major factors affecting the generalization performance.

For a learning task, designing a good network architecture usually requires domain knowledge. For example, CNN (LeCun et al., 1989; Krizhevsky et al., 2012; Simonyan & Zisserman, 2015; He et al., 2016), ViT (Dosovitskiy et al., 2021), and MAE (He et al., 2022) are designed for computer vision tasks, while RNN (Rumelhart et al., 1986), LSTM (Hochreiter & Schmidhuber, 1997), Seq2Seq (Sutskever et al., 2014), BERT (Devlin et al., 2019), and GPT-3 (Brown et al., 2020) are designed for natural language processing tasks. How to design good network architectures for specific tasks is beyond the scope of this paper.
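The implicit nature of the kernel mapping described above can be made concrete: squared distances in the latent space $\mathcal{X}'$ follow from kernel evaluations alone, so $\varphi_{\text{kernel}}$ never has to be written down. A minimal sketch (the Gaussian kernel here follows the form used in Section 6; the sample points and `delta` value are illustrative):

```python
import numpy as np

def gaussian_kernel(x, y, delta=-0.5):
    # k(x, y) = phi(x)^T phi(y) = exp(delta * ||x - y||_2^2), delta < 0
    return np.exp(delta * np.sum((x - y) ** 2))

def latent_dist_sq(x, y, k):
    # ||phi(x) - phi(y)||^2 = k(x, x) - 2 k(x, y) + k(y, y),
    # computed without any explicit expression for phi
    return k(x, x) - 2 * k(x, y) + k(y, y)

x = np.array([0.0, 0.0])
y = np.array([1.0, 1.0])
print(latent_dist_sq(x, y, gaussian_kernel))  # 2 - 2*exp(-1) ≈ 1.2642
```

The same identity underlies the kernelized form of the proposed criterion in Section 4, where only kernel matrices appear.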
So far, many losses have been proposed to train DNNs, which can be roughly divided into two categories: supervised losses and unsupervised regularization losses.

Classical supervised losses include the cross entropy loss and the mean squared error. These losses are computed on labeled data and are surrogate functions of the empirical error. When labeled data is insufficient, their ability to estimate the generalization error is limited.

The $p$ -norm family (Hanson & Pratt, 1988; Loshchilov & Hutter, 2019) is a popular class of unsupervised regularization losses. These losses are designed based on domain knowledge or experience, so they lack universality. In addition, it is difficult to give a theoretical guarantee that these losses help reduce the generalization error of a DNN.

In summary, in order to learn a good nonlinear mapping $\varphi$ (to select a proper kernel for the kernel based method and to design a universal loss for the DNN), it is indispensable to design a criterion that estimates the generalization error of the data representation in the latent space $\mathcal{X}'$ and guides representation learning. This paper achieves this goal. Specifically,

- A VC dimension based criterion is proposed to measure the generalization error of a representation learning function. We prove that the criterion can be calculated effectively by solving two convex optimization problems, and we design algorithms to solve the corresponding optimization problems efficiently.
- Based on the criterion, we propose a general representation learning framework, which aims to minimize the generalization error of the representation learning function.
- Based on the framework, we design a kernel selection method for the kernel based method and a boosting framework for the DNN.
- A toy example demonstrates the effectiveness of the proposed criterion for measuring the generalization error of a data representation.
Systematic experiments further verify the effectiveness of the proposed framework for kernel selection and DNN boosting.

# 2. Notations and Preliminaries

# 2.1. Problem Formalization

Let $\mathcal{X} \subseteq \mathbb{R}^d$ , $\mathcal{Y} = \{1, 2, \dots, K\}$ , $K \geq 2$ , and $f_{\mathrm{c}}: \mathcal{X} \to \mathcal{Y}$ be the input (feature) space, the output (class label) space, and the unknown target function, respectively. Let $\varphi: \mathcal{X} \to \mathcal{X}'$ , $\mathcal{X}' \subseteq \mathbb{R}^{d'}$ , be the representation learning function.

For the sake of discussion, this paper takes the binary classification problem as an example to build the theory and method. By using the "one-versus-one" or "one-versus-rest" strategy, a multiclass classification problem can be converted into a series of binary ones, so that the proposed methods can work directly. Moreover, a recent study shows that an arbitrary classification problem can be equivalently converted into a binary classification problem (Cui & Liang, 2022).

Given a binary classification problem, let $\mathcal{X} = \mathcal{X}_+ \bigcup \mathcal{X}_-$ and $\mathcal{Y} = \{+1, -1\}$ be the input space and the output space, respectively. $\mathcal{X}_+ \subseteq \mathbb{R}^d$ and $\mathcal{X}_- \subseteq \mathbb{R}^d$ are the positive and negative sample spaces, respectively. Let $X_+ = \{\mathbf{x}_i \mid \mathbf{x}_i \in \mathcal{X}_+, i = 1, 2, \dots, n_+\}$ and $X_- = \{\mathbf{x}_j \mid \mathbf{x}_j \in \mathcal{X}_-, j = n_+ + 1, n_+ + 2, \dots, n_+ + n_-\}$ be the positive and negative training samples, respectively. Let $X = X_+ \bigcup X_-$ and $n = n_+ + n_-$ be the set of all training samples and the number of all training samples, respectively.

The main notations used in this paper are listed in Table 1.

Table 1. Definition of main notations.
| Notation | Definition |
| --- | --- |
| $\varphi(\mathcal{X}_+)$ , $\varphi(\mathcal{X}_-)$ | $\{\varphi(\mathbf{x}) \mid \mathbf{x} \in \mathcal{X}_+\}$ , $\{\varphi(\mathbf{x}) \mid \mathbf{x} \in \mathcal{X}_-\}$ |
| $\varphi(\mathcal{X})$ | $\varphi(\mathcal{X}_+) \bigcup \varphi(\mathcal{X}_-)$ |
| $\varphi(X_+)$ , $\varphi(X_-)$ | $\{\varphi(\mathbf{x}) \mid \mathbf{x} \in X_+\}$ , $\{\varphi(\mathbf{x}) \mid \mathbf{x} \in X_-\}$ |
| $\varphi(X)$ | $\varphi(X_+) \bigcup \varphi(X_-)$ |
| $\varphi(\mathbf{X}_+)$ | $(\varphi(\mathbf{x}_1), \dots, \varphi(\mathbf{x}_{n_+})) \in \mathbb{R}^{d' \times n_+}$ |
| $\varphi(\mathbf{X}_-)$ | $(\varphi(\mathbf{x}_{n_+ + 1}), \dots, \varphi(\mathbf{x}_n)) \in \mathbb{R}^{d' \times n_-}$ |
| $\varphi(\mathbf{X})$ | $(\varphi(\mathbf{X}_+), \varphi(\mathbf{X}_-)) \in \mathbb{R}^{d' \times n}$ |
| $\triangle^m$ | $\{\mathbf{v} \mid \mathbf{v} \in \mathbb{R}^m, \mathbf{v} \geq 0, \sum_{i=1}^m v_i = 1\}$ |
| $\sigma$ | $\sigma: \mathbb{R}^m \to \triangle^m$ ; $\forall \mathbf{v} \in \mathbb{R}^m$ , $\sigma(\mathbf{v}) = \frac{1}{z}(\exp(v_1), \dots, \exp(v_m))^T$ , $z = \sum_{i=1}^m \exp(v_i)$ |
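The last two rows of Table 1 are worth sketching directly: $\sigma$ maps any vector in $\mathbb{R}^m$ onto the simplex $\triangle^m$ , which is what later allows the simplex-constrained problems to be solved by unconstrained gradient methods. A minimal sketch (the test vectors are illustrative):

```python
import numpy as np

def sigma(v):
    # Softmax map from R^m onto the simplex (last row of Table 1):
    # sigma(v) = (1/z) * (exp(v_1), ..., exp(v_m))^T, z = sum_i exp(v_i)
    e = np.exp(v - v.max())  # shifting by max(v) leaves sigma(v) unchanged
    return e / e.sum()

def in_simplex(s, tol=1e-9):
    # Membership test for the simplex triangle^m defined in Table 1
    return bool(np.all(s >= 0) and abs(s.sum() - 1.0) < tol)

v = np.array([1.0, 2.0, 3.0])
print(sigma(v), in_simplex(sigma(v)))
```

Note that $\sigma$ is surjective onto the interior of the simplex but never reaches its boundary exactly; this is why using it as a reparametrization only expands, rather than changes, the feasible sets in the optimization problems below.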
# 2.2. Preliminaries of VC Dimension

Definition 2.1 (Chapter 3.6 of (Vapnik, 1999)). Let $\mathcal{H}$ be a set of real functions. The VC dimension of $\mathcal{H}$ , denoted as $d_{VC}(\mathcal{H})$ , is defined as the maximum number $m$ of vectors $\mathbf{x}_1, \mathbf{x}_2, \dots, \mathbf{x}_m$ that can be separated into two classes in all $2^m$ possible ways using functions of the set $\mathcal{I}(\mathcal{H})$ , where $\mathcal{I}(\mathcal{H}) = \{g \mid g(\mathbf{x}) = \mathbb{I}(h(\mathbf{x}) > b), h \in \mathcal{H}, b \in \mathbb{R}\}$ is the set of indicator functions corresponding to $\mathcal{H}$ , and $\mathbb{I}(\cdot)$ equals 1 if its argument is true and 0 otherwise.

Definition 2.2 (Chapter 5.4.2 of (Vapnik, 1999)). Given a hyperplane $ph(\omega, b)$ , $\|\omega\|_2 = 1$ , $b \in \mathbb{R}$ . The hyperplane $ph(\omega, b)$ is called an $M$ -margin separating hyperplane if it classifies vectors $\mathbf{x}$ as follows

$$
y = \left\{ \begin{array}{ll} 1, & \text{if } \omega^T \mathbf{x} + b \geq M \\ -1, & \text{if } \omega^T \mathbf{x} + b \leq -M \end{array} \right..
$$

Theorem 2.3 (Theorem 5.1 of (Vapnik, 1999)). Let vectors $\mathbf{x} \in \mathcal{X} \subset \mathbb{R}^d$ belong to a sphere of radius $R$ . Then the set of $M$ -margin separating hyperplanes has VC dimension $d_{VC}$ bounded by the inequality

$$
d_{VC} \leq B_1(d, R, M) = \min\left(\left[\frac{R^2}{M^2}\right], d\right) + 1.
$$

Theorem 2.4 (Corollary in Chapter 5.4 of (Vapnik, 1999)).
With probability $1 - \eta$ one can assert that the probability that a test sample will not be separated correctly by the $M$ -margin hyperplane has the bound $P_{err} \leq \frac{n_{err}}{n} + B_2(n, n_{err}, \eta, d_{VC})$ , where $B_2(n, n_{err}, \eta, d_{VC}) = \frac{\varepsilon}{2}\left(1 + \sqrt{1 + \frac{4n_{err}}{n\varepsilon}}\right)$ , $\varepsilon = 4\frac{d_{VC}\left(\ln\frac{2n}{d_{VC}} + 1\right) - \ln\frac{\eta}{4}}{n}$ , $n$ is the number of training samples, $n_{err}$ is the number of training samples that are not separated correctly by this $M$ -margin hyperplane, and $d_{VC}$ is the VC dimension in Theorem 2.3.

# 3. VC Dimension based Representation Learning Framework

# 3.1. Formalizing the Generalization Error of Representation Learning

Given a learning task, the learning process is to obtain a function $\hat{f}: \mathcal{X} \to \mathcal{Y}$ that approximates the unknown target function $f_{\mathrm{c}}$ by using training data. Generally, the function $\hat{f}$ can be written as the composition of two functions, i.e., $\hat{f} = h \circ \varphi$ , where $\varphi: \mathcal{X} \to \mathcal{X}'$ is the representation learning function and $h$ is the classification function. The learning process can thus be decomposed into the following two components.

Representation learning can be formalized as follows

$$
\varphi^* = \underset{\varphi \in \Psi}{\arg\min} P_{err}\left(\mathcal{H}(\varphi(\mathcal{X}))\right), \tag{1}
$$

where $\Psi$ is the set of candidate representation learning functions, $P_{err}(\mathcal{H}(\varphi(\mathcal{X})))$ is the generalization error of $\mathcal{H}(\varphi(\mathcal{X}))$ , and $\mathcal{H}(\varphi(\mathcal{X})) = \{h \mid h$ is a hyperplane on space $\varphi(\mathcal{X})\}$ is the hypothesis space of classifiers. Given $\mathcal{X}$ , $P_{err}(\mathcal{H}(\varphi(\mathcal{X})))$ is uniquely determined by the representation learning function $\varphi(\cdot)$ , so it is in fact the generalization error of representation learning.
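The quantities in Theorems 2.3 and 2.4 are cheap to evaluate numerically. A minimal sketch of the two bounds $B_1$ and $B_2$ (the particular values of $n$ , $R$ , $M$ , and $\eta$ are illustrative):

```python
import math

def b1(d, r, m):
    # Theorem 2.3: d_VC <= min(floor(R^2 / M^2), d) + 1
    return min(math.floor(r ** 2 / m ** 2), d) + 1

def b2(n, n_err, eta, d_vc):
    # Theorem 2.4: confidence term of the bound P_err <= n_err/n + B_2
    eps = 4 * (d_vc * (math.log(2 * n / d_vc) + 1) - math.log(eta / 4)) / n
    return eps / 2 * (1 + math.sqrt(1 + 4 * n_err / (n * eps)))

d_vc = b1(d=100, r=2.0, m=1.0)  # -> 5: the margin term floor(R^2/M^2) dominates
bound = 0 / 1000 + b2(n=1000, n_err=0, eta=0.05, d_vc=d_vc)
print(d_vc, bound)
```

Note how $B_1$ is controlled by the radius-to-margin ratio rather than the ambient dimension whenever $R^2/M^2 < d$; this is exactly the quantity the framework below minimizes.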
Classifier learning can be formalized as follows

$$
h^* = \underset{h \in \mathcal{H}\left(\varphi^*(\mathcal{X})\right)}{\arg\min} \mathcal{L}(h, \varphi^*(X)) + \gamma_{\mathrm{R}} \mathcal{R}(h), \tag{2}
$$

where $\mathcal{L}$ is the loss function that measures how well $h$ fits the training samples $X$ , $\mathcal{R}$ is the regularization term, and $\gamma_{\mathrm{R}} > 0$ is the trade-off parameter.

The above two components can be executed sequentially (an example is given in Section 4). They can also be integrated into a unified model for end-to-end learning (an example is given in Section 5).

# 3.2. An Upper Bound of $P_{err}$ ( $\mathcal{H}(\varphi(\mathcal{X}))$ )

According to Theorem 2.4, we can reduce the generalization error $P_{err}$ in formula (1) by minimizing its upper bound $\frac{n_{err}}{n} + B_2(n, n_{err}, \eta, d_{VC})$ , so formula (1) can be rewritten as follows

$$
\min_{\varphi \in \Psi} B_2(n, n_{err}, \eta, d_{VC}(\mathcal{H}(\varphi(\mathcal{X})))) \tag{3}
$$

$$
s.t. \quad \frac{n_{err}}{n} \leq \varepsilon
$$

where $\frac{n_{err}}{n}$ is the empirical error on the training set, $\varepsilon > 0$ , $\varepsilon \approx 0$ , and $\frac{n_{err}}{n} \leq \varepsilon$ is a constraint that is easy to satisfy (see Remark A.5 for details). Here $d_{VC}(\cdot)$ denotes the VC dimension of a set of functions.

By removing variables unrelated to $\varphi$ , formula (3) can be written as follows

$$
\min_{\varphi \in \Psi} d_{VC}(\mathcal{H}(\varphi(\mathcal{X}))) \quad s.t. \quad \frac{n_{err}}{n} \leq \varepsilon. \tag{4}
$$

It is hard to directly minimize the VC dimension (see Definition 2.1).
According to Theorem 2.3, we can indirectly control the VC dimension by minimizing its upper bound, so formula (4) can be rewritten as follows

$$
\min_{\varphi \in \Psi} \min\left(\left[\frac{R^2(\varphi(\mathcal{X}))}{M^2(\varphi(\mathcal{X}_+), \varphi(\mathcal{X}_-))}\right], d(\varphi(\mathcal{X}))\right) + 1
$$

$$
s.t. \left\{ \begin{array}{l} \frac{n_{err}}{n} \leq \varepsilon \\ M(\varphi(\mathcal{X}_+), \varphi(\mathcal{X}_-)) = \sup_{h \in \mathcal{H}(\varphi(\mathcal{X}))} g_{\mathrm{m}}(h) \end{array} \right., \tag{5}
$$

where $R(\varphi(\mathcal{X}))$ and $d(\varphi(\mathcal{X}))$ are the radius and the dimension of the latent space $\varphi(\mathcal{X})$ , respectively, and $g_{\mathrm{m}}(h)$ is the supremum of the margin of $h$ on $\varphi(\mathcal{X})$ given the target function $f_{\mathrm{c}}$ (see Definition 2.2).

In general, $d(\varphi(\mathcal{X})) > \left[\frac{R^2(\varphi(\mathcal{X}))}{M^2(\varphi(\mathcal{X}_+), \varphi(\mathcal{X}_-))}\right]$ , so formula (5) can be rewritten as follows

$$
\min_{\varphi \in \Psi} \frac{R^2(\varphi(\mathcal{X}))}{M^2(\varphi(\mathcal{X}_+), \varphi(\mathcal{X}_-))}
$$

$$
s.t. \left\{ \begin{array}{l} \frac{n_{err}}{n} \leq \varepsilon \\ M(\varphi(\mathcal{X}_+), \varphi(\mathcal{X}_-)) = \sup_{h \in \mathcal{H}(\varphi(\mathcal{X}))} g_{\mathrm{m}}(h) \end{array} \right.. \tag{6}
$$

Formula (6) is not executable, because the distribution of the input space $\mathcal{X}$ is unknown. Fortunately, the training set $X = X_+ \bigcup X_-$ sampled from $\mathcal{X}$ is available. Therefore, we can use $X$ to estimate the distribution of $\mathcal{X}$ , which results in the following optimization problem

$$
\min_{\varphi \in \Psi} \frac{R^2(\varphi(X))}{M^2(\varphi(X_+), \varphi(X_-))}
$$

$$
s. t.
\left\{ \begin{array}{l} \frac{n_{err}}{n} \leq \varepsilon \\ M(\varphi(X_+), \varphi(X_-)) = \sup_{h \in \mathcal{H}(\varphi(X))} g_{\mathrm{m}}(h) \end{array} \right. \tag{7}
$$

Formula (7) is still not executable, because the margin $M(\varphi(X_+), \varphi(X_-))$ and the radius $R(\varphi(X))$ are both intractable. The next section addresses these two problems.

# 3.3. Making Margin and Radius Executable

For the sake of discussion, this section takes the input space $\mathcal{X}$ as an example to solve the margin and radius problems. The results can be applied directly to the latent space $\varphi(\mathcal{X})$ .

# 3.3.1. MARGIN

In formula (7), the constraint

$$
M\left(X_+, X_-\right) = \sup_{h \in \mathcal{H}(X)} g_{\mathrm{m}}(h) \tag{8}
$$

is intractable. In order to handle it effectively, a lower bound of $M(X_+, X_-)$ is constructed by the following theorem.

Theorem 3.1. The optimization problem (8) can be lower bounded by the following convex optimization problem. See Appendix A.1 for the proof.

$$
M^2\left(X_+, X_-\right) \geq \frac{1}{4} \min_{\boldsymbol{\alpha} \in \triangle^{n_+}, \boldsymbol{\beta} \in \triangle^{n_-}} \|\boldsymbol{X}_+ \boldsymbol{\alpha} - \boldsymbol{X}_- \boldsymbol{\beta}\|_2^2
$$

# 3.3.2. RADIUS

In formula (7), $R(X)$ is the radius of the set $X$ . To calculate it effectively, we first formalize $R(X)$ by Definition A.7 in Appendix A.2 and then prove that $R(X)$ can be calculated by solving a convex optimization problem.

Theorem 3.2. Given a set $X \subset \mathbb{R}^d$ , the squared radius of $X$ can be computed by the following convex optimization problem. See Appendix A.2 for the proof.

$$
R^2(X) = \inf_{\boldsymbol{\theta} \in \triangle^{|X|}} \sup_{\mathbf{x} \in X} \|\mathbf{x} - \boldsymbol{X}\boldsymbol{\theta}\|_2^2
$$

# 3.4.
Final Model and Solving

By substituting the conclusions of Theorems 3.1 and 3.2 into optimization problem (7), we have

$$
\begin{array}{l} \min_{\varphi \in \Psi} f(\varphi) = 4\frac{g_1(\varphi)}{g_2(\varphi)} \\ s.t. \left\{ \begin{array}{c} g_1(\varphi) = \min_{\boldsymbol{\theta}} \max_{\mathbf{x}_i \in X} \|\varphi(\mathbf{x}_i) - \varphi(\mathbf{X})\boldsymbol{\theta}\|_2^2 \\ s.t. \quad \boldsymbol{\theta} \in \triangle^n \\ g_2(\varphi) = \min_{\boldsymbol{\alpha}, \boldsymbol{\beta}} \|\varphi(\mathbf{X}_+)\boldsymbol{\alpha} - \varphi(\mathbf{X}_-)\boldsymbol{\beta}\|_2^2 \\ s.t. \quad \boldsymbol{\alpha} \in \triangle^{n_+}, \boldsymbol{\beta} \in \triangle^{n_-} \end{array} \right. \tag{9} \end{array}
$$

The geometric meaning of formula (9) is clear: $g_1(\varphi)$ is the squared radius of the training samples in the latent space (Definition A.7 in Appendix A.2) and $g_2(\varphi)$ is the squared distance between the convex hull of the positive training samples and the convex hull of the negative training samples in the latent space (Definition A.2 in Appendix A.1).

In formula (9), $g_1(\varphi)$ and $g_2(\varphi)$ are both convex quadratic optimization problems, for which many conventional solving methods exist (Boyd & Vandenberghe, 2004). To solve them more efficiently, the simplex constraint $\triangle^m$ is eliminated via the softmax activation function $\sigma$ , i.e.,

$$
\begin{array}{l} \min_{\varphi \in \Psi} f(\varphi) = 4\frac{g_1(\varphi)}{g_2(\varphi)} \\ s. t.
\left\{ \begin{array}{l} g_1(\varphi) = \min_{\mathbf{u}} \max_{\mathbf{x}_i \in X} \|\varphi(\mathbf{x}_i) - \varphi(\mathbf{X})\sigma(\mathbf{u})\|_2^2 \\ g_2(\varphi) = \min_{\mathbf{v}, \mathbf{w}} \|\varphi(\mathbf{X}_+)\sigma(\mathbf{v}) - \varphi(\mathbf{X}_-)\sigma(\mathbf{w})\|_2^2 \end{array} \right., \tag{10} \end{array}
$$

where $\mathbf{u} \in \mathbb{R}^n$ , $\mathbf{v} \in \mathbb{R}^{n_+}$ , $\mathbf{w} \in \mathbb{R}^{n_-}$ . The optimization problems $g_1(\varphi)$ and $g_2(\varphi)$ can be solved effectively and efficiently by using accelerated first-order algorithms, e.g., Adam (Kingma & Ba, 2015), and high-performance computing hardware, e.g., GPUs. See Appendix B.1 for details.

# 4. VC Dimension based Kernel Selection

In this section, the set of candidate representation learning functions $\Psi$ is modeled as a set of kernel functions $\Psi_{\mathrm{kernel}}$ . Based on Section 3, a general VC dimension based kernel selection method is designed. For the sake of discussion, we illustrate the method on the binary classification problem.

# 4.1. Optimization Model

By substituting $\Psi_{\mathrm{kernel}}$ into formula (10), we obtain the following optimization problem (see Appendix A.3 for the proof).

$$
\begin{array}{l} \min_{\varphi \in \Psi_{\text{kernel}}} f(\varphi) = 4\frac{g_1(\varphi)}{g_2(\varphi)} \\ s. t.
\left\{ \begin{array}{l} g_1(\varphi) = \min_{\mathbf{u}} \max_{\mathbf{x}_i \in X} \; \sigma(\mathbf{u})^T \mathbf{K}^{\varphi} \sigma(\mathbf{u}) - 2\mathbf{K}_{[i,:]}^{\varphi} \sigma(\mathbf{u}) + k_{ii}^{\varphi} \\ g_2(\varphi) = \min_{\mathbf{v}, \mathbf{w}} \binom{\sigma(\mathbf{v})}{\sigma(\mathbf{w})}^T \hat{\mathbf{K}}^{\varphi} \binom{\sigma(\mathbf{v})}{\sigma(\mathbf{w})} \end{array} \right., \tag{11} \end{array}
$$

where $\mathbf{K}^{\varphi}$ and $\hat{\mathbf{K}}^{\varphi}$ are given in formula (29) in Appendix A.3, $\mathbf{K}_{[i,:]}^{\varphi}$ is the $i$ -th row of matrix $\mathbf{K}^{\varphi}$ , and $\mathbf{u}$ , $\mathbf{v}$ , $\mathbf{w}$ and $\sigma$ are the same as in formula (10).

# 4.2. Model Solving

In formula (11), the feasible domain $\Psi_{\mathrm{kernel}}$ is a finite set. $\forall \varphi \in \Psi_{\mathrm{kernel}}$ , the objective function $f(\varphi)$ can be calculated efficiently by the method given in Section 3.4, so formula (11) can be solved efficiently. See Appendix B.2 for details.

# 5. VC Dimension based DNN Boosting Framework

For a multiclass classification problem, let $\mathcal{Y} = \{1, 2, \dots, K\}$ , $K > 2$ , be the output space. Let $\mathbf{X}$ be a matrix composed of all training samples. $\forall p \in \mathcal{Y}$ , let $\mathbf{X}_p$ be a matrix composed of the training samples that come from class $p$ .

In this section, the representation learning function is modeled as a DNN $\varphi: \mathcal{X} \to \mathcal{X}'$ , $\mathcal{X}' \subseteq \mathbb{R}^{d'}$ , with learnable parameters $\Theta$ . Based on Section 3, a general DNN boosting framework is proposed. An overall flowchart of the framework is shown in Figure 1.

![](images/71ef173cd337afca385d8bcb10f34a518e23404abe2c43d668e623d85cf8da43.jpg)
Figure 1.
The flowchart of the VC dimension based DNN boosting framework. The proposed module is marked with a solid red box.

# 5.1. Optimization Model

$\forall \mathbf{x} \in \mathcal{X}$ , let $\varphi(\mathbf{x}; \Theta)$ be the latent representation of sample $\mathbf{x}$ . By substituting $\varphi(\cdot; \Theta)$ into formula (10), we have

$$
\begin{array}{l} \min_{\Theta} f(\Theta) = \sum_{p, q \in \mathcal{Y}, p < q} 4\frac{g_1(\Theta)}{g_2^{(p,q)}(\Theta)} \\ s.t. \left\{ \begin{array}{l} g_1(\Theta) = \min_{\mathbf{u}} \max_{\mathbf{x}_i \in X} \|\varphi(\mathbf{x}_i; \Theta) - \varphi(\mathbf{X}; \Theta)\sigma(\mathbf{u})\|_2^2 \\ \forall p, q \in \mathcal{Y}, p < q, \; g_2^{(p,q)}(\Theta) = \min_{\mathbf{v}, \mathbf{w}} \|\varphi(\mathbf{X}_p; \Theta)\sigma(\mathbf{v}) - \varphi(\mathbf{X}_q; \Theta)\sigma(\mathbf{w})\|_2^2 \end{array} \right. \tag{12} \end{array}
$$

To make formula (12) easier to solve, we introduce a 2-norm standardization network, i.e., $\forall \mathbf{h} \in \mathbb{R}^{d'}$ , $\lambda(\mathbf{h}) = \frac{\mathbf{h}}{\|\mathbf{h}\|_2}$ . $\forall \mathbf{x} \in \mathcal{X}$ , $\|\lambda(\varphi(\mathbf{x}; \Theta))\|_2 = 1$ , so the radius of the latent space $\lambda(\varphi(\mathcal{X}; \Theta))$ (see Definition A.7 in Appendix A.2) is not greater than 1. By substituting $\lambda(\cdot)$ into formula (12), we have

$$
\begin{array}{l} \min_{\Theta} - \sum_{p, q \in \mathcal{Y}, p < q} g^{(p,q)}(\Theta) \\ s.t. \left\{ \begin{array}{l} \forall p, q \in \mathcal{Y}, p < q, \; g^{(p,q)}(\Theta) = \min_{\mathbf{v}, \mathbf{w}} \|\lambda(\varphi(\mathbf{X}_p; \Theta))\sigma(\mathbf{v}) - \lambda(\varphi(\mathbf{X}_q; \Theta))\sigma(\mathbf{w})\|_2^2 \end{array} \right.
\tag{13} \end{array}
$$

In order to achieve end-to-end learning, we add a linear classification network at the end of $\lambda(\varphi(\cdot; \Theta))$ and introduce the empirical loss $\mathcal{L}$ into formula (13). The final optimization problem can be written as follows

$$
\begin{array}{l} \min_{\Theta, \mathbf{W}, \mathbf{b}} \mathcal{L}(\Theta, \mathbf{W}, \mathbf{b}) + \gamma_{\mathrm{VC}} \mathcal{L}_{\mathrm{VC}}(\Theta) \\ s.t. \left\{ \begin{array}{l} \mathcal{L}(\Theta, \mathbf{W}, \mathbf{b}) = \frac{1}{n}\sum_{i=1}^n \ell_{\mathrm{CE}}\left(\sigma\left(\mathbf{W}\lambda\left(\varphi\left(\mathbf{x}_i; \Theta\right)\right) + \mathbf{b}\right), y_i\right) \\ \mathcal{L}_{\mathrm{VC}}(\Theta) = -\frac{2}{|\mathcal{Y}|(|\mathcal{Y}| - 1)}\sum_{p, q \in \mathcal{Y}, p < q} g^{(p,q)}(\Theta) \\ \forall p, q \in \mathcal{Y}, p < q, \; g^{(p,q)}(\Theta) = \min_{\mathbf{v}, \mathbf{w}} \|\lambda(\varphi(\mathbf{X}_p; \Theta))\sigma(\mathbf{v}) - \lambda(\varphi(\mathbf{X}_q; \Theta))\sigma(\mathbf{w})\|_2^2 \end{array} \right., \tag{14} \end{array}
$$

where $\mathbf{W} \in \mathbb{R}^{|\mathcal{Y}| \times d'}$ and $\mathbf{b} \in \mathbb{R}^{|\mathcal{Y}|}$ are the learnable parameters of the linear classification network, $\ell_{\mathrm{CE}}$ is the cross entropy loss, $\mathcal{L}_{\mathrm{VC}}$ is the VC dimension based loss, $\gamma_{\mathrm{VC}} > 0$ is a trade-off parameter, and $\mathbf{v}$ , $\mathbf{w}$ are vectors of the appropriate lengths.

# 5.2. Model Solving

Given a batch of training samples, the optimization problem $g^{(p,q)}(\Theta)$ in formula (14) can be solved efficiently by the method given in Section 3.4.
So the total loss $\mathcal{L}(\Theta, \mathbf{W}, \mathbf{b}) + \gamma_{\mathrm{VC}} \mathcal{L}_{\mathrm{VC}}(\Theta)$ in formula (14) can be optimized efficiently by mini-batch stochastic gradient descent even when the training set is large. See Appendix B.3 for details.

# 5.3. Related Work

In DeepLAD (Dorfer et al., 2016), a loss is designed based on the criteria of maximizing inter-class scatter and minimizing intra-class scatter to train DNNs. Although the loss is simple and intuitive, there is no theoretical analysis in the literature on whether it helps reduce the generalization error. In Section 6.3, we validate the advantage of the proposed method by a comparison experiment.

# 6. Experiments

# 6.1. Verifying Theoretical Results

This section demonstrates the effectiveness of the representation learning framework proposed in Section 3.4.

# 6.1.1. EXPERIMENTAL SETTINGS

Data set A 2-dimensional data set, Taichi, is designed. The ground truth distribution of the Taichi data set is shown in Figure 2a. The training data set consists of 628 positive samples and 629 negative samples, as shown in Figure 2b. The test data set consists of 251,312 positive samples and 251,313 negative samples.

Candidate functions The Gaussian kernel $\varphi(\mathbf{x}_i; \delta)^T \varphi(\mathbf{x}_j; \delta) = \exp\left(\delta \|\mathbf{x}_i - \mathbf{x}_j\|_2^2\right)$ , $\delta < 0$ , is used.
There are 2000 kernel parameters, i.e., $\delta_{k} = \delta_{\mathrm{min}} + (k - 1)\frac{\delta_{\mathrm{max}} - \delta_{\mathrm{min}}}{2000}$ , $k = 1,2,\dots ,2000$ , where $\delta_{\mathrm{min}} = -200$ and $\delta_{\mathrm{max}} = -10^{-5}$ . + +![](images/374f9825b6bf268066880d32bfe3877e6d5dd6c16b9e45537d41db827ff791b0.jpg) + +![](images/3b64689885a6e3b35f9c6bf4e725cc823ff32fb5cc6e5380dfd63e302171f4d8.jpg) + +![](images/63a0b51941460cd9dcfc46cba447e440e38d21231488b4461ff78ddc250dd0e4.jpg) + +![](images/c5301cc7554c5d185f8a65640aadb3fdfd5195f0870e6a0d50656fc3e94e485b.jpg) +Figure 2. The experiments on the Taichi data set. (a) The ground truth distribution. (b) The training data set. (c) The classification hyperplane of SVM@ $\varphi (\cdot ;\delta^{*})$ . (d) The error rate of SVM@ $\varphi (\cdot ;\delta)$ and the objective function value $f(\varphi (\cdot ;\delta))$ in formula (10) with $\delta \in [-200, - 4]$ . (e) The same as (d), except that $\delta \in (-4, - 10^{-5}]$ . (f) The values of $g_{1}(\varphi (\cdot ;\delta^{*}))$ and $g_{2}(\varphi (\cdot ;\delta^{*}))$ in formula (10) after each iteration. + +![](images/9785dd1145c0daf05d3a0f6d3ee530afbb13b8076c211beb7d01bb83739b3417.jpg) + +![](images/58f66a116ffc9c4dbc372af617ff3f68d103c2d95d30c1670c428ccebba1b983.jpg) + +Basic learner The SVM (Cortes & Vapnik, 1995) with Gaussian kernel is selected as the basic learner. Let SVM@ $\varphi (\cdot ;\delta)$ denote the SVM classifier trained on the training data set with kernel function $\varphi (\cdot ;\delta)$ . + +See Appendix C.1 for more experimental details. + +# 6.1.2. EXPERIMENTAL RESULT AND ANALYSIS + +We show the experimental results in Figure 2. We have + +(a) Figures 2d and 2e show the objective function value (red curve) $f(\varphi (\cdot ;\delta))$ in formula (10) on the training set and the error rate (blue curve) on the test set under different kernel parameters.
It is observed that their trends are consistent, which demonstrates that the proposed method can effectively estimate the generalization error of the representation learning function $\varphi (\cdot ;\delta)$ . +(b) Figure 2c shows the classification hyperplane of SVM@ $\varphi (\cdot ;\delta^{*})$ , where $\delta^*$ is the parameter corresponding to the lowest point of the red curve in Figure 2d. We can observe that the classification hyperplane with parameter $\delta^{*}$ is close to the ground truth distribution. +(c) Figure 2f shows the convergence of the objective functions $g_{1}(\cdot)$ (blue curve) and $g_{2}(\cdot)$ (red curve) in formula (10). The result shows that both objective functions converge after a few iterations. + +Therefore, the criterion in formula (10) can effectively estimate the generalization error of the representation learning function $\varphi (\cdot ;\delta)$ , and the corresponding optimization problems can be solved effectively and efficiently by the proposed Algorithms 1 and 2 in Appendix B.1. + +# 6.2. Kernel Selection Experiment + +This section demonstrates the effectiveness of the kernel selection method proposed in Section 4. + +# 6.2.1. EXPERIMENTAL SETTINGS + +Data sets There are 15 binary classification data sets (see Table 5 in Appendix C.2 for details). For each data set, $80\%$ of the samples are randomly selected as the training data set and the remaining samples form the test data set. The ratio between positive and negative samples in the training data set (and the test data set) matches that of the whole data set. + +Candidate kernel functions In this section, the Gaussian kernel $\varphi (\mathbf{x}_i;\delta)^T\varphi (\mathbf{x}_j;\delta) = \exp \left(\delta \| \mathbf{x}_i - \mathbf{x}_j\| _2^2\right),\delta < 0$ is used, and 25 kernel parameters, i.e., $\delta_{k} = -2^{9 - k}$ , $k = 1,2,\dots ,25$ , are used on each data set.
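For reference, the candidate grid and the kernel evaluation can be written out directly. This is a minimal numpy sketch of our own (the function name `gaussian_gram` is not from the paper):

```python
import numpy as np

# Candidate grid of Section 6.2: delta_k = -2^(9-k), k = 1, ..., 25.
deltas = [-(2.0 ** (9 - k)) for k in range(1, 26)]

def gaussian_gram(X, delta):
    """Gram matrix of the Gaussian kernel k(x_i, x_j) = exp(delta * ||x_i - x_j||_2^2),
    delta < 0, for the rows of X."""
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=-1)  # pairwise squared distances
    return np.exp(delta * sq)
```

The grid runs from $\delta_1 = -256$ down to $\delta_{25} = -2^{-16}$, i.e., from very narrow to very wide kernels.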
+ +Basic learner The SVM (Cortes & Vapnik, 1995) with Gaussian kernel is selected as the basic learner. Let SVM@ $\varphi (\cdot ;\delta)$ denote the SVM classifier trained on the training data set with kernel function $\varphi (\cdot ;\delta)$ . + +Comparison method Cross validation (Bengio & Grandvalet, 2004; Rodríguez et al., 2010; Jiang & Wang, 2017) is a commonly used method for kernel selection. 5-fold cross validation is used to estimate the generalization performance of each candidate kernel function. To reduce randomness, 5 different 5-fold cross validations (denoted as T1, T2, ..., T5) are conducted. Let $\delta_{CV}^{*}$ be the selected parameter. + +Proposed method For each data set, the training data set and the set of candidate kernel functions are fed to Algorithm 3 in Appendix B. Let $\delta_{opt}$ be the selected parameter. + +Oracle and evaluation criteria Given a learning task (a data set), the goal of learning is to minimize generalization error, i.e., the error rate on the test set. In this experiment, the error rates of SVM@ $\varphi (\cdot ;\delta_k)$ , $k = 1,2,\dots ,25$ , on the test set are used as an oracle to estimate the generalization performance of each candidate parameter. So the oracle-rank of every candidate parameter, rank $(\delta_{k})\in \{1,2,\dots ,25\}$ , can be calculated. Then, rank $(\delta_{\mathrm{CV}}^{*})$ and rank $(\delta_{\mathrm{opt}})$ are used to compare the performance of the corresponding methods. + +See Appendix C.2 for more experimental details. + +Table 2. The oracle-ranks of different methods. + +
(T1-T5 and MEAN: 5-fold cross validation; OUR: the proposed method.)

| ID | T1 | T2 | T3 | T4 | T5 | MEAN | OUR |
| --- | --- | --- | --- | --- | --- | --- | --- |
| D1 | 2 | 2 | 2 | 2 | 2 | 2.00 | 1 |
| D2 | 1 | 1 | 1 | 2 | 2 | 1.40 | 1 |
| D3 | 3 | 3 | 2 | 3 | 3 | 2.80 | 1 |
| D4 | 3 | 2 | 3 | 3 | 2 | 2.60 | 1 |
| D5 | 2 | 2 | 2 | 2 | 2 | 2.00 | 1 |
| D6 | 1 | 3 | 3 | 3 | 1 | 2.20 | 1 |
| D7 | 3 | 3 | 3 | 3 | 3 | 3.00 | 2 |
| D8 | 5 | 5 | 5 | 6 | 4 | 5.00 | 2 |
| D9 | 1 | 1 | 3 | 1 | 1 | 1.40 | 1 |
| D10 | 1 | 4 | 4 | 1 | 1 | 2.20 | 1 |
| D11 | 3 | 4 | 2 | 4 | 2 | 3.00 | 1 |
| D12 | 3 | 3 | 1 | 1 | 3 | 2.20 | 1 |
| D13 | 1 | 2 | 1 | 3 | 1 | 1.60 | 1 |
| D14 | 1 | 2 | 2 | 2 | 2 | 1.80 | 1 |
| D15 | 1 | 2 | 2 | 2 | 2 | 1.80 | 1 |
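A minimal sketch of the oracle-rank computation described in the settings above. The helper name and the tie rule (ties share the smaller rank) are our assumptions; the paper only states that rank$(\delta_k) \in \{1, 2, \dots, 25\}$:

```python
def oracle_rank(test_errors, selected):
    """Oracle-rank of a selected candidate parameter: 1 plus the number of
    candidates with a strictly lower test error rate, so the best candidate
    gets rank 1. `test_errors` maps candidate -> error rate on the test set."""
    return 1 + sum(1 for e in test_errors.values() if e < test_errors[selected])
```

For instance, with hypothetical test errors {δ₁: 0.10, δ₂: 0.08, δ₃: 0.12}, candidate δ₂ has oracle-rank 1 and δ₁ has oracle-rank 2.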
+ +# 6.2.2. EXPERIMENTAL RESULT AND ANALYSIS + +We conduct two groups of experiments, from the perspectives of generalization performance and time efficiency. + +First, the oracle-rank of different methods is used for evaluating the generalization performance. The experimental results on 15 data sets are presented in Table 2. It can be observed that the oracle-rank of the proposed kernel selection method is superior to the means of 5-fold cross validation on all data sets. Therefore, the generalization performance of the proposed method is superior to that of 5-fold cross validation. In addition, cross validation is inherently random, while the proposed method is deterministic. + +Second, we report the running times of 5-fold cross validation and the proposed method for evaluating a kernel function in Table 3. The results show that as the number of training samples increases, the time efficiency of the proposed method becomes significantly higher than that of 5-fold cross validation. See Appendix B.2.1 for detailed analysis. + +Therefore, the proposed method can effectively and efficiently select kernels with good generalization performance for kernel-based methods. + +Table 3. The running times of different methods (ms). + +
| ID | TRAINING SAMPLES | 5-FOLD CROSS VALIDATION | OUR | RATIO |
| --- | --- | --- | --- | --- |
| D1 | 8 | 1.78 | 5.96 | 0.30 |
| D2 | 64 | 2.97 | 4.37 | 0.68 |
| D3 | 86 | 3.95 | 4.54 | 0.87 |
| D4 | 136 | 5.94 | 4.39 | 1.35 |
| D5 | 146 | 7.41 | 6.14 | 1.21 |
| D6 | 157 | 8.67 | 5.71 | 1.52 |
| D7 | 236 | 15.43 | 6.28 | 2.46 |
| D8 | 245 | 10.92 | 5.97 | 1.83 |
| D9 | 456 | 58.78 | 5.90 | 9.96 |
| D10 | 467 | 41.87 | 5.16 | 8.12 |
| D11 | 486 | 71.53 | 5.82 | 12.29 |
| D12 | 553 | 74.16 | 18.33 | 4.05 |
| D13 | 560 | 32.22 | 5.68 | 5.67 |
| D14 | 5,921 | 8,406.90 | 52.56 | 159.96 |
| D15 | 15,217 | 33,147.24 | 22.08 | 1,501.23 |
+ +# 6.3. DNN Boosting Experiment + +This section demonstrates the effectiveness of the DNN boosting framework proposed in Section 5. + +# 6.3.1. EXPERIMENTAL SETTINGS + +Data sets The MNIST and CIFAR10 data sets are used in this section. 10, 20, $\dots$ , 60 samples are randomly selected from each class in the training set to train the models, and the 10,000 test samples are used to evaluate the models. + +Basic DNNs There are 5 DNNs used in this section, i.e., FCNet3, LeNet (LeCun et al., 1989), ResNet18 (He et al., 2016), ResNet50 (He et al., 2016) and ViT-Base (Dosovitskiy et al., 2021). Among them, FCNet3 hardly uses domain knowledge, LeNet uses less domain knowledge, and ResNet18, ResNet50, and ViT-Base use more domain knowledge. + +Comparison framework DeepLDA (Dorfer et al., 2016) is selected as the comparison framework. + +$\forall \mathsf{Net} \in \{\mathsf{FCNet3}, \mathsf{LeNet}, \mathsf{ResNet18}, \mathsf{ResNet50}, \mathsf{ViT-Base}\}$ , let $\mathsf{Net}$ , $\mathsf{Net} + \mathsf{LDA}$ , and $\mathsf{Net} + \mathsf{Our}$ be the basic DNN, the DNN embedded into DeepLDA, and the DNN embedded into the proposed framework, respectively. + +See Appendix C.3 for more experimental details. + +Table 4. The accuracy of different methods on the test data set $(\%)$ . + +
(Columns 10-60: the number of training samples in each class.)

| DATA SET | METHOD | 10 | 20 | 30 | 40 | 50 | 60 |
| --- | --- | --- | --- | --- | --- | --- | --- |
| MNIST | FCNET3 | 75.91 | 83.49 | 85.76 | 87.21 | 87.93 | 88.87 |
| | FCNET3+LDA | 35.26 | 41.70 | 45.07 | 43.91 | 42.65 | 44.57 |
| | FCNET3+OUR | 78.74 | 85.14 | 87.27 | 88.44 | 89.60 | 90.25 |
| | LENET | 67.34 | 71.66 | 82.14 | 83.67 | 84.13 | 84.53 |
| | LENET+LDA | 34.96 | 34.73 | 35.46 | 33.32 | 35.19 | 36.61 |
| | LENET+OUR | 71.36 | 74.73 | 83.99 | 94.04 | 94.23 | 94.78 |
| | RESNET18 | 76.70 | 84.57 | 89.42 | 90.55 | 91.46 | 91.74 |
| | RESNET18+LDA | 41.51 | 43.13 | 46.68 | 44.00 | 47.10 | 49.19 |
| | RESNET18+OUR | 85.35 | 90.22 | 92.81 | 93.38 | 94.16 | 94.28 |
| | RESNET50 | 72.43 | 81.53 | 87.43 | 89.36 | 88.94 | 89.41 |
| | RESNET50+LDA | 22.51 | 38.12 | 38.55 | 39.60 | 39.87 | 40.66 |
| | RESNET50+OUR | 78.09 | 85.74 | 88.13 | 89.83 | 90.84 | 91.15 |
| | VIT-BASE | 71.16 | 78.00 | 82.91 | 84.22 | 85.69 | 87.23 |
| | VIT-BASE+LDA | 53.98 | 56.83 | 58.07 | 58.83 | 59.01 | 59.88 |
| | VIT-BASE+OUR | 72.47 | 80.50 | 86.14 | 87.68 | 88.96 | 89.56 |
| CIFAR10 | FCNET3 | 21.53 | 24.81 | 26.41 | 29.01 | 29.72 | 29.54 |
| | FCNET3+LDA | 17.63 | 19.54 | 21.43 | 21.06 | 21.06 | 20.55 |
| | FCNET3+OUR | 24.54 | 27.48 | 30.11 | 31.34 | 32.62 | 32.25 |
| | LENET | 13.39 | 16.99 | 18.51 | 18.66 | 20.31 | 22.31 |
| | LENET+LDA | 14.78 | 14.20 | 15.40 | 15.73 | 15.30 | 15.39 |
| | LENET+OUR | 22.51 | 23.13 | 26.89 | 29.19 | 29.60 | 30.47 |
| | RESNET18 | 24.63 | 27.87 | 32.53 | 33.83 | 34.21 | 35.06 |
| | RESNET18+LDA | 16.26 | 15.40 | 16.63 | 18.26 | 18.81 | 19.29 |
| | RESNET18+OUR | 26.60 | 30.77 | 33.98 | 35.78 | 36.64 | 36.87 |
| | RESNET50 | 22.99 | 26.32 | 29.74 | 30.77 | 30.45 | 30.51 |
| | RESNET50+LDA | 15.27 | 18.01 | 17.92 | 15.64 | 18.14 | 20.60 |
| | RESNET50+OUR | 23.68 | 27.68 | 30.64 | 32.03 | 32.21 | 32.31 |
| | VIT-BASE | 21.28 | 22.65 | 23.96 | 25.14 | 25.20 | 25.95 |
| | VIT-BASE+LDA | 19.30 | 19.09 | 20.41 | 20.30 | 21.05 | 20.33 |
| | VIT-BASE+OUR | 22.81 | 24.17 | 26.52 | 29.10 | 27.34 | 27.98 |
+ +# 6.3.2. EXPERIMENTAL RESULT AND ANALYSIS + +We report the test accuracy of all methods with different numbers of training samples in Table 4. We have + +(a) In all cases, $\forall Net \in \{\text{FCNet3}, \text{LeNet}, \text{ResNet18}, \text{ResNet50}, \text{ViT-Base}\}$ , the performance of $Net + \text{Our}$ is superior to that of $Net$ . These results demonstrate the universality and effectiveness of the proposed framework. +(b) When the number of training samples is small, the proposed framework can boost the performance of basic DNNs. Therefore, the proposed framework can alleviate the dependence of DNNs on a large number of labeled samples to a certain extent. +(c) On the MNIST data set, we have (c1) The performance of ResNet18 is superior to FCNet3 and LeNet in all cases because it is a well-designed network architecture that uses more domain knowledge. (c2) The performance of FCNet3+Our is superior to LeNet with 10, 20, ..., 60 training samples. (c3) The performance of LeNet+Our is superior to ResNet18 with 40, 50, and 60 training samples. Similar results can also be seen on the CIFAR10 data set, e.g., FCNet3+Our is superior to LeNet with 10, 20, ..., 60 training samples. These results show that the proposed framework can achieve better performance with less or no domain knowledge. +(d) On both data sets, ResNet18 achieves the highest performance among the 5 basic DNNs. The network architecture of ResNet18 is more complex than FCNet3 and LeNet, but simpler than ResNet50 and ViT-Base. These results show that the complexity of the network architecture needs to match the task in order to achieve good performance. The proposed framework can improve their performance regardless of whether the network architecture matches the task. +(e) In all cases, $\forall Net\in \{\mathrm{FCNet3},\mathrm{LeNet},\mathrm{ResNet18},$ ResNet50, ViT-Base} , the performance of $Net + \mathrm{Our}$ is superior to that of $Net + \mathrm{LDA}$ .
These results show that the proposed method is superior to DeepLDA as a boosting framework. This is because DeepLDA needs to estimate the covariance matrix of the latent representations. When the number of training samples is small, it is difficult to obtain accurate estimates. In contrast, the radius and convex hull of the samples better depict the characteristics of the latent representations, so estimating these quantities yields a better approximation of the generalization error. + +In summary, the proposed boosting framework is universal and effective, and can alleviate some problems of DNNs. + +# 7. Conclusion + +In this paper, a VC-dimension-based criterion is designed. The criterion has a clear geometric meaning and can estimate the generalization error of a representation learning function effectively and efficiently. Based on the criterion, we propose a general representation learning framework that aims to reduce the generalization error of representation learning. The framework has been successfully applied to kernel selection and DNN boosting. As a general framework, it is promising to combine it with other machine learning methods and improve their generalization performance. + +# Acknowledgements + +This work is supported by the National Key Research and Development Program of China (under grant 2020AAA0106100) and the National Natural Science Foundation of China (Nos. 62006147, U21A20473). + +# References + +Bengio, Y. and Grandvalet, Y. No unbiased estimator of the variance of k-fold cross-validation. Journal of Machine Learning Research, 5:1089-1105, 2004. +Boyd, S. and Vandenberghe, L. Convex Optimization. Cambridge University Press, 1 edition, 2004. +Brown, T. B., Mann, B., Ryder, N., Subbiah, M., Kaplan, J., Dhariwal, P., Neelakantan, A., Shyam, P., Sastry, G., Askell, A., Agarwal, S., Herbert-Voss, A., Krueger, G., Henighan, T., Child, R., Ramesh, A., Ziegler, D.
M., Wu, J., Winter, C., Hesse, C., Chen, M., Sigler, E., Litwin, M., Gray, S., Chess, B., Clark, J., Berner, C., McCandlish, S., Radford, A., Sutskever, I., and Amodei, D. Language models are few-shot learners. In Advances in Neural Information Processing Systems, Virtual Event, pp. 1877-1901, 2020. +Carion, N., Massa, F., Synnaeve, G., Usunier, N., Kirillov, A., and Zagoruyko, S. End-to-end object detection with transformers. In European Conference on Computer Vision, Glasgow, UK, Proceedings, Part I, pp. 213-229, 2020. + +Chiu, C., Qin, J., Zhang, Y., Yu, J., and Wu, Y. Self-supervised learning with random-projection quantizer for speech recognition. In International Conference on Machine Learning, Baltimore, Maryland, USA, pp. 3915-3924, 2022. +Cortes, C. and Vapnik, V. Support-vector networks. Machine Learning, 20(3):273-297, 1995. +Cuadros, X. S., Zappella, L., and Apostoloff, N. Self-conditioning pre-trained language models. In International Conference on Machine Learning, Baltimore, Maryland, USA, pp. 4455-4473, 2022. +Cui, J. and Liang, J. Fuzzy learning machine. In Advances in Neural Information Processing Systems, Virtual Event, pp. 36693-36705, 2022. +Devlin, J., Chang, M., Lee, K., and Toutanova, K. BERT: pre-training of deep bidirectional transformers for language understanding. In Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Minneapolis, MN, USA, pp. 4171-4186, 2019. +Dorfer, M., Kelz, R., and Widmer, G. Deep linear discriminant analysis. In International Conference on Learning Representations, San Juan, Puerto Rico, 2016. +Dosovitskiy, A., Beyer, L., Kolesnikov, A., Weissenborn, D., Zhai, X., Unterthiner, T., Dehghani, M., Minderer, M., Heigold, G., Gelly, S., Uszkoreit, J., and Houlsby, N. An image is worth 16x16 words: Transformers for image recognition at scale. In International Conference on Learning Representations, Virtual Event, 2021. +Goodfellow, I. 
J., Bengio, Y., and Courville, A. C. Deep Learning. MIT Press, 1 edition, 2016. +Gulati, A., Qin, J., Chiu, C., Parmar, N., Zhang, Y., Yu, J., Han, W., Wang, S., Zhang, Z., Wu, Y., and Pang, R. Conformer: Convolution-augmented transformer for speech recognition. In Annual Conference of the International Speech Communication Association, Virtual Event, Shanghai, China, pp. 5036-5040, 2020. +Hanson, S. J. and Pratt, L. Y. Comparing biases for minimal network construction with back-propagation. In Advances in Neural Information Processing Systems, Denver, Colorado, USA, pp. 177-185, 1988. +He, K., Zhang, X., Ren, S., and Sun, J. Deep residual learning for image recognition. In IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, pp. 770-778, 2016. + +He, K., Chen, X., Xie, S., Li, Y., Dollar, P., and Girshick, R. B. Masked autoencoders are scalable vision learners. In IEEE/CVF Conference on Computer Vision and Pattern Recognition, New Orleans, LA, USA, pp. 15979-15988, 2022. +Hochreiter, S. and Schmidhuber, J. Long short-term memory. Neural Computation, 9(8):1735-1780, 1997. +Jiang, G. and Wang, W. Error estimation based on variance analysis of k-fold cross-validation. Pattern Recognition, 69:94-106, 2017. +Kingma, D. P. and Ba, J. Adam: A method for stochastic optimization. In International Conference on Learning Representations, San Diego, CA, USA, 2015. +Krizhevsky, A., Sutskever, I., and Hinton, G. E. Imagenet classification with deep convolutional neural networks. In Annual Conference on Neural Information Processing Systems, Lake Tahoe, Nevada, USA, pp. 1106-1114, 2012. +LeCun, Y., Boser, B. E., Denker, J. S., Henderson, D., Howard, R. E., Hubbard, W. E., and Jackel, L. D. Handwritten digit recognition with a back-propagation network. In Advances in Neural Information Processing Systems, Denver, Colorado, USA, pp. 396-404, 1989. +LeCun, Y., Bengio, Y., and Hinton, G. E. Deep learning. Nature, 521(7553):436-444, 2015. +Loshchilov, I. 
and Hutter, F. Decoupled weight decay regularization. In International Conference on Learning Representations, New Orleans, LA, USA, 2019. +Qiu, X., Sun, T., Xu, Y., Shao, Y., Dai, N., and Huang, X. Pre-trained models for natural language processing: A survey. Science China Technological Sciences, 63(10): 1872-1897, 2020. +Rodriguez, J. D., Martinez, A. P., and Lozano, J. A. Sensitivity analysis of k-fold cross validation in prediction error estimation. IEEE Transactions on Pattern Analysis and Machine Intelligence, 32(3):569-575, 2010. +Rumelhart, D. E., Hinton, G. E., and Williams, R. J. Learning representations by back-propagating errors. Nature, 323(6088):533-536, 1986. +Russakovsky, O., Deng, J., Su, H., Krause, J., Satheesh, S., Ma, S., Huang, Z., Karpathy, A., Khosla, A., Bernstein, M. S., Berg, A. C., and Fei-Fei, L. Imagenet large scale visual recognition challenge. International Journal of Computer Vision, 115(3):211-252, 2015. +Sainath, T. N., Mohamed, A.-r., Kingsbury, B., and Ramabhadran, B. Deep convolutional neural networks for + +LVCSR. In IEEE International Conference on Acoustics, Speech and Signal Processing, Vancouver, BC, Canada, pp. 8614-8618, 2013. +Schölkopf, B., Smola, A. J., and Müller, K. Nonlinear component analysis as a kernel eigenvalue problem. *Neural Computation*, 10(5):1299-1319, 1998. +Simonyan, K. and Zisserman, A. Very deep convolutional networks for large-scale image recognition. In International Conference on Learning Representations, San Diego, CA, USA, 2015. +Sutskever, I., Vinyals, O., and Le, Q. V. Sequence to sequence learning with neural networks. In Annual Conference on Neural Information Processing Systems, Montreal, Quebec, Canada, pp. 3104-3112, 2014. +Vapnik, V. The Nature of Statistical Learning Theory. Springer, 2 edition, 1999. +Zhang, R. and Rudnicky, A. I. A large scale clustering scheme for kernel k-means. In International Conference on Pattern Recognition, Quebec, Canada, pp. 289-292, 2002. + +# A. 
Proofs + +# A.1. Proof of Theorem 3.1 + +Step 1: Some useful definitions and lemmas are given as follows. + +Definition A.1 (Page 34 of (Boyd & Vandenberghe, 2004)). Given a set $X \subset \mathbb{R}^d$ , the convex hull of $X$ is defined as + +$$ +ch (X) = \left\{\sum_ {i = 1} ^ {| X |} \alpha_ {i} \mathbf {x} _ {i} \mid \boldsymbol {\alpha} \in \triangle^ {| X |} \right\}. +$$ + +Obviously, $X \subseteq ch(X)$ , and $ch(X)$ is a convex set. + +Definition A.2. Given two sets $X_{+} = \{\mathbf{x}_{i}|\mathbf{x}_{i}\in \mathbb{R}^{d}\}_{i = 1}^{n_{+}}$ and $X_{-} = \{\mathbf{x}_{j}|\mathbf{x}_{j}\in \mathbb{R}^{d}\}_{j = n_{+} + 1}^{n_{+} + n_{-}}$ , $n = n_{+} + n_{-}$ . The distance in $\| \cdot \| _2$ between the convex hulls $ch(X_{+})$ and $ch(X_{-})$ , denoted as $dis(ch(X_{+}),ch(X_{-}))$ , is defined as follows + +$$ +\begin{array}{l} dis ^ {2} \left(ch \left(X _ {+}\right), ch \left(X _ {-}\right)\right) = \min _ {\mathbf {a}, \mathbf {b}} \| \mathbf {a} - \mathbf {b} \| _ {2} ^ {2} \\ s.t. \ \mathbf {a} \in ch (X _ {+}), \tag {15} \\ \quad \ \mathbf {b} \in ch (X _ {-}) \\ \end{array} +$$ + +Lemma A.3. The optimization problem (15) is a convex optimization problem. + +Proof. + +$$ +\begin{array}{l} dis ^ {2} \left(ch \left(X _ {+}\right), ch \left(X _ {-}\right)\right) = \min _ {\mathbf {a} \in ch \left(X _ {+}\right), \mathbf {b} \in ch \left(X _ {-}\right)} \| \mathbf {a} - \mathbf {b} \| _ {2} ^ {2} \\ = \min _ {\boldsymbol {\alpha} \in \triangle^ {n _ {+}}, \boldsymbol {\beta} \in \triangle^ {n _ {-}}} \| \mathbf {X} _ {+} \boldsymbol {\alpha} - \mathbf {X} _ {-} \boldsymbol {\beta} \| _ {2} ^ {2} \quad // \text {see Definition A.
1} \tag {16} \\ = \min _ {\boldsymbol {\alpha} \in \triangle^ {n _ {+}}, \boldsymbol {\beta} \in \triangle^ {n _ {-}}} \left(\boldsymbol {\alpha} ^ {T}, \boldsymbol {\beta} ^ {T}\right) \mathbf {K} \binom {\boldsymbol {\alpha}} {\boldsymbol {\beta}} \\ \end{array} +$$ + +where $\mathbf{K} = \left(\mathbf{X}_{+}, - \mathbf{X}_{-}\right)^{T}\left(\mathbf{X}_{+}, - \mathbf{X}_{-}\right)\in \mathbb{R}^{n\times n}$ + +Obviously, $\mathbf{K}$ is a symmetric positive semidefinite matrix. So (a1) $(\pmb{\alpha}^T,\pmb{\beta}^T)$ $\mathbf{K}\left( \begin{array}{c}\pmb {\alpha}\\ \pmb {\beta} \end{array} \right)$ is convex function w.r.t. $\left( \begin{array}{l}\pmb {\alpha}\\ \pmb {\beta} \end{array} \right)$ . + +At the same time, (a2) both $\triangle^{n_{+}}$ and $\triangle^{n_{-}}$ involve only linear constraints. + +Combining (a1) and (a2), the above formula is a convex optimization problem. + +Step 2: The margin of hyperplane described in Definition 2.2 is bounded by the distance between two convex hulls described in Definition A.2, see the following lemma. + +Lemma A.4. Given two sets $X_{+} = \{x_{i}|x_{i}\in \mathbb{R}^{d}\}_{i = 1}^{n_{+}}$ and $X_{-} = \{x_{j}|x_{j}\in \mathbb{R}^{d}\}_{j = n_{+} + 1}^{n_{+} + n_{-}}$ , $n = n_{+} + n_{-}$ . Let $dis(ch(X_{+}),ch(X_{-}))$ be the distance between $ch(X_{+})$ and $ch(X_{-})$ given in Definition A.2. If $dis(ch(X_{+}),ch(X_{-})) > 0$ , then $\exists \omega \in \mathbb{R}^d$ , $\| \omega \| _2 = 1$ , $\exists b\in \mathbb{R}$ , such that + +(i) $\forall \hat{\pmb{p}}\in ch(X_{+}),\pmb{\omega}^{T}\hat{\pmb{p}} +b\geq M,$ +(ii) $\forall \hat{\pmb{n}}\in ch(X_{-}),\pmb{\omega}^{T}\hat{\pmb{n}} +b\leq -M,$ + +where $M = \frac{dis(ch(X_{+}),ch(X_{-}))}{2}$ + +Proof. 
Let + +$$ +(\mathbf {p}, \mathbf {n}) = \underset {\mathbf {a} \in c h (X _ {+}), \mathbf {b} \in c h (X _ {-})} {\operatorname {a r g m i n}} \| \mathbf {a} - \mathbf {b} \| _ {2} ^ {2}, \tag {17a} +$$ + +$$ +\omega = \frac {\mathbf {p} - \mathbf {n}}{\| \mathbf {p} - \mathbf {n} \| _ {2}}, \tag {17b} +$$ + +$$ +b = - \omega^ {T} \frac {\mathbf {p} + \mathbf {n}}{2}. \tag {17c} +$$ + +Obviously, $\| \boldsymbol{\omega} \|_2 = 1$ . According to Definition A.2, we have $\text{dis}(ch(X_+), ch(X_-)) = \| \mathbf{p} - \mathbf{n} \|_2$ . So $M = \frac{1}{2} \text{dis}(ch(X_+), ch(X_-)) = \frac{1}{2} \| \mathbf{p} - \mathbf{n} \|_2$ + +(A) Proof of (i). + +$\forall \hat{\mathbf{p}}\in ch(X_{+})$ , if $\hat{\mathbf{p}} = \mathbf{p}$ , then + +$$ +\begin{array}{l} \boldsymbol {\omega} ^ {T} \hat {\mathbf {p}} + b = \boldsymbol {\omega} ^ {T} \mathbf {p} + b \\ = \boldsymbol {\omega} ^ {T} \mathbf {p} - \boldsymbol {\omega} ^ {T} \frac {\mathbf {p} + \mathbf {n}}{2} \\ = \omega^ {T} \left(\mathbf {p} - \frac {\mathbf {p} + \mathbf {n}}{2}\right) \\ = \boldsymbol {\omega} ^ {T} \left(\frac {\mathbf {p} - \mathbf {n}}{2}\right) \quad . 
\tag {18} \\ = \left(\frac {\mathbf {p} - \mathbf {n}}{\| \mathbf {p} - \mathbf {n} \| _ {2}}\right) ^ {T} \left(\frac {\mathbf {p} - \mathbf {n}}{2}\right) \\ = \frac {1}{2} \| \mathbf {p} - \mathbf {n} \| _ {2} = M \\ \end{array} +$$ + +$\forall \hat{\mathbf{p}}\in ch(X_{+})$ , if $\hat{\mathbf{p}}\neq \mathbf{p}$ , then + +$$ +\begin{array}{l} \boldsymbol {\omega} ^ {T} \hat {\mathbf {p}} + b = \boldsymbol {\omega} ^ {T} \hat {\mathbf {p}} - \boldsymbol {\omega} ^ {T} \frac {\mathbf {p} + \mathbf {n}}{2} \\ = \boldsymbol {\omega} ^ {T} \left(\hat {\mathbf {p}} - \frac {\mathbf {p} + \mathbf {n}}{2}\right) \\ = \boldsymbol {\omega} ^ {T} \left(\left(\hat {\mathbf {p}} - \frac {\mathbf {p} + \mathbf {n}}{2} - \frac {\mathbf {p} - \mathbf {n}}{2}\right) + \frac {\mathbf {p} - \mathbf {n}}{2}\right) \\ = \boldsymbol {\omega} ^ {T} \left(\hat {\mathbf {p}} - \mathbf {p} + \frac {\mathbf {p} - \mathbf {n}}{2}\right) \tag {19} \\ = \left(\frac {\mathbf {p} - \mathbf {n}}{\| \mathbf {p} - \mathbf {n} \| _ {2}}\right) ^ {T} \left(\hat {\mathbf {p}} - \mathbf {p} + \frac {\mathbf {p} - \mathbf {n}}{2}\right) \\ = \left(\frac {\mathbf {p} - \mathbf {n}}{\| \mathbf {p} - \mathbf {n} \| _ {2}}\right) ^ {T} (\hat {\mathbf {p}} - \mathbf {p}) + \frac {\| \mathbf {p} - \mathbf {n} \| _ {2}}{2} \\ = \left(\frac {\mathbf {p} - \mathbf {n}}{\| \mathbf {p} - \mathbf {n} \| _ {2}}\right) ^ {T} (\hat {\mathbf {p}} - \mathbf {p}) + M \\ \end{array} +$$ + +According to formula (19), in order to prove that $\boldsymbol{\omega}^T\hat{\mathbf{p}} + b \geq M$ , we need to prove that $(\mathbf{p} - \mathbf{n})^T (\hat{\mathbf{p}} - \mathbf{p}) \geq 0$ . Next, we prove $(\mathbf{p} - \mathbf{n})^T (\hat{\mathbf{p}} - \mathbf{p}) \geq 0$ by contradiction. + +Assume that $(\mathbf{p} - \mathbf{n})^T (\hat{\mathbf{p}} -\mathbf{p}) < 0$ . + +Let $\mathbf{q} = (1 - \theta)\mathbf{p} + \theta \hat{\mathbf{p}}$ , where $\theta \in (0,1)$ .
$\because \hat{\mathbf{p}} \in ch(X_{+})$ , $\mathbf{p} \in ch(X_{+})$ , and $ch(X_{+})$ is a convex set, $\therefore \mathbf{q} \in ch(X_{+})$ . And then, we have + +$$ +\begin{array}{l} \| \mathbf {n} - \mathbf {q} \| _ {2} ^ {2} = \| \mathbf {n} - (1 - \theta) \mathbf {p} - \theta \hat {\mathbf {p}} \| _ {2} ^ {2} \\ = \| (\mathbf {n} - \mathbf {p}) + \theta (\mathbf {p} - \hat {\mathbf {p}}) \| _ {2} ^ {2} \tag {20} \\ = \| \mathbf {n} - \mathbf {p} \| _ {2} ^ {2} + \theta^ {2} \| \mathbf {p} - \hat {\mathbf {p}} \| _ {2} ^ {2} + 2 \theta (\mathbf {n} - \mathbf {p}) ^ {T} (\mathbf {p} - \hat {\mathbf {p}}) \\ = \| \mathbf {p} - \mathbf {n} \| _ {2} ^ {2} + \theta^ {2} \| \hat {\mathbf {p}} - \mathbf {p} \| _ {2} ^ {2} + 2 \theta (\mathbf {p} - \mathbf {n}) ^ {T} (\hat {\mathbf {p}} - \mathbf {p}) \\ \end{array} +$$ + +Given formula (20), let $g(\theta) = \theta^2\|\hat{\mathbf{p}} - \mathbf{p}\|_2^2 + 2\theta (\mathbf{p} - \mathbf{n})^T (\hat{\mathbf{p}} - \mathbf{p})$ be the quadratic function w.r.t $\theta$ . And then we have + +(a1) $\because \hat{\mathbf{p}}\neq \mathbf{p},\therefore \| \hat{\mathbf{p}} -\mathbf{p}\| _2^2 >0,$ +(a2) $g(0) = 0$ +(a3) $g\left(-\frac{2(\mathbf{p} - \mathbf{n})^T(\hat{\mathbf{p}} - \mathbf{p})}{\|\hat{\mathbf{p}} - \mathbf{p}\|_2^2}\right) = 0,$ +(a4) $\because (\mathbf{p} - \mathbf{n})^T (\hat{\mathbf{p}} -\mathbf{p}) < 0,\therefore -\frac{2(\mathbf{p} - \mathbf{n})^T(\hat{\mathbf{p}} - \mathbf{p})}{\|\hat{\mathbf{p}} - \mathbf{p}\|_2^2} >0.$ + +Combining (a1)-(a4), we have $\forall \theta' \in (0, 1) \cap \left(0, -\frac{2(\mathbf{p} - \mathbf{n})^T (\hat{\mathbf{p}} - \mathbf{p})}{\|\hat{\mathbf{p}} - \mathbf{p}\|_2^2}\right)$ , $g(\theta') < 0$ . And let $\mathbf{q}' = (1 - \theta')\mathbf{p} + \theta' \hat{\mathbf{p}} \in ch(X_+)$ , then $\|\mathbf{n} - \mathbf{q}'\|_2^2 = \|\mathbf{p} - \mathbf{n}\|_2^2 + g(\theta') < \|\mathbf{p} - \mathbf{n}\|_2^2$ . 
That is to say, $\exists \mathbf{q}' \in ch(X_+)$ , such that $\|\mathbf{n} - \mathbf{q}'\|_2^2 < \|\mathbf{p} - \mathbf{n}\|_2^2$ , which conflicts with formula (17a). So the assumption is false, and we have (a5) $(\mathbf{p} - \mathbf{n})^T (\hat{\mathbf{p}} - \mathbf{p}) \geq 0$ . + +According to (a5) and formula (19), we have (a6) $\pmb{\omega}^T\hat{\mathbf{p}} + b \geq M$ . + +Combining formula (18) and (a6), (i) is proved. + +(B) Proof of (ii). + +$\forall \hat{\mathbf{n}}\in ch(X_{-})$ , if $\hat{\mathbf{n}} = \mathbf{n}$ , then + +$$ +\begin{array}{l} \boldsymbol {\omega} ^ {T} \hat {\mathbf {n}} + b = \boldsymbol {\omega} ^ {T} \mathbf {n} + b \\ = \boldsymbol {\omega} ^ {T} \mathbf {n} - \boldsymbol {\omega} ^ {T} \frac {\mathbf {p} + \mathbf {n}}{2} \\ = \boldsymbol {\omega} ^ {T} \left(\mathbf {n} - \frac {\mathbf {p} + \mathbf {n}}{2}\right) \\ = - \boldsymbol {\omega} ^ {T} \left(\frac {\mathbf {p} - \mathbf {n}}{2}\right) \tag {21} \\ = - \left(\frac {\mathbf {p} - \mathbf {n}}{\| \mathbf {p} - \mathbf {n} \| _ {2}}\right) ^ {T} \left(\frac {\mathbf {p} - \mathbf {n}}{2}\right) \\ = - \frac {1}{2} \| \mathbf {p} - \mathbf {n} \| _ {2} = - M \\ \end{array} +$$ + +$\forall \hat{\mathbf{n}}\in ch(X_{-})$ , if $\hat{\mathbf{n}}\neq \mathbf{n}$ , then + +$$ +\begin{array}{l} \boldsymbol {\omega} ^ {T} \hat {\mathbf {n}} + b = \boldsymbol {\omega} ^ {T} \hat {\mathbf {n}} - \boldsymbol {\omega} ^ {T} \frac {\mathbf {p} + \mathbf {n}}{2} \\ = \boldsymbol {\omega} ^ {T} \left(\hat {\mathbf {n}} - \frac {\mathbf {p} + \mathbf {n}}{2}\right) \\ = \boldsymbol {\omega} ^ {T} \left(\left(\hat {\mathbf {n}} - \frac {\mathbf {p} + \mathbf {n}}{2} - \frac {\mathbf {n} - \mathbf {p}}{2}\right) + \frac {\mathbf {n} - \mathbf {p}}{2}\right) \\ = \boldsymbol {\omega} ^ {T} \left(\hat {\mathbf {n}} - \mathbf {n} + \frac {\mathbf {n} - \mathbf {p}}{2}\right) \tag {22} \\ = \left(\frac {\mathbf {p} - \mathbf {n}}{\| \mathbf {p} - \mathbf {n} \| _ {2}}\right) ^ {T} \left(\hat {\mathbf {n}} - \mathbf {n} + \frac {\mathbf {n} - \mathbf {p}}{2}\right) \\ = \left(\frac {\mathbf {p} - \mathbf {n}}{\| \mathbf {p} - \mathbf {n} \| _ {2}}\right) ^ {T} (\hat {\mathbf {n}} - \mathbf {n}) - \frac {\| \mathbf {p} - \mathbf {n} \| _ {2}}{2} \\ = \left(\frac {\mathbf {p} - \mathbf {n}}{\| \mathbf {p} - \mathbf {n} \| _ {2}}\right) ^ {T} (\hat {\mathbf {n}} - \mathbf {n}) - M \\ \end{array} +$$ + +According to formula (22), in order to prove that $\boldsymbol{\omega}^T\hat{\mathbf{n}} + b \leq -M$ , we need to prove that $(\mathbf{p} - \mathbf{n})^T (\hat{\mathbf{n}} - \mathbf{n}) \leq 0$ . Next, we prove $(\mathbf{p} - \mathbf{n})^T (\hat{\mathbf{n}} - \mathbf{n}) \leq 0$ by contradiction. + +Assume that $(\mathbf{p} - \mathbf{n})^T (\hat{\mathbf{n}} - \mathbf{n}) > 0$ . + +Let $\mathbf{o} = (1 - \theta)\mathbf{n} + \theta \hat{\mathbf{n}}$ , where $\theta \in (0,1)$ . $\because \hat{\mathbf{n}} \in ch(X_{-})$ , $\mathbf{n} \in ch(X_{-})$ , and $ch(X_{-})$ is a convex set, $\therefore \mathbf{o} \in ch(X_{-})$ . And then, we have + +$$ +\begin{array}{l} \left\| \mathbf {p} - \mathbf {o} \right\| _ {2} ^ {2} = \left\| \mathbf {p} - (1 - \theta) \mathbf {n} - \theta \hat {\mathbf {n}} \right\| _ {2} ^ {2} \\ = \left\| (\mathbf {p} - \mathbf {n}) + \theta (\mathbf {n} - \hat {\mathbf {n}}) \right\| _ {2} ^ {2} \tag {23} \\ = \| \mathbf {p} - \mathbf {n} \| _ {2} ^ {2} + \theta^ {2} \| \mathbf {n} - \hat {\mathbf {n}} \| _ {2} ^ {2} + 2 \theta (\mathbf {p} - \mathbf {n}) ^ {T} (\mathbf {n} - \hat {\mathbf {n}}) \\ \end{array} +$$ + +Given formula (23), let $g(\theta) = \theta^2\| \mathbf{n} - \hat{\mathbf{n}}\| _2^2 +2\theta (\mathbf{p} - \mathbf{n})^T (\mathbf{n} - \hat{\mathbf{n}})$ be the quadratic function w.r.t. $\theta$ .
And then we have + +(b1) $\because \hat{\mathbf{n}}\neq \mathbf{n},\therefore \| \mathbf{n} - \hat{\mathbf{n}}\| _2^2 >0,$ +(b2) $g(0) = 0,$ +(b3) $g\left(-\frac{2(\mathbf{p} - \mathbf{n})^T(\mathbf{n} - \hat{\mathbf{n}})}{\|\mathbf{n} - \hat{\mathbf{n}}\|_2^2}\right) = 0,$ +(b4) $\because (\mathbf{p} - \mathbf{n})^T (\hat{\mathbf{n}} -\mathbf{n}) > 0,\therefore (\mathbf{p} - \mathbf{n})^T (\mathbf{n} - \hat{\mathbf{n}}) < 0$ and $-\frac{2(\mathbf{p} - \mathbf{n})^T(\mathbf{n} - \hat{\mathbf{n}})}{\|\mathbf{n} - \hat{\mathbf{n}}\|_2^2} >0.$ + +Combining (b1)-(b4), we have $\forall \theta' \in (0, 1) \cap \left(0, -\frac{2(\mathbf{p} - \mathbf{n})^T(\mathbf{n} - \hat{\mathbf{n}})}{\|\mathbf{n} - \hat{\mathbf{n}}\|_2^2}\right)$ , $g(\theta') < 0$ . And let $\mathbf{o}' = (1 - \theta')\mathbf{n} + \theta' \hat{\mathbf{n}} \in ch(X_{-})$ , then $\| \mathbf{p} - \mathbf{o}'\|_2^2 = \| \mathbf{p} - \mathbf{n}\|_2^2 + g(\theta') < \| \mathbf{p} - \mathbf{n}\|_2^2$ . That is to say, $\exists \mathbf{o}' \in ch(X_{-})$ , such that $\| \mathbf{p} - \mathbf{o}'\|_2^2 < \| \mathbf{p} - \mathbf{n}\|_2^2$ , which conflicts with formula (17a). So the assumption is false, and we have (b5) $(\mathbf{p} - \mathbf{n})^T (\hat{\mathbf{n}} - \mathbf{n}) \leq 0$ . + +According to (b5) and formula (22), we have (b6) $\boldsymbol{\omega}^T\hat{\mathbf{n}} + b \leq -M$ . + +Combining formula (21) and (b6), (ii) is proved. + +Combining (A) and (B), the lemma is proved. + +Remark A.5. According to Lemma A.4, if $\text{dis} \left( \text{ch} \left( \varphi(X_{+}) \right), \text{ch} \left( \varphi(X_{-}) \right) \right) > 0$ , then $\exists h \in \mathcal{H}(\varphi(X_{+}), \varphi(X_{-}))$ , such that $\forall \mathbf{x}_i \in X_+$ , $h(\varphi(\mathbf{x}_i)) > 0$ , and $\forall \mathbf{x}_j \in X_{-}$ , $h(\varphi(\mathbf{x}_j)) < 0$ , i.e., $\frac{n_{err}}{n} = 0$ . That is to say, the constraint $\frac{n_{err}}{n} \leq \varepsilon$ in formula (7) can be satisfied naturally. + +Step 3: Based on the above conclusions, the proof of Theorem 3.1 is given as follows.
+ +Proof. (a1) + +$$ +\begin{array}{l} M^2\left(X_+, X_-\right) = \sup_{h \in \mathcal{H}\left(X_+, X_-\right)} g_{\mathrm{m}}^2(h) \\ \geq \sup_{h = hp(\boldsymbol{\omega}, b)} g_{\mathrm{m}}^2(h) \quad // \ \boldsymbol{\omega}, b \ \text{are defined in formula (17)} \\ \geq \frac{1}{4} dis^2\left(ch\left(X_+\right), ch\left(X_-\right)\right) \quad // \ \text{according to Lemma A.4} \\ = \frac{1}{4} \min_{\boldsymbol{\alpha} \in \triangle^{n_+}, \boldsymbol{\beta} \in \triangle^{n_-}} \|\mathbf{X}_+\boldsymbol{\alpha} - \mathbf{X}_-\boldsymbol{\beta}\|_2^2. \quad // \ \text{according to Lemma A.3} \\ \end{array} +$$ + +(a2) According to Lemma A.3, minimizing $\|\mathbf{X}_{+}\pmb{\alpha} - \mathbf{X}_{-}\pmb{\beta}\|_{2}^{2}$ over $\pmb{\alpha} \in \triangle^{n_+}$ and $\pmb{\beta} \in \triangle^{n_-}$ is a convex optimization problem. + +Combining (a1) and (a2), the theorem is proved. + +# A.2. Proof of Theorem 3.2 + +Step 1: A useful definition is given as follows. + +Definition A.6 (Page 30 of (Boyd & Vandenberghe, 2004)). Given $\mathbf{c} \in \mathbb{R}^d$ and $r > 0$, the ball in norm $\|\cdot\|_2$ is defined as $\mathrm{ball}(\mathbf{c}, r) = \{\mathbf{v} \mid \mathbf{v} \in \mathbb{R}^d, \| \mathbf{v} - \mathbf{c}\|_2 \leq r\}$, where $\mathbf{c}$ is called the center of the ball and $r$ is called the radius of the ball. Obviously, $\mathrm{ball}(\mathbf{c}, r)$ is a convex set. + +Step 2: Based on Definition A.6, the radius $R(X)$ can be defined as follows. + +Definition A.7. Given a set $X \subset \mathbb{R}^d$, the center and radius of $X$, denoted as $C(X)$ and $R(X)$, are defined as the center and radius of the minimum ball that includes $X$, respectively, i.e., + +$$ +(C(X), R(X)) = \underset{\mathbf{c} \in \mathbb{R}^d, r \in \mathbb{R}}{\arg\inf}\ r \quad \text{s.t.}\ X \subseteq \operatorname{ball}(\mathbf{c}, r).
\tag {24} +$$ + +Formula (24) is a non-trivial optimization problem, because the constraints $\mathbf{c} \in \mathbb{R}^d$ and $X \subseteq \operatorname{ball}(\mathbf{c}, r)$ are intractable for numerical optimization. + +Step 3: To solve this problem, the properties of the optimal solution of formula (24) need to be revealed; see the following lemma. + +Lemma A.8. Given a set $X \subset \mathbb{R}^d$, the center of $X$ belongs to the convex hull of $X$, i.e., if + +$$ +\left(\boldsymbol{c}^*, r^*\right) = \underset{\boldsymbol{c} \in \mathbb{R}^d, r \in \mathbb{R}}{\arg\inf}\ r \quad \text{s.t.}\ X \subseteq \operatorname{ball}(\boldsymbol{c}, r), \tag{25} +$$ + +then $\boldsymbol{c}^* \in ch(X)$. + +Proof. By contradiction. Assume that $\mathbf{c}^* \notin ch(X)$. + +Let + +$$ +\mathbf{x}^* = \underset{\mathbf{x} \in X}{\arg\inf}\ \|\mathbf{x} - \mathbf{c}^*\|_2, \tag{26a} +$$ + +$$ +\mathbf{c} = \frac{\mathbf{x}^* + \mathbf{c}^*}{2}, \tag{26b} +$$ + +$$ +\boldsymbol{\omega} = \frac{\mathbf{x}^* - \mathbf{c}^*}{\|\mathbf{x}^* - \mathbf{c}^*\|_2}, \tag{26c} +$$ + +$$ +b_1 = -\boldsymbol{\omega}^T \mathbf{c}, \tag{26d} +$$ + +$$ +b_2 = -\boldsymbol{\omega}^T \mathbf{c}^*.
\tag {26e} +$$ + +$\because \mathbf{c}^{*}\notin ch(X),\therefore \mathbf{c}^{*}\notin X,\therefore \mathbf{x}^{*}\neq \mathbf{c}^{*},\therefore \| \mathbf{x}^{*} - \mathbf{c}^{*}\|_{2} > 0.$ Then, we have (a1) $\| \boldsymbol {\omega}\| _2 = 1$, + +(a2) + +$$ +\begin{array}{l} b_2 - b_1 = -\boldsymbol{\omega}^T \mathbf{c}^* + \boldsymbol{\omega}^T \mathbf{c} \\ = \boldsymbol{\omega}^T (\mathbf{c} - \mathbf{c}^*) \\ = \boldsymbol{\omega}^T \left(\frac{\mathbf{x}^* + \mathbf{c}^*}{2} - \mathbf{c}^*\right) \\ = \left(\frac{\mathbf{x}^* - \mathbf{c}^*}{\|\mathbf{x}^* - \mathbf{c}^*\|_2}\right)^T \left(\frac{\mathbf{x}^* - \mathbf{c}^*}{2}\right) \\ = \frac{\|\mathbf{x}^* - \mathbf{c}^*\|_2}{2}. \\ \end{array} +$$ + +$\forall \mathbf{x} \in X, \because X \subseteq ch(X), \therefore \mathbf{x} \in ch(X)$. At the same time, according to formula (25), $\mathbf{x} \in ball(\mathbf{c}^*, r^*)$. And then we have: (a3) the distance from $\mathbf{x}$ to hyperplane $hp(\boldsymbol{\omega}, b_1)$ is + +$$ +\begin{array}{l} dis\left(\mathbf{x}, hp\left(\boldsymbol{\omega}, b_1\right)\right) = \frac{\left|\boldsymbol{\omega}^T \mathbf{x} + b_1\right|}{\|\boldsymbol{\omega}\|_2} \\ = \frac{\boldsymbol{\omega}^T \mathbf{x} + b_1}{\|\boldsymbol{\omega}\|_2} \quad // \ \text{according to Lemma A.4} \\ = \boldsymbol{\omega}^T \mathbf{x} + b_1, \quad // \because (a1) \\ \end{array} +$$ + +(a4) the distance from $\mathbf{x}$ to hyperplane $hp(\boldsymbol{\omega}, b_2)$ is + +$$ +\begin{array}{l} dis\left(\mathbf{x}, hp\left(\boldsymbol{\omega}, b_2\right)\right) = \frac{\left|\boldsymbol{\omega}^T \mathbf{x} + b_2\right|}{\|\boldsymbol{\omega}\|_2} \\ = \frac{\boldsymbol{\omega}^T \mathbf{x} + b_2}{\|\boldsymbol{\omega}\|_2} \quad // \ \text{according to Lemma A.}
4 \\ = \boldsymbol{\omega}^T \mathbf{x} + b_2 \quad // \because (a1) \\ = \boldsymbol{\omega}^T \mathbf{x} + b_1 + \frac{\|\mathbf{x}^* - \mathbf{c}^*\|_2}{2} \quad // \because (a2) \\ = dis\left(\mathbf{x}, hp\left(\boldsymbol{\omega}, b_1\right)\right) + \frac{\|\mathbf{x}^* - \mathbf{c}^*\|_2}{2}, \quad // \because (a3) \\ \end{array} +$$ + +(a5) the projection of $\mathbf{x}$ on hyperplane $hp(\boldsymbol{\omega}, b_1)$ is + +$$ +\begin{array}{l} pro\left(\mathbf{x}, hp\left(\boldsymbol{\omega}, b_1\right)\right) = \mathbf{x} - dis\left(\mathbf{x}, hp\left(\boldsymbol{\omega}, b_1\right)\right) \frac{\boldsymbol{\omega}}{\|\boldsymbol{\omega}\|_2} \\ = \mathbf{x} - dis\left(\mathbf{x}, hp\left(\boldsymbol{\omega}, b_1\right)\right) \boldsymbol{\omega}, \quad // \because (a1) \\ \end{array} +$$ + +(a6) the projection of $\mathbf{x}$ on hyperplane $hp(\boldsymbol{\omega}, b_2)$ is + +$$ +\begin{array}{l} pro\left(\mathbf{x}, hp\left(\boldsymbol{\omega}, b_2\right)\right) = \mathbf{x} - dis\left(\mathbf{x}, hp\left(\boldsymbol{\omega}, b_2\right)\right) \frac{\boldsymbol{\omega}}{\|\boldsymbol{\omega}\|_2} \\ = \mathbf{x} - dis(\mathbf{x}, hp(\boldsymbol{\omega}, b_2)) \boldsymbol{\omega} \quad // \because (a1) \\ = \mathbf{x} - dis(\mathbf{x}, hp(\boldsymbol{\omega}, b_1)) \boldsymbol{\omega} - \frac{\|\mathbf{x}^* - \mathbf{c}^*\|_2}{2} \boldsymbol{\omega}, \quad // \because (a4) \\ \end{array} +$$ + +(a7) + +$$ +\begin{array}{l} (pro(\mathbf{x}, hp(\boldsymbol{\omega}, b_1)) - \mathbf{c}) - (pro(\mathbf{x}, hp(\boldsymbol{\omega}, b_2)) - \mathbf{c}^*) \\ = \left(\mathbf{x} - dis\left(\mathbf{x}, hp\left(\boldsymbol{\omega}, b_1\right)\right) \boldsymbol{\omega} - \mathbf{c}\right) - \left(\mathbf{x} - dis\left(\mathbf{x}, hp\left(\boldsymbol{\omega}, b_1\right)\right)
\boldsymbol{\omega} - \frac{\|\mathbf{x}^* - \mathbf{c}^*\|_2}{2} \boldsymbol{\omega} - \mathbf{c}^*\right) \quad // \because (a5) \ \text{and} \ (a6) \\ = \frac{\|\mathbf{x}^* - \mathbf{c}^*\|_2}{2} \boldsymbol{\omega} - \mathbf{c} + \mathbf{c}^* \\ = \frac{\|\mathbf{x}^* - \mathbf{c}^*\|_2}{2} \frac{\mathbf{x}^* - \mathbf{c}^*}{\|\mathbf{x}^* - \mathbf{c}^*\|_2} - \frac{\mathbf{x}^* + \mathbf{c}^*}{2} + \mathbf{c}^* \\ = \frac{\mathbf{x}^* - \mathbf{c}^*}{2} - \frac{\mathbf{x}^* + \mathbf{c}^*}{2} + \mathbf{c}^* \\ = \mathbf{0}, \\ \end{array} +$$ + +(a8) + +$$ +\begin{array}{l} \|\mathbf{x} - \mathbf{c}\|_2^2 = \|\mathbf{x} - pro\left(\mathbf{x}, hp\left(\boldsymbol{\omega}, b_1\right)\right)\|_2^2 + \|pro\left(\mathbf{x}, hp\left(\boldsymbol{\omega}, b_1\right)\right) - \mathbf{c}\|_2^2 \\ = dis^2\left(\mathbf{x}, hp\left(\boldsymbol{\omega}, b_1\right)\right) + \|pro\left(\mathbf{x}, hp\left(\boldsymbol{\omega}, b_1\right)\right) - \mathbf{c}\|_2^2 \quad // \because (a5) \\ < \left(dis\left(\mathbf{x}, hp(\boldsymbol{\omega}, b_1)\right) + \frac{\|\mathbf{x}^* - \mathbf{c}^*\|_2}{2}\right)^2 + \|pro(\mathbf{x}, hp(\boldsymbol{\omega}, b_2)) - \mathbf{c}^*\|_2^2 \quad // \because \|\mathbf{x}^* - \mathbf{c}^*\|_2 > 0 \ \text{and} \ (a7) \\ = dis^2\left(\mathbf{x}, hp\left(\boldsymbol{\omega}, b_2\right)\right) + \|pro\left(\mathbf{x}, hp\left(\boldsymbol{\omega}, b_2\right)\right) - \mathbf{c}^*\|_2^2 \quad // \because (a4) \\ = \|\mathbf{x} - pro(\mathbf{x}, hp(\boldsymbol{\omega}, b_2))\|_2^2 + \|pro(\mathbf{x}, hp(\boldsymbol{\omega}, b_2)) - \mathbf{c}^*\|_2^
{2} \\ = \|\mathbf{x} - \mathbf{c}^*\|_2^2 \\ \leq \left(r^*\right)^2. \quad // \ \text{see Definition A.6} \\ \end{array} +$$ + +That is to say, $\exists \mathbf{c} \neq \mathbf{c}^*$, such that $\forall \mathbf{x} \in X \subseteq ch(X)$, $\|\mathbf{x} - \mathbf{c}\|_2 < r^*$, which conflicts with formula (25). So the assumption is false, and then we have $\mathbf{c}^* \in ch(X)$. + +Step 4: Based on the above conclusion, the proof of Theorem 3.2 is given as follows. + +Proof. Given a set $X \subset \mathbb{R}^d$, + +$$ +\begin{array}{l} (C(X), R(X)) = \underset{\mathbf{c} \in \mathbb{R}^d, r \in \mathbb{R}}{\arg\inf}\ r^2 \quad \text{s.t.}\ \forall \mathbf{x} \in X, \mathbf{x} \in \operatorname{ball}(\mathbf{c}, r) \quad // \ \text{see Definition A.7} \\ = \underset{\mathbf{c} \in \mathbb{R}^d, r \in \mathbb{R}}{\arg\inf}\ r^2 \quad \text{s.t.}\ \forall \mathbf{x} \in X, \|\mathbf{x} - \mathbf{c}\|_2^2 \leq r^2 \quad // \ \text{see Definition A.6} \tag{27} \\ = \underset{\mathbf{c} \in ch(X), r \in \mathbb{R}}{\arg\inf}\ r^2 \quad \text{s.t.}\ \forall \mathbf{x} \in X, \|\mathbf{x} - \mathbf{c}\|_2^2 \leq r^2. \quad // \ \text{according to Lemma A.8} \\ \end{array} +$$ + +Based on formula (27) and Definition A.1, let + +$$ +\boldsymbol{\theta}^* = \underset{\boldsymbol{\theta} \in \triangle^{|X|}}{\arg\inf}\ g(\boldsymbol{\theta}) \quad \text{s.t.}\ g(\boldsymbol{\theta}) = \sup_{\mathbf{x} \in X} \|\mathbf{x} - \mathbf{X}\boldsymbol{\theta}\|_2^2, \tag{28} +$$ + +then $C(X) = \mathbf{X}\pmb{\theta}^{*}$ and $R^2(X) = g(\pmb{\theta}^*) = \sup_{\mathbf{x}\in X}\|\mathbf{x} - \mathbf{X}\pmb{\theta}^*\|_2^2$. That is to say, + +(a1) optimization problem (27) can be solved equivalently by solving optimization problem (28).
+ +(a2) $\forall \mathbf{x}_i \in X$, + +$$ +g_i(\boldsymbol{\theta}) = \left\|\mathbf{x}_i - \mathbf{X}\boldsymbol{\theta}\right\|_2^2 = \boldsymbol{\theta}^T \mathbf{X}^T \mathbf{X} \boldsymbol{\theta} - 2\mathbf{x}_i^T \mathbf{X} \boldsymbol{\theta} + \mathbf{x}_i^T \mathbf{x}_i +$$ + +is a quadratic function w.r.t. $\pmb{\theta}$. In $g_i(\pmb{\theta})$, $\because \mathbf{X}^T\mathbf{X}$ is a symmetric positive semidefinite matrix, $\therefore g_i(\pmb{\theta})$ is a convex function w.r.t. $\pmb{\theta}$. + +Because the point-wise supremum over an infinite set is an operation that preserves convexity (a formal description can be found on Page 81 of (Boyd & Vandenberghe, 2004)), combining (a2), we have + +(a3) given $X$, + +$$ +g(\boldsymbol{\theta}) = \sup_{\mathbf{x}_i \in X} g_i(\boldsymbol{\theta}) = \sup_{\mathbf{x} \in X} \|\mathbf{x} - \mathbf{X}\boldsymbol{\theta}\|_2^2 +$$ + +is also a convex function w.r.t. $\pmb{\theta}$. + +At the same time, (a4) $\triangle^{|X|}$ only involves linear constraints. + +Combining (a3) and (a4), we have (a5) formula (28) is a convex optimization problem. + +Combining (a1) and (a5), the theorem is proved. + +# A.3. Proof of Formula (11) + +Proof. Given a set of positive samples $X_{+} = \{\mathbf{x}_{i} \mid \mathbf{x}_{i}\in \mathbb{R}^{d}, i = 1,2,\dots,n_{+}\}$, a set of negative samples $X_{-} = \{\mathbf{x}_{j} \mid \mathbf{x}_{j}\in \mathbb{R}^{d}, j = n_{+}+1, n_{+}+2, \dots, n_{+}+n_{-}\}$, $n = n_{-} + n_{+}$, and a kernel function $\varphi :\mathbb{R}^d\to \mathbb{R}^{d'}$.
+ +Let + +$$ +\varphi\left(\mathbf{X}_+\right) = \left(\varphi\left(\mathbf{x}_1\right), \varphi\left(\mathbf{x}_2\right), \dots, \varphi\left(\mathbf{x}_{n_+}\right)\right) \in \mathbb{R}^{d' \times n_+}, \tag{29a} +$$ + +$$ +\varphi\left(\mathbf{X}_-\right) = \left(\varphi\left(\mathbf{x}_{n_+ + 1}\right), \varphi\left(\mathbf{x}_{n_+ + 2}\right), \dots, \varphi\left(\mathbf{x}_{n_+ + n_-}\right)\right) \in \mathbb{R}^{d' \times n_-}, \tag{29b} +$$ + +$$ +\varphi(\mathbf{X}) = \left(\varphi\left(\mathbf{X}_+\right), \varphi\left(\mathbf{X}_-\right)\right) \in \mathbb{R}^{d' \times n}, \tag{29c} +$$ + +$$ +\mathbf{K}^{\varphi} = \varphi(\mathbf{X})^T \varphi(\mathbf{X}) \in \mathbb{R}^{n \times n}, \tag{29d} +$$ + +$$ +\hat{\mathbf{K}}^{\varphi} = \left(\begin{array}{cc} \varphi(\mathbf{X}_+)^T \varphi(\mathbf{X}_+), & -\varphi(\mathbf{X}_+)^T \varphi(\mathbf{X}_-) \\ -\varphi(\mathbf{X}_-)^T \varphi(\mathbf{X}_+), & \varphi(\mathbf{X}_-)^T \varphi(\mathbf{X}_-) \end{array}\right) \in \mathbb{R}^{n \times n}. \tag{29e} +$$ + +Obviously, the matrices defined in formulas (29d) and (29e) are both symmetric positive semidefinite.
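For concreteness, both Gram matrices can be assembled from kernel evaluations alone, without materializing $\varphi$ explicitly. The following numpy sketch assumes an RBF kernel purely for illustration (the proof does not fix a particular kernel here), and the helper names `rbf_kernel` and `gram_matrices` are ours, not the paper's:

```python
import numpy as np

def rbf_kernel(A, B, gamma=1.0):
    """Pairwise RBF kernel matrix: entry (i, j) = exp(-gamma * ||a_i - b_j||^2),
    i.e. phi(a_i)^T phi(b_j) for the feature map phi induced by this kernel."""
    sq = np.sum(A**2, axis=1)[:, None] + np.sum(B**2, axis=1)[None, :] - 2.0 * A @ B.T
    return np.exp(-gamma * np.maximum(sq, 0.0))

def gram_matrices(X_pos, X_neg, gamma=1.0):
    """Build K^phi (formula (29d)) and K-hat^phi (formula (29e)) for the
    stacked sample matrix (positives first, negatives second)."""
    X = np.vstack([X_pos, X_neg])
    K = rbf_kernel(X, X, gamma)        # K^phi = phi(X)^T phi(X)
    n_pos = X_pos.shape[0]
    K_hat = K.copy()
    K_hat[:n_pos, n_pos:] *= -1.0      # -phi(X+)^T phi(X-) block
    K_hat[n_pos:, :n_pos] *= -1.0      # -phi(X-)^T phi(X+) block
    return K, K_hat
```

Since $\hat{\mathbf{K}}^{\varphi} = \mathbf{D}\mathbf{K}^{\varphi}\mathbf{D}$ with $\mathbf{D} = \operatorname{diag}(\mathbf{I}_{n_+}, -\mathbf{I}_{n_-})$, the positive semidefiniteness noted above is preserved by this congruence.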
+ +(a1) For the radius problem, + +$$ +\begin{array}{l} g_1(\varphi) = \min_{\mathbf{u}} \max_{\mathbf{x}_i \in X} \|\varphi(\mathbf{x}_i) - \varphi(\mathbf{X})\sigma(\mathbf{u})\|_2^2 \quad // \ \text{see formula (10)} \\ = \min_{\mathbf{u}} \max_{\mathbf{x}_i \in X} \sigma(\mathbf{u})^T \varphi(\mathbf{X})^T \varphi(\mathbf{X}) \sigma(\mathbf{u}) - 2\varphi(\mathbf{x}_i)^T \varphi(\mathbf{X}) \sigma(\mathbf{u}) + \varphi(\mathbf{x}_i)^T \varphi(\mathbf{x}_i) \tag{30} \\ = \min_{\mathbf{u}} \max_{\mathbf{x}_i \in X} \sigma(\mathbf{u})^T \mathbf{K}^{\varphi} \sigma(\mathbf{u}) - 2\mathbf{K}_{[i,:]}^{\varphi} \sigma(\mathbf{u}) + k_{ii}^{\varphi}. \quad // \ \text{see formula (29d)} \\ \end{array} +$$ + +where $\mathbf{K}_{[i,:]}^{\varphi}$ is the $i$-th row of matrix $\mathbf{K}^{\varphi}$. + +(a2) For the margin problem, + +$$ +\begin{array}{l} g_2(\varphi) = \min_{\mathbf{v}, \mathbf{w}} \|\varphi(\mathbf{X}_+)\sigma(\mathbf{v}) - \varphi(\mathbf{X}_-)\sigma(\mathbf{w})\|_2^2 \quad // \ \text{see formula (10)} \\ = \min_{\mathbf{v}, \mathbf{w}} \sigma(\mathbf{v})^T \varphi(\mathbf{X}_+)^T \varphi(\mathbf{X}_+) \sigma(\mathbf{v}) - 2\sigma(\mathbf{v})^T \varphi(\mathbf{X}_+)^T \varphi(\mathbf{X}_-) \sigma(\mathbf{w}) + \sigma(\mathbf{w})^T \varphi(\mathbf{X}_-)^T \varphi(\mathbf{X}_-) \sigma(\mathbf{w}) \\ = \min_{\mathbf{v}, \mathbf{w}} \binom{\sigma(\mathbf{v})}{\sigma(\mathbf{w})}^T \left(\begin{array}{cc} \varphi\left(\mathbf{X}_+\right)^T \varphi\left(\mathbf{X}_+\right), & -\varphi\left(\mathbf{X}_+\right)^T \varphi\left(\mathbf{X}_-\right) \\ -\varphi\left(\mathbf{X}_-\right)^T \varphi\left(\mathbf{X}_+\right), & \varphi\left(\mathbf{X}_-\right)^T \varphi\left(\mathbf{X}_-\right) \end{array}\right) \binom{\sigma(\mathbf{v})}{\sigma(\mathbf{w})} \tag{31} \\ = \min_{\mathbf{v}, \mathbf{w}} \binom{\sigma(\mathbf{v})}{\sigma(\mathbf{w})}^T \hat{\mathbf{K}}^{\varphi} \binom{\sigma(\mathbf{v})}{\sigma(\mathbf{w})}. \\ \end{array} +$$ + +Combining (a1) and (a2), formula (11) is proved. + +# B. Algorithms + +# B.1. Algorithms for Solving Formula (10) + +To solve the optimization problems $g_{1}(\varphi)$ and $g_{2}(\varphi)$ in formula (10), Algorithms 1 and 2 are given as follows. + +# B.1.1. CONVERGENCE ANALYSIS + +In Algorithms 1 and 2, the initial solution and the solutions obtained at each iteration are all feasible, because the output of the softmax activation function $\sigma(\cdot)$ always forms valid convex-combination coefficients. So Algorithms 1 and 2 are both interior-point methods, and their convergence can be guaranteed (Boyd & Vandenberghe, 2004). + +# B.1.2. EFFICIENCY ANALYSIS + +The efficiency of Algorithms 1 and 2 can be analyzed from the following three aspects. First, the constant vector $\sigma((1,1,\dots,1)^T)$ with adaptive length is used as the initial point, which is a strictly feasible solution; the computational complexity of calculating it is of constant order. Second, an accelerated first-order algorithm, e.g., Adam (Kingma & Ba, 2015), can be used to update the optimization variables. Third, these algorithms can easily leverage high-performance computing hardware, e.g., GPUs, for acceleration. In this paper, we utilize an NVIDIA GeForce RTX 3090 for acceleration.
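The feasibility claim above is easy to check numerically: the softmax image of any unconstrained vector is strictly positive and sums to one, i.e., it lies in the interior of the simplex, so the iterates never need a projection step. A minimal numpy sketch (the helper name is ours):

```python
import numpy as np

def softmax(u):
    # Numerically stable softmax: maps any u in R^n into the
    # strict interior of the probability simplex.
    e = np.exp(u - np.max(u))
    return e / e.sum()

# Whatever values the optimizer assigns to the unconstrained vector u,
# sigma(u) is strictly positive and sums to one, i.e. it is always a
# valid vector of convex-combination coefficients: no projection needed.
for u in (np.zeros(4), np.array([10.0, -3.0, 0.5, 7.0])):
    theta = softmax(u)
    assert np.all(theta > 0) and abs(theta.sum() - 1.0) < 1e-12
```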
+ +Algorithm 1 Solving $g_{1}(\varphi)$ in Formula (10) +Input: The set of training samples $X = \{\mathbf{x}_i\}_{i=1}^n$, the representation learning function $\varphi$, the maximum number of iterations $T$, the error tolerance parameter $\epsilon$. +Output: The optimal value $g^*$ and optimal solution $\mathbf{u}^*$. +1: Initialize $\mathbf{u} \gets (1,1,\dots,1)^T \in \mathbb{R}^n$, $t \gets 0$, err $\gets +\infty$. +2: if $\varphi$ is a kernel function then +3: Compute kernel matrix $\mathbf{K}^\varphi$ by formula (29d). +4: else +5: Compute latent representation matrix $\varphi(\mathbf{X}) \gets (\varphi(\mathbf{x}_1), \varphi(\mathbf{x}_2), \dots, \varphi(\mathbf{x}_n))$. +6: end if +7: if $\varphi$ is a kernel function then +8: $g_{\mathrm{new}} \gets \max_{\mathbf{x}_i \in X} \sigma(\mathbf{u})^T \mathbf{K}^\varphi \sigma(\mathbf{u}) - 2 \mathbf{K}_{[i,:]}^\varphi \sigma(\mathbf{u}) + k_{ii}^\varphi$. +9: else +10: $g_{\mathrm{new}} \gets \max_{\mathbf{x}_i \in X} \| \varphi(\mathbf{x}_i) - \varphi(\mathbf{X}) \sigma(\mathbf{u})\|_2^2$. +11: end if +12: while $t < T$ and err $> \epsilon$ do +13: $g_{\mathrm{old}} \gets g_{\mathrm{new}}$. +14: Update $\mathbf{u}$ by Adam (Kingma & Ba, 2015) with loss $g_{\mathrm{old}}$. +15: if $\varphi$ is a kernel function then +16: $g_{\mathrm{new}} \gets \max_{\mathbf{x}_i \in X} \sigma(\mathbf{u})^T \mathbf{K}^\varphi \sigma(\mathbf{u}) - 2 \mathbf{K}_{[i,:]}^\varphi \sigma(\mathbf{u}) + k_{ii}^\varphi$. +17: else +18: $g_{\mathrm{new}} \gets \max_{\mathbf{x}_i \in X} \| \varphi(\mathbf{x}_i) - \varphi(\mathbf{X}) \sigma(\mathbf{u})\|_2^2$. +19: end if +20: $t \gets t + 1$. +21: err $\gets |g_{\mathrm{old}} - g_{\mathrm{new}}|$. +22: end while +23: return $g_{\mathrm{new}}$ and $\mathbf{u}$.
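To make the structure of Algorithm 1 concrete, here is a compact numpy sketch of its explicit-feature branch. It substitutes plain subgradient descent with a fixed iteration budget for the Adam update and the err/$\epsilon$ stopping test, and all names are illustrative rather than the paper's implementation:

```python
import numpy as np

def softmax(u):
    e = np.exp(u - np.max(u))
    return e / e.sum()

def solve_g1(X, iters=3000, lr=0.05):
    """Sketch of Algorithm 1, explicit-feature branch: minimize
    g(u) = max_i ||x_i - X^T sigma(u)||^2 over unconstrained u.
    X holds one sample per row; plain subgradient descent stands
    in for the Adam update of line 14."""
    u = np.zeros(X.shape[0])                  # sigma(u) uniform: strictly feasible start
    for _ in range(iters):
        theta = softmax(u)
        c = X.T @ theta                       # candidate center, a convex combination
        d2 = np.sum((X - c) ** 2, axis=1)     # squared distances to every sample
        i = int(np.argmax(d2))                # sample attaining the max
        grad_c = 2.0 * (c - X[i])             # d ||x_i - c||^2 / dc
        J = np.diag(theta) - np.outer(theta, theta)  # softmax Jacobian d theta / du
        u -= lr * J @ (X @ grad_c)            # chain rule through c = X^T theta
    theta = softmax(u)
    g = float(np.max(np.sum((X - X.T @ theta) ** 2, axis=1)))
    return g, theta
```

On points sampled from a circle, the returned value settles near the squared radius $R^2(X)$ of Definition A.7, while the iterates remain strictly feasible throughout.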
+ +Algorithm 2 Solving $g_{2}(\varphi)$ in Formula (10) +Input: The set of positive training samples $X_{+} = \{\mathbf{x}_{i}\}_{i = 1}^{n_{+}}$, the set of negative training samples $X_{-} = \{\mathbf{x}_{j}\}_{j = n_{+} + 1}^{n_{+} + n_{-}}$, the representation learning function $\varphi$, the maximum number of iterations $T$, the error tolerance parameter $\epsilon$. +Output: The optimal value $g^{*}$ and optimal solution $(\mathbf{v}^{*},\mathbf{w}^{*})$. +1: Initialize $\mathbf{v}\gets (1,1,\dots ,1)^T\in \mathbb{R}^{n_+}$, $\mathbf{w}\gets (1,1,\dots ,1)^T\in \mathbb{R}^{n_-}$, $t\gets 0$, err $\gets +\infty$. +2: if $\varphi$ is a kernel function then +3: Compute kernel matrix $\hat{\mathbf{K}}^{\varphi}$ by formula (29e). +4: else +5: $\varphi (\mathbf{X}_{+})\gets \left(\varphi (\mathbf{x}_{1}),\varphi (\mathbf{x}_{2}),\dots ,\varphi (\mathbf{x}_{n_{+}})\right)$, $\varphi (\mathbf{X}_{-})\gets \left(\varphi (\mathbf{x}_{n_{+} + 1}),\varphi (\mathbf{x}_{n_{+} + 2}),\dots ,\varphi (\mathbf{x}_{n_{+} + n_{-}})\right)$. +6: end if +7: if $\varphi$ is a kernel function then +8: $g_{\mathrm{new}}\gets \left( \begin{array}{c}\sigma (\mathbf{v})\\ \sigma (\mathbf{w}) \end{array} \right)^T\hat{\mathbf{K}}^\varphi \left( \begin{array}{c}\sigma (\mathbf{v})\\ \sigma (\mathbf{w}) \end{array} \right)$. +9: else +10: $g_{\mathrm{new}}\gets \| \varphi (\mathbf{X}_+)\sigma (\mathbf{v}) - \varphi (\mathbf{X}_-) \sigma (\mathbf{w})\| _2^2$. +11: end if +12: while $t < T$ and err $> \epsilon$ do +13: $g_{\mathrm{old}}\gets g_{\mathrm{new}}$. +14: Update $\mathbf{v}$ and $\mathbf{w}$ by Adam (Kingma & Ba, 2015) with loss $g_{\mathrm{old}}$. +15: if $\varphi$ is a kernel function then +16: $g_{\mathrm{new}}\gets \left( \begin{array}{c}\sigma (\mathbf{v})\\ \sigma (\mathbf{w}) \end{array} \right)^T\hat{\mathbf{K}}^\varphi \left( \begin{array}{c}\sigma (\mathbf{v})\\ \sigma (\mathbf{w}) \end{array} \right)$. +17: else +18: $g_{\mathrm{new}}\gets \| \varphi (\mathbf{X}_+)\sigma (\mathbf{v})
- \varphi (\mathbf{X}_-) \sigma (\mathbf{w})\| _2^2$. +19: end if +20: $t\gets t + 1$. +21: err $\gets |g_{\mathrm{old}} - g_{\mathrm{new}}|$. +22: end while +23: return $g_{\mathrm{new}}$ and $(\mathbf{v},\mathbf{w})$. + +# B.2. Kernel Selection Algorithm + +The detailed description of the VC dimension based kernel selection method is given in Algorithm 3. + +Algorithm 3 VC Dimension based Kernel Selection +Input: The set of positive training samples $X_{+} = \{\mathbf{x}_{i}\}_{i = 1}^{n_{+}}$, the set of negative training samples $X_{-} = \{\mathbf{x}_{j}\}_{j = n_{+} + 1}^{n_{+} + n_{-}}$, the set of candidate kernel functions $\{\varphi_k\}_{k = 1}^m$. +Output: The optimal kernel function $\varphi_{opt}$. +1: Initialize $f_{min}\gets +\infty$, $\varphi_{opt}\gets$ None. +2: for $k = 1$ to $m$ do +3: $\varphi \gets \varphi_{k}$. +4: Solve optimization problem $g_{1}(\varphi)$ in formula (11) on $X_{+}\bigcup X_{-}$ by Algorithm 1. +5: Solve optimization problem $g_{2}(\varphi)$ in formula (11) on $X_{+}$ and $X_{-}$ by Algorithm 2. +6: $f(\varphi)\gets 4\frac{g_1(\varphi)}{g_2(\varphi)}$. +7: if $f(\varphi) < f_{min}$ then +8: $f_{min}\gets f(\varphi)$. +9: $\varphi_{opt}\gets \varphi$. +10: end if +11: end for +12: return $\varphi_{opt}$. + +# B.2.1. EFFICIENCY ANALYSIS + +Cross validation is one of the most popular methods for kernel selection. We use SVM-based $k$-fold cross validation as the benchmark to analyze the efficiency of the proposed kernel selection method. + +Given $n$ training samples and a kernel function $\varphi$: + +To evaluate the quality of $\varphi$, the proposed method needs to solve two constrained convex quadratic programming problems, i.e., $g_{1}(\varphi)$ and $g_{2}(\varphi)$ in formula (9). In each optimization problem, the scale of the optimization variable is $O(n)$ and the scale of the constraint condition is $O(n)$.
Let $O(T)$ denote the computational cost of solving the optimization problem $g_{1}(\varphi)$ (or $g_{2}(\varphi)$); then the computational complexity of the proposed method is $O(2T) = O(T)$. + +To evaluate the quality of $\varphi$, the SVM-based $k$-fold cross validation method needs to solve $k$ convex quadratic programming problems corresponding to SVM, denoted as $g_{SVM}(\varphi)^1$. In each $g_{SVM}(\varphi)$, the scale of the optimization variable is $O\left(\frac{k-1}{k} n\right) = O(n)$ and the scale of the constraint condition is $O\left(\frac{k-1}{k} n\right) = O(n)$. Therefore, the computational cost of solving $g_{SVM}(\varphi)$ is $O(T)$, which is the same as that of $g_1(\varphi)$. In summary, the computational complexity of $k$-fold cross validation is $O(kT)$. + +In general, we have $k > 2$, so the $O(T)$ complexity of the proposed method compares favorably with the $O(kT)$ of the SVM-based $k$-fold cross validation method. + +Thanks to the good properties of optimization problem (9), the constraint conditions in formula (9) can be converted into a softmax activation function layer. This allows some acceleration methods to be used directly. Specifically, (a) an accelerated first-order algorithm, e.g., Adam (Kingma & Ba, 2015), is used for solving; (b) a mature deep learning framework, e.g., Pytorch ${}^{2}$, is used for programming; (c) high-performance computing hardware, e.g., a GPU, is used for acceleration. As a result, the running time of the proposed method has great advantages compared with the SVM-based $k$-fold cross validation method. + +# B.3. DNN Boosting Framework Algorithm + +The detailed description of the VC dimension based DNN boosting framework is given in Algorithm 4.
+ +Algorithm 4 VC Dimension based DNN Boosting Framework +Input: The training data set $D = \{(\mathbf{x}_i,y_i)\}_{i = 1}^n$, the output space $\mathcal{V}$, the representation learning network $\varphi (\cdot ;\Theta)$, the 2-norm standardization network $\lambda (\cdot)$, the classification network $h(\cdot ;\mathbf{W},\mathbf{b})$, the weight of the VC dimension based loss $\gamma_{\mathrm{VC}}$, the number of iterations $T$, the size of batch $s$. +Output: The final model. +1: Initialize $\Theta$, $\mathbf{W}$, and $\mathbf{b}$ randomly. +2: for $t = 1$ to $T$ do +3: $D^{\prime}\gets$ select $s$ samples from the training data set $D$ randomly, $\mathcal{V}'\gets \{y_i|(\mathbf{x}_i,y_i)\in D'\}$, such that $|\mathcal{V}^{\prime}|\geq 2$. +4: $\mathcal{L}_{\mathrm{VC}}\gets 0$, $k\gets 0$. +5: for $p,q\in \mathcal{V}'$, $p < q$ do +6: Compute the set of positive samples $X_{+}\gets \{\mathbf{x}_{i}|(\mathbf{x}_{i},y_{i})\in D',y_{i} = p\}$. +7: Compute the set of negative samples $X_{-}\gets \{\mathbf{x}_{i}|(\mathbf{x}_{i},y_{i})\in D',y_{i} = q\}$. +8: Compute $l_{pq}\gets -g^{(p,q)}(\lambda \circ \varphi)$ in formula (14) on $X_{+}$ and $X_{-}$ by Algorithm 2. +9: $\mathcal{L}_{\mathrm{VC}}\gets \mathcal{L}_{\mathrm{VC}} + l_{pq}$, $k\gets k + 1$. +10: end for +11: Compute the empirical loss $\mathcal{L}$ in formula (14) on data set $D^{\prime}$. +12: Update $\Theta$, $\mathbf{W}$, and $\mathbf{b}$ by Adam (Kingma & Ba, 2015) with loss $\mathcal{L} + \frac{\gamma_{\mathrm{VC}}}{k}\mathcal{L}_{\mathrm{VC}}$. +13: end for +14: return $h(\lambda (\varphi (\cdot ;\Theta));\mathbf{W},\mathbf{b})$. + +# B.3.1. EFFICIENCY ANALYSIS + +Stochastic gradient descent with mini-batches is a popular optimization strategy in deep learning. There are many accelerated first-order algorithms, e.g., Adam (Kingma & Ba, 2015), for training DNNs effectively and efficiently. Meanwhile, with the help of mature deep learning frameworks, e.g., Pytorch, high-performance computing hardware, e.g., GPUs, can be used for acceleration.
These strategies can be directly applied to the proposed framework to achieve efficient training. + +When the number of training samples $n$ is large, we can randomly sample a mini-batch of training samples $D'$ such that $|D'| \ll n$. When the number of classes $|\mathcal{V}|$ is large, we can randomly sample a mini-batch of training samples $D'$ such that $2 \leq |\{y_i|(\mathbf{x}_i,y_i) \in D'\}| \ll |\mathcal{V}|$. And then the VC dimension based loss $\mathcal{L}_{\mathrm{VC}}$ (lines 4-10 of Algorithm 4) and the empirical loss $\mathcal{L}$ (line 11 of Algorithm 4) can be calculated efficiently. Based on the loss, the parameters of the model can be updated efficiently (line 12 of Algorithm 4). + +# C. Experimental Details + +# C.1. Details of Verifying Theoretical Results + +Data set For the Taichi data set, the input space is $\mathcal{X} = \left\{(x_1,x_2)^T \mid x_1^2 + x_2^2 \leq 4^2\right\}$ and samples obey the uniform distribution on $\mathcal{X}$. The output space is $\mathcal{Y} = \{+1, -1\}$.
The target function $f_{\mathrm{c}}$ is nonlinear and is defined as follows: + +$$ +\forall \left(x_1, x_2\right)^T \in \mathcal{X}, \quad f_{\mathrm{c}}\left(\left(x_1, x_2\right)^T\right) = \left\{ \begin{array}{ll} +1, & x_1^2 + \left(x_2 - 2\right)^2 < 1^2 \\ -1, & x_1^2 + \left(x_2 + 2\right)^2 < 1^2 \\ g_{\mathrm{c}}\left(\left(x_1, x_2\right)^T\right), & \text{otherwise} \end{array} \right., +$$ + +where + +$$ +g_{\mathrm{c}}\left((x_1, x_2)^T\right) = \left\{ \begin{array}{ll} +1, & x_1 \geq 0, x_2 \geq 0, x_1^2 + (x_2 - 2)^2 > 2^2 \\ -1, & x_1 \geq 0, x_2 \geq 0, x_1^2 + (x_2 - 2)^2 \leq 2^2 \\ -1, & x_1 < 0, x_2 \geq 0 \\ -1, & x_1 < 0, x_2 < 0, x_1^2 + (x_2 + 2)^2 > 2^2 \\ +1, & x_1 < 0, x_2 < 0, x_1^2 + (x_2 + 2)^2 \leq 2^2 \\ +1, & x_1 \geq 0, x_2 < 0 \end{array} \right.. +$$ + +Code implementations and parameter settings The basic learner SVM is implemented by Scikit-learn $^{3}$. All parameters adopt the default settings except for the kernel function. For the proposed method, Algorithms 1 and 2 are called to calculate the objective function $f(\varphi (\cdot ;\delta))$ in formula (10). In the whole experiment, $T = 300$ and $\epsilon = 10^{-10}$ are adopted for Algorithms 1 and 2. The Adam (Kingma & Ba, 2015) used in Algorithms 1 and 2 is implemented by Pytorch, and all parameters adopt the default settings. + +# C.2. Details of Kernel Selection Experiment + +Data sets There are 15 UCI $^4$ binary classification data sets; their basic information is given in Table 5. + +Table 5. Basic information of 15 binary classification data sets. + +
| ID | DATA SET | SAMPLES | FEATURES | CLASS RATIO |
| --- | --- | --- | --- | --- |
| D1 | TRAINS | 10 | 29 | 1.00 |
| D2 | SPECT | 79 | 22 | 2.04 |
| D3 | MOLEC-BIOL-PROMOTER | 106 | 57 | 1.00 |
| D4 | MONKS-2 | 169 | 6 | 1.64 |
| D5 | PLANNING | 182 | 12 | 2.50 |
| D6 | PARKINSONS | 195 | 22 | 3.06 |
| D7 | HEART-HUNGARIAN | 294 | 12 | 1.77 |
| D8 | HABERMAN-SURVIVAL | 306 | 3 | 2.78 |
| D9 | BREAST-CANCER-WISC-DIAG | 569 | 30 | 1.68 |
| D10 | ILPD-INDIAN-LIVER | 583 | 9 | 2.49 |
| D11 | HILL-VALLEY | 606 | 100 | 1.03 |
| D12 | CREDIT-APPROVAL | 690 | 15 | 1.25 |
| D13 | BREAST-CANCER-WISC | 699 | 9 | 1.90 |
| D14 | RINGNORM | 7,400 | 20 | 1.02 |
| D15 | MAGIC | 19,020 | 10 | 1.84 |
+ +Code implementations and parameter settings The settings of the basic learner SVM and the proposed method are the same as those in Appendix C.1. The experiments of 5-fold cross validation based on the SVM implemented by Scikit-learn are conducted on an Intel(R) Xeon(R) Gold 6254 CPU @ 3.10GHz. The experiments of the proposed method, implemented based on Pytorch, are conducted on an NVIDIA GeForce RTX 3090. + +# C.3. Details of DNN Boosting Experiment + +Data sets The basic information of the two image classification data sets is given as follows. + +MNIST $^{5}$ is a handwritten digit classification data set, which consists of 10 classes, i.e., $\{0,1,\dots ,9\}$. It has 60,000 training samples and 10,000 test samples, and each sample is an image with $28\times 28$ pixels. + +CIFAR10 $^{6}$ is a visual object classification data set, which consists of 10 classes, i.e., {airplane, automobile, bird, cat, deer, dog, frog, horse, ship, truck}. It has 50,000 training samples and 10,000 test samples, and each sample is an image with $3 \times 32 \times 32$ pixels. + +Basic DNNs Brief introductions of the 5 basic DNNs are given as follows. + +FCNet3: A 3-layer fully connected network. The representation learning network is + +$$ +\operatorname{Linear}(d, 1024)-\operatorname{ReLU}()-\operatorname{Linear}(1024, 512)-\operatorname{ReLU}()-\operatorname{Linear}(512, d'), +$$ + +and the classification network is + +$$ +\operatorname{Linear}\left(d', |\mathcal{Y}|\right), +$$ + +where for the MNIST data set $d = 28 \times 28 = 784$, for the CIFAR10 data set $d = 3 \times 32 \times 32 = 3072$, and for both data sets $d' = 256$ and $|\mathcal{Y}| = 10$. + +LeNet (LeCun et al., 1989): A classical CNN for image classification. + +ResNet18 and ResNet50 (He et al., 2016): Two commonly used CNNs with residual connections for image classification.
+ +ViT-Base (Dosovitskiy et al., 2021): A recently proposed DNN that has achieved state-of-the-art performance in many visual tasks. + +Code implementations and parameter settings The Adam optimizer (Kingma & Ba, 2015) implemented in PyTorch is used to train the basic models (FCNet3, LeNet, ResNet18, ResNet50, ViT-Base), the models embedded into DeepLDA (Dorfer et al., 2016) (FCNet3+LDA, LeNet+LDA, ResNet18+LDA, ResNet50+LDA, and ViT-Base+LDA), and the models embedded into the proposed framework (FCNet3+Our, LeNet+Our, ResNet18+Our, ResNet50+Our, and ViT-Base+Our). + +Throughout the experiment, the set of candidate learning rates of Adam is $\{5\times 10^{-4},10^{-3},5\times 10^{-3}\}$ , the remaining parameters use the default settings, and the number of epochs is 500. + +For the proposed method, the set of candidate trade-off parameters $\gamma_{\mathrm{VC}}$ is $\{10^{-2}, 10^{-1}, 1\}$ . Algorithm 2 is called to calculate the VC-dimension-based loss in formula (14); for Algorithm 2, $T = 100$ and $\epsilon = 10^{-10}$ are adopted. + +# D. Code Release + +The code of the proposed methods is available at https://github.com/JunbiaoCui/GRLF_GPG.
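To make the FCNet3 architecture of Appendix C.3 concrete, the following is a minimal plain-NumPy sketch of its forward pass. The weights here are randomly initialized placeholders for illustration only; the actual experiments use trained PyTorch modules.

```python
import numpy as np

rng = np.random.default_rng(0)
d, d_rep, n_classes = 784, 256, 10  # MNIST: d = 28*28, d' = 256, |Y| = 10

# Hypothetical placeholder weights; the real models are trained with Adam.
sizes = [d, 1024, 512, d_rep, n_classes]
weights = [rng.standard_normal((a, b)) * 0.01 for a, b in zip(sizes, sizes[1:])]
biases = [np.zeros(b) for b in sizes[1:]]

def relu(x):
    return np.maximum(x, 0.0)

def fcnet3(x):
    # Representation network: Linear(d,1024)-ReLU()-Linear(1024,512)-ReLU()-Linear(512,d')
    h = relu(x @ weights[0] + biases[0])
    h = relu(h @ weights[1] + biases[1])
    z = h @ weights[2] + biases[2]
    # Classification network: Linear(d', |Y|)
    return z @ weights[3] + biases[3]

logits = fcnet3(rng.standard_normal((32, d)))
print(logits.shape)  # (32, 10)
```

For CIFAR10, only the input dimension changes ($d = 3072$); the hidden widths and output dimensions stay the same.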
# A Gromov-Wasserstein Geometric View of Spectrum-Preserving Graph Coarsening + +Yifan Chen $^{1}$ Rentian Yao $^{2}$ Yun Yang $^{2}$ Jie Chen $^{3}$ + +# Abstract + +Graph coarsening is a technique for solving large-scale graph problems by working on a smaller version of the original graph, and possibly interpolating the results back to the original graph.
It has a long history in scientific computing and has recently gained popularity in machine learning, particularly in methods that preserve the graph spectrum. This work studies graph coarsening from a different perspective, developing a theory for preserving graph distances and proposing a method to achieve it. The geometric approach is useful when working with a collection of graphs, such as in graph classification and regression. In this study, we consider a graph as an element of a metric space equipped with the Gromov-Wasserstein (GW) distance, and bound the difference between the distance of two graphs and that of their coarsened versions. Minimizing this difference can be done using the popular weighted kernel $K$ -means method, which improves existing spectrum-preserving methods with a proper choice of the kernel. The study includes a set of experiments to support the theory and method, including approximating the GW distance, preserving the graph spectrum, classifying graphs using spectral information, and performing regression using graph convolutional networks. Code is available at https://github.com/ychen-stat-ml/GW-Graph-Coarsening. + +$^{1}$ Hong Kong Baptist University $^{2}$ University of Illinois Urbana-Champaign $^{3}$ MIT-IBM Watson AI Lab, IBM Research. Correspondence to: Yifan Chen . + +Proceedings of the $40^{th}$ International Conference on Machine Learning, Honolulu, Hawaii, USA. PMLR 202, 2023. Copyright 2023 by the author(s). + +![](images/124ac2863aea64b4a93191003ef7684c55f1d7268fab485fed8006a0933227a0.jpg) + +![](images/1e2d8c03f2c3f5c70e5c3bfb70866a8b1ee8be2a1699781ae71b3e10024f8007.jpg) +Figure 1: An example of graph coarsening. Three nodes $v_{1}, v_{2}$ , and $v_{3}$ are merged into a supernode $v_{1}'$ in the coarsened graph, while each of the other nodes $(v_{4}, v_{5},$ and $v_{6})$ is a separate supernode. + +# 1. Introduction + +Modeling the complex relationships among objects by using graphs and networks is ubiquitous in scientific applications (Morris et al., 2020). Examples range from the analysis of chemical and biological networks (Debnath et al., 1991; Helma et al., 2001; Dobson & Doig, 2003; Irwin et al., 2012; Sorkun et al., 2019) and learning and inference with social interactions (Oettershagen et al., 2020) to image understanding (Dwivedi et al., 2020). Many of these tasks are faced with large graphs, the computational costs of which can be rather high even when the graph is sparse (e.g., computing a few extreme eigenvalues and eigenvectors of the graph Laplacian can be done with a linear cost by using the Lanczos method, while computing the round-trip commute times requires the pseudoinverse of the Laplacian, which incurs a cubic cost). Therefore, it is practically important to develop methods that can efficiently handle graphs as their size grows. In this work, we consider one type of method, which resolves the scalability challenge by working on a smaller version of the graph. The production of the smaller surrogate is called graph coarsening. + +Graph coarsening is a methodology for solving large-scale graph problems. Depending on the problem itself, one may develop a solution based on the coarsened graph, or interpolate the solution back to the original one. Figure 1 illustrates a toy example of graph coarsening, whereby a cluster of nodes $(v_{1}, v_{2},$ and $v_{3})$ in the original graph is merged into a so-called supernode $(v_{1}^{\prime})$ in the coarsened graph. If the problem is graph classification (predicting classes of multiple graphs in a dataset), one may train a classifier by using these smaller surrogates, if it is believed that they inherit the characteristics of the original graphs necessary for classification. On the other hand, if the problem is node regression, $^{1}$ one may predict the targets for the nodes in the coarsened graph and interpolate them in the original graph (Huang et al., 2021). + +Graph coarsening has a long history in scientific computing, and it has recently gained traction in machine learning (Chen et al., 2022). A key question to consider is what characteristics should be preserved when reducing the graph size. A majority of work in machine learning focuses on the spectral properties (Loukas & Vandergheynst, 2018; Loukas, 2019; Hermsdorff & Gunderson, 2019; Jin et al., 2020). That is, one desires that the eigenvalues of the original graph $G$ are close to those of the coarsened graph $G^{(c)}$ . While being attractive, it is unclear why this objective is effective for problems involving a collection of graphs (e.g., graph classification), where the distances between graphs shape the classifier and the decision boundary. Hence, in this work, we consider a different objective, which desires that the distance of a pair of graphs, $\mathrm{dist}(G_1, G_2)$ , is close to that of their coarsened versions, $\mathrm{dist}(G_1^{(c)}, G_2^{(c)})$ . + +To achieve this, we make the following contributions. + +- We consider a graph as an element of the metric space endowed with the Gromov-Wasserstein (GW) distance (Chowdhury & Mémoli, 2019), which formalizes the distance of graphs with different sizes and different node weights. We analyze the distance between $G$ and $G^{(c)}$ as a major lemma for subsequent results. +- We establish an upper bound on the difference between the GW distance of the original graph pair and that of the coarsened pair. Interestingly, this bound depends only on the respective spectrum change of each of the original graphs. Such a finding explains the effectiveness of spectrum-preserving coarsening methods for graph classification and regression problems.
+ +- We draw a connection between the upper bound and weighted kernel $K$ -means, a popular clustering method, under a proper choice of the kernel. This connection leads to a graph coarsening method that we demonstrate to exhibit attractive inference quality for graph-level tasks, compared with other spectrum-preserving methods. + +# 2. Related Work + +Graph coarsening was made popular by scientific computing many decades ago, where a major problem was to solve large, sparse linear systems of equations (Saad, 2003). (Geometric) Multigrid methods (Briggs et al., 2000) were used
Graph partitioning is used to form convolutional features in graph neural networks (Defferrard et al., 2016) and to perform neighborhood pooling (Ying et al., 2018). Graph coarsening is used to learn node embeddings in a hierarchical manner (Chen et al., 2018). For a survey of graph coarsening with comprehensive accounts on scientific computing and machine learning, see Chen et al. (2022). + +A class of graph coarsening methods aim to preserve the spectra of the original graphs. Loukas & Vandergheynst (2018) and Loukas (2019) introduce the notion of "restricted spectral similarity" (RSS), requiring the eigenvalues and eigenvectors of the coarsened graph Laplacian, when restricted to the principal eigen-subspace, to approximate those of the original graph. Local variation algorithms are developed therein to achieve RSS. Jin et al. (2020) suggest the use of a spectral distance as the key metric for measuring the preservation of spectral properties. The authors develop two coarsening algorithms to maximally reduce the spectral distance between the original and the coarsened graph. Hermsdorff & Gunderson (2019) develop a probabilistic algorithm to coarsen a graph while preserving the Laplacian pseudoinverse, by using an unbiased procedure to minimize the variance. + +Graph coarsening is increasingly used in deep learning. One limitation of traditional coarsening methods is that they mean to be universally applicable to any graphs, without the flexibility of adapting to a particular dataset or distribution of its own characteristics. Cai et al. (2021) address this limitation by adjusting the edge weights in the coarsened graph through graph neural networks (GNNs). Conversely, graph + +coarsening techniques can be applied to scale up GNNs by preprocessing the graphs (Huang et al., 2021). On a separate note, graph condensation (Jin et al., 2021) is another technique to accelerate GNNs, which uses supervised learning to condense node features and the graph structure. 
Furthermore, graph pooling can assign a node in the original graph to multiple supernodes (Grattarola et al., 2022), similar to the relaxation-based graph coarsening strategy explored in the algebraic multigrid literature (Ron et al., 2011). + +# 3. Notations and Preliminaries + +In this section, we set up the notations for graph coarsening and review the background of Gromov-Wasserstein distance. Additional information can be found in Appendix A. + +# 3.1. Graphs and Laplacians + +We denote an undirected graph as $G$ , whose node set is $\mathcal{V} := \{v_i\}_{i=1}^N$ with size $N = |\mathcal{V}|$ and whose symmetric weighted adjacency is $A := [a_{ij}]$ . The $N$ -by- $N$ (combinatorial) Laplacian matrix is defined as $L := D - A$ , where $D := \operatorname{diag}(A\mathbf{1}_N)$ is the degree matrix. The normalized Laplacian is $\mathcal{L} := D^{-\frac{1}{2}}LD^{-\frac{1}{2}}$ . Without loss of generality, we assume that $G$ is connected, in which case the smallest eigenvalue of $L$ and $\mathcal{L}$ is zero and is simple. + +# 3.2. Graph coarsening and coarsened graphs + +Given a graph $G$ , graph coarsening amounts to finding a smaller graph $G^{(c)}$ with $n \leq |\mathcal{V}|$ nodes to approximate $G$ . One common coarsening approach obtains the coarsened graph from a partitioning $\mathcal{P} = \{\mathcal{P}_1, \mathcal{P}_2, \ldots, \mathcal{P}_n\}$ of the node set $\mathcal{V}$ (Loukas & Vandergheynst, 2018). In this approach, each subset $\mathcal{P}_j$ of nodes are collapsed to a supernode in the coarsened graph and the edge weight between two supernodes is the sum of the edge weights crossing the two corresponding partitions. Additionally, the sum of the edge weights in a partition becomes the weight of the supernode. 
In the matrix notation, we use $C_p \in \{0, 1\}^{n \times N}$ to denote the membership matrix induced by the partitioning $\mathcal{P}$ , with the $(k, i)$ entry being $C_p(k, i) = 1_{\{v_i \in \mathcal{P}_k\}}$ , where $1_{\{\cdot\}}$ is the indicator function. For notational convenience, we define the adjacency matrix of the coarsened graph to be $A^{(c)} = C_p A C_p^T$ . Note that $A^{(c)}$ includes not only edge weights but also node weights. + +If we similarly define $\pmb{D}^{(c)} \coloneqq \mathrm{diag}\left(\pmb{A}^{(c)}\pmb{1}_n\right)$ to be the degree matrix of the coarsened graph $G^{(c)}$ , then it can be verified that the matrix $\pmb{L}^{(c)} = \pmb{C}_p\pmb{L}\pmb{C}_p^{\intercal} = \pmb{D}^{(c)} - \pmb{A}^{(c)}$ is a (combinatorial) Laplacian (because its smallest eigenvalue is zero and is simple). Additionally, $\pmb{\mathcal{L}}^{(c)} = \left(\pmb{D}^{(c)}\right)^{-\frac{1}{2}}\pmb{L}^{(c)}\left(\pmb{D}^{(c)}\right)^{-\frac{1}{2}}$ is the normalized Laplacian. + +In the literature, the objective of coarsening is often to mini + +mize some difference between the original and the coarsened graph. For example, in spectral graph coarsening, Loukas (2019) proposes that the $L$ -norm of any $N$ -dimensional vector $\pmb{x}$ be close to the $L^{(c)}$ -norm of the $n$ -dimensional vector $(C_p^\intercal)^+ x$ ; while Jin et al. (2020) propose to minimize the difference between the ordered eigenvalues of the Laplacian of $G$ and those of the lifted Laplacian of $G^{(c)}$ (such that the number of eigenvalues matches). Such objectives, while being natural and interesting in their own right, are not the only choice. Questions may be raised; for example, it is unclear why the objective is to preserve the Laplacian spectrum but not the degree distribution or the clustering coefficients. Moreover, it is unclear how preserving the spectrum benefits the downstream use. 
In this paper, we consider a different objective—preserving the graph distance—which may help, for example, maintain the decision boundary in graph classification problems. To this end, we review the Gromov-Wasserstein (GW) distance. + +# 3.3. Gromov-Wasserstein distance and its induced metric space + +The GW distance was originally proposed by Mémoli (2011) to measure the distance between two metric measure spaces $M_{\mathcal{X}}$ and $M_{\mathcal{Y}}$ . However, using metrics to characterize the difference between elements in sets $\mathcal{X}$ and $\mathcal{Y}$ can be too restrictive. Peyré et al. (2016) relaxed the metric notion by proposing the GW discrepancy, which uses dissimilarity, instead of metric, to characterize differences. Chowdhury & Mémoli (2019) then extended the concept of GW distance/discrepancy from metric measure spaces to measure networks, which can be considered a generalization of graphs. + +Definition 3.1 (Measure network). A measure network is a triple $(\mathcal{X},\omega_X,\mu_X)$ , where $\mathcal{X}$ is a Polish space (a separable and completely metrizable topological space), $\mu_X$ is a fully supported Borel probability measure, and $\omega_{X}$ is a bounded measurable function on $\mathcal{X}^2$ . + +In particular, a graph $G$ (augmented with additional information) can be taken as a discrete measure network. We let $\mathcal{X}$ be the set of graph nodes $\{v_i\}_{i=1}^N$ and associate with it a probability mass $\mu_X = \pmb{m} = [m_1, \dots, m_N]^\top \in \mathbb{R}_+^N$ , $\sum_{i=1}^N m_i = 1$ . Additionally, we associate $G$ with the node similarity matrix $\pmb{S} = [s_{ij}] \in \mathbb{R}^{N \times N}$ , whose entries are induced from the measurable map $\omega_X$ : $s_{ij} = \omega_X(v_i, v_j)$ . 
Note that the mass $\pmb{m}$ and the similarity $\pmb{S}$ do not necessarily need to be related to the node weights and edge weights, although later we will justify and advocate the use of some variant of the graph Laplacian as $\pmb{S}$ . + +For a source graph $G_{s}$ with $N_{s}$ nodes and mass $m_{s}$ and a target graph $G_{t}$ with $N_{t}$ nodes and mass $m_{t}$ , we can define a transport matrix $T = [t_{ij}] \in \mathbb{R}^{N_s \times N_t}$ , where $t_{ij}$ specifies the probability mass transported from $v_{i}^{s}$ (the $i$ -th node of $G_{s}$ ) to $v_{j}^{t}$ (the $j$ -th node of $G_{t}$ ). We denote the collection of all feasible transport matrices as $\Pi_{s,t}$ , which includes all $T$ that satisfy $T1 = m_{s}$ and $T^{\intercal}1 = m_{t}$ . Using the $\ell_{p}$ transportation cost $L(a,b) = (a - b)^{p}$ , Chowdhury & Mémoli (2019) define the $\mathrm{GW}_{p}$ distance for graphs as + +$$ +\begin{array}{l} \mathrm {G W} _ {p} ^ {p} (G _ {s}, G _ {t}) = \min _ {\boldsymbol {T} \in \Pi_ {s, t}} \sum_ {i, j = 1} ^ {N _ {S}} \sum_ {i ^ {\prime}, j ^ {\prime} = 1} ^ {N _ {t}} \left| s _ {i j} ^ {s} - s _ {i ^ {\prime} j ^ {\prime}} ^ {t} \right| ^ {p} \boldsymbol {T} _ {i i ^ {\prime}} \boldsymbol {T} _ {j j ^ {\prime}} \\ = \min _ {\boldsymbol {T} \in \Pi_ {s, t}} \left\langle \boldsymbol {M}, \boldsymbol {T} \right\rangle , \tag {1} \\ \end{array} +$$ + +where the cross-graph dissimilarity matrix $M \in \mathbb{R}^{N_s \times N_t}$ has entries $M_{jj'} = \sum_{i,i'} \left| s_{ij}' - s_{i'j'}^t \right|^p T_{ii'}$ (which by themselves are dependent on $\pmb{T}$ ). + +The computation of the $\mathrm{GW}_p$ distance (GW distance for short) can be thought of as finding a proper alignment of nodes in two graphs, such that the aligned nodes $v_j^s$ and $v_{j'}^t$ have similar interactions with other nodes in their respective graphs. 
Intuitively, this is achieved by assigning large transport mass $T_{jj'}$ to a node pair $(v_j^s,v_{j'}^t)$ with small dissimilarity $M_{jj'}$ , making the GW distance a useful tool for graph matching (Xu et al., 2019b). We note that variants of the GW distance exist; for example, Titouan et al. (2019) proposed a fused GW distance by additionally taking into account the dissimilarity of node features. We also note that computing the GW distance is NP-hard, but several approximate methods have been developed (Xu et al., 2019a; Zheng et al., 2022). In this work, we use the GW distance as a theoretical tool and do not necessarily need to compute it in practice. + +In closing this section, we remark that Chowdhury & Mémoli (2019, Theorem 18) show that the GW distance is indeed a metric for measure networks, modulo weak isomorphism. Therefore, we can formally establish the metric space of interest. + +Definition 3.2 ( $\mathrm{GW}_p$ space). Let $\mathcal{N}$ be the collection of all measure networks. For $p \geq 1$ , we denote by $(\mathcal{N}, \mathrm{GW}_p)$ the metric space of measure networks endowed with the $\mathrm{GW}_p$ distance defined in (1) and call it the $\mathrm{GW}_p$ space. + +# 4. Graph Coarsening from a Gromov-Wasserstein Geometric View + +In this section, we examine how the GW geometric perspective can shape graph coarsening. In Section 4.1, we provide a framework that unifies many variants of the coarsening matrices through the use of the probability mass $m$ introduced in the context of measure networks. Then, in Section 4.2, we analyze the variant associated with the similarity matrix $S$ . In particular, we establish an upper bound on the difference of the GW distances before and after coarsening. Based on the upper bound, in Section 4.3 we connect it with some spectral graph techniques and in Section 4.4 we advocate a choice of $S$ in practice. + +# 4.1. A unified framework for coarsening matrices + +The membership matrix $C_p$ is one kind of coarsening matrix: it connects the adjacency matrix $\mathbf{A}$ with the coarsened version $\mathbf{A}^{(c)}$ through $\mathbf{A}^{(c)} = C_p\mathbf{A}\mathbf{C}_p^\intercal$ . There are, however, different variants of coarsening matrices. We consider three here, all of size $n \times N$ . + +(i) Accumulation. This is the matrix $C_p$ . When multiplied to the left of $A$ , the $i$ -th row of the product is the sum of all rows of $A$ corresponding to the partition $\mathcal{P}_i$ . + +(ii) Averaging. A natural alternative to summation is (weighted) averaging. We define the diagonal mass matrix $\mathbf{W} = \mathrm{diag}(m_1, \dots, m_N)$ and, for each partition $\mathcal{P}_i$ , the accumulated mass $c_i = \sum_{j \in \mathcal{P}_i} m_j$ for all $i \in [n]$ . Then, the averaging coarsening matrix is + +$$ +\bar{C}_w := \operatorname{diag}\left(c_1^{-1}, \dots, c_n^{-1}\right) C_p W. +$$ + +This matrix has an averaging effect because of the division by $c_{i}$ . Moreover, when the probability mass $m$ is uniform (i.e., all $m_{j}$ 's are the same), we have the relation $\bar{C}_{w}^{+} = C_{p}^{\top}$ . + +(iii) Projection. Neither $C_p$ nor $\bar{C}_w$ is orthogonal. We define the projection coarsening matrix as + +$$ +\boldsymbol{C}_w := \operatorname{diag}\left(c_1^{-1/2}, \dots, c_n^{-1/2}\right) \boldsymbol{C}_p \boldsymbol{W}^{\frac{1}{2}}, +$$ + +by noting that $C_w C_w^\intercal$ is the identity and hence $C_w$ has orthonormal rows. Therefore, the $N$ -by- $N$ matrix $\Pi_w := W^{\frac{1}{2}} C_p^\intercal \bar{C}_w W^{-\frac{1}{2}} = C_w^\intercal C_w$ is a projection operator. + +In Section 3.2, we have seen that the combinatorial Laplacian $L$ takes $C_p$ as the coarsening matrix, because the relationship $L^{(c)} = C_pLC_p^\intercal$ is inherited from $A^{(c)} = C_pAC_p^\intercal$ .
On the other hand, it can be proved that, if we take the diagonal mass matrix $\pmb{W}$ to be the degree matrix $\pmb{D}$ , the normalized Laplacian defined in Section 3.2 can be written as $\mathcal{L}^{(c)} = C_w\mathcal{L}C_w^\intercal$ (see Appendix B.1). In other words, the normalized Laplacian uses $C_w$ as the coarsening matrix. The matrix $\mathcal{L}^{(c)}$ is called a doubly-weighted Laplacian (Chung & Langlands, 1996). + +For a general similarity matrix $S$ (not necessarily a Laplacian), we use the averaging coarsening matrix $\bar{C}_w$ and define $S^{(c)} \coloneqq \bar{C}_w S \bar{C}_w^{\intercal}$ . This definition appears to be more natural in the GW setting; see a toy example in Appendix B.2. It is interesting to note that Vincent-Cuaz et al.
When two graphs $G_{1}$ and $G_{2}$ are concerned, we inherit the subscripts $^1$ and $^2$ to all related quantities, such as the probability masses $m_{1}$ and $m_{2}$ , avoiding verbatim redundancy when introducing notations. This pair of subscripts should not cause confusion with other subscripts, when interpreted in the proper context. Our analysis can be generalized from the $\mathrm{GW}_2$ distance to other distances, as long as the transportation cost satisfies the following decomposable condition. + +Proposition 4.1. (Peyre et al., 2016) Let $T^{*} \in \mathbb{R}^{N_{1} \times N_{2}}$ be the optimal transport plan from $m_{1}$ to $m_{2}$ , and $S_{k} \in \mathbb{R}^{N_{k} \times N_{k}}$ be similarity matrices for $k = 1, 2$ . If the transport cost can be written as $L(a, b) = f_{1}(a) + f_{2}(b) - h_{1}(a)h_{2}(b)$ for some element-wise functions $(f_{1}, f_{2}, h_{1}, h_{2})$ , then we can write $M$ in (1) as + +$$ +f _ {1} (\boldsymbol {S} _ {1}) \boldsymbol {m} _ {1} \mathbf {1} _ {N _ {2}} ^ {\intercal} + \mathbf {1} _ {N _ {1}} \boldsymbol {m} _ {2} ^ {\intercal} f _ {2} (\boldsymbol {S} _ {2}) ^ {\intercal} - h _ {1} (\boldsymbol {S} _ {1}) \boldsymbol {T} ^ {*} h _ {2} (\boldsymbol {S} _ {2}) ^ {\intercal}. +$$ + +Clearly, for the squared cost $L(a,b) = (a - b)^2$ , we may take $f_{1}(a) = a^{2}$ , $f_{2}(b) = b^{2}$ , $h_{1}(a) = a$ , and $h_{2}(b) = 2b$ . + +We start the analysis by first bounding the distance between $G$ and $G^{(c)}$ . + +Theorem 4.2 (Single graph). Consider a graph $G$ with positive semi-definite (PSD) similarity matrix $\mathbf{S}$ and diagonal mass matrix $\mathbf{W}$ and similarly the coarsened graph $G^{(c)}$ . 
Let $\lambda_{1} \geq \lambda_{2} \geq \dots \geq \lambda_{N}$ be the sorted eigenvalues of $\mathbf{U} = \mathbf{W}^{\frac{1}{2}} \mathbf{S} \mathbf{W}^{\frac{1}{2}}$ and $\lambda_{1}^{(c)} \geq \dots \geq \lambda_{n}^{(c)}$ be the sorted eigenvalues of $\mathbf{U}^{(c)} = \mathbf{C}_{w} \mathbf{U} \mathbf{C}_{w}^{\intercal}$ . Then, + +$$ +\mathrm{GW}_2^2(G, G^{(c)}) \leq \lambda_{N-n+1} \sum_{i=1}^{n} \left(\lambda_i - \lambda_i^{(c)}\right) + C_{\mathbf{U}, n}, \tag{2} +$$ + +where $C_{U,n} = \sum_{i=1}^{n} \lambda_i (\lambda_i - \lambda_{N-n+i}) + \sum_{i=n+1}^{N} \lambda_i^2$ is non-negative and is independent of coarsening. + +Remark 4.3. (i) The bound is tight when $n = N$ because the right-hand side is zero in this case. (ii) The choice of coarsening only affects the spectral difference + +$$ +\Delta := \sum_{i=1}^{n} \left(\lambda_i - \lambda_i^{(c)}\right), \tag{3} +$$ + +because $C_{U,n}$ is independent of it. Each term $\lambda_i - \lambda_i^{(c)}$ in $\Delta$ is non-negative due to the Poincaré separation theorem (see Appendix D.1). (iii) $\Delta$ is a generalization of the spectral distance proposed by Jin et al. (2020), because our matrix $U$ is not necessarily the normalized Laplacian. For additional discussions, see Appendix B.4. (iv) When $U$ is taken as the normalized Laplacian, our bound is advantageous over the bound established by Jin et al. (2020) in the sense that $\Delta$ is the only term impacted by coarsening and that no assumptions on the $K$ -means cost are imposed. + +We now bound the difference of distances. The following theorem suggests that the only terms dependent on coarsening are $\Delta_{1}$ and $\Delta_{2}$ , counterparts of $\Delta$ in Theorem 4.2, for graphs $G_{1}$ and $G_{2}$ respectively. + +Theorem 4.4. Given a pair of graphs $G_{1}$ and $G_{2}$ , we extend all notations in Theorem 4.2 by adding subscripts $_{1}$ and $_{2}$ respectively for $G_{1}$ and $G_{2}$ .
We denote the optimal transport plan induced by $\mathrm{GW}_2(G_1,G_2)$ as $T^{*}$ and let the normalized counterpart be $P = W_{1}^{-\frac{1}{2}}T^{*}W_{2}^{-\frac{1}{2}}$ . Additionally, we define $V_{1} := PW_{2}^{\frac{1}{2}}S_{2}W_{2}^{\frac{1}{2}}P^{\intercal}$ with eigenvalues $\nu_{1,1} \geq \nu_{1,2} \geq \dots \geq \nu_{1,N_{1}}$ and $V_{2} := P^{\intercal}W_{1}^{\frac{1}{2}}S_{1}W_{1}^{\frac{1}{2}}P$ with eigenvalues $\nu_{2,1} \geq \nu_{2,2} \geq \dots \geq \nu_{2,N_{2}}$ , both independent of coarsening. Then, $\left| \mathrm{GW}_2^2 (G_1^{(c)},G_2^{(c)}) - \mathrm{GW}_2^2 (G_1,G_2) \right|$ is upper bounded by + +$$ +\max \Big\{ \lambda_{1, N_1 - n_1 + 1} \cdot \Delta_1 + C_{\boldsymbol{U}_1, n_1} + \lambda_{2, N_2 - n_2 + 1} \cdot \Delta_2 + C_{\boldsymbol{U}_2, n_2},\; 2 \cdot \big[ \nu_{1, N_1 - n_1 + 1} \cdot \Delta_1 + C_{\boldsymbol{U}_1, \boldsymbol{V}_1, n_1} + \nu_{2, N_2 - n_2 + 1} \cdot \Delta_2 + C_{\boldsymbol{U}_2, \boldsymbol{V}_2, n_2} \big] \Big\}, +$$ + +where $C_{\boldsymbol{U}_1, n_1}$ and $C_{\boldsymbol{U}_2, n_2}$ are from Theorem 4.2, and the other coarsening-independent terms $C_{\boldsymbol{U}_1, \boldsymbol{V}_1, n_1}$ and $C_{\boldsymbol{U}_2, \boldsymbol{V}_2, n_2}$ are introduced in Lemma E.5 in Appendix E. + +Remark 4.5. (i) The above bound takes into account both the differences between two graphs and their respective coarsenings. Even when the two graphs are identical, $G_{1} = G_{2}$ , the bound can still be nonzero if the coarsened graphs $G_{1}^{(c)}$ and $G_{2}^{(c)}$ do not match. (ii) The decoupling of $\Delta_{1}$ and $\Delta_{2}$ offers an algorithmic benefit when one wants to optimize the differences of distances for all graph pairs in a dataset: it suffices to optimize the distance between $G$ and $G^{(c)}$ for each graph individually. This benefit is in line with the prior practice of directly applying spectrum-preserving
This benefit is in line with the prior practice of directly applying spectrum-preserving coarsening methods to graph-level tasks (Jin et al., 2020; Huang et al., 2021). Their experimental results, together with our numerical verification in Section 6, show that our bound is useful, and this partly explains the empirical success of spectral graph coarsening.

# 4.3. Connections with truncated SVDs and Laplacian eigenmaps

The spectral difference $\Delta = \sum_{i=1}^{n} \left( \lambda_i - \lambda_i^{(c)} \right)$ in Theorem 4.2 can be used as the loss function for defining an optimal coarsening. Because $\Delta + \sum_{i=n+1}^{N} \lambda_i = \operatorname{Tr}(\boldsymbol{U}) - \operatorname{Tr}(\boldsymbol{C}_w \boldsymbol{U} \boldsymbol{C}_w^{\intercal})$ and because $\sum_{i=n+1}^{N} \lambda_i$ is independent of coarsening, minimizing $\Delta$ is equivalent to the following problem:

$$
\min_{\boldsymbol{C}_w} \operatorname{Tr}\left(\boldsymbol{U} - \boldsymbol{\Pi}_w \boldsymbol{U} \boldsymbol{\Pi}_w\right), \tag{4}
$$

by recalling that $\pmb{\Pi}_{w} = \pmb{C}_{w}^{\intercal}\pmb{C}_{w}$ is a projector. The problem (4) is a well-known trace optimization problem, which has rich connections with many spectral graph techniques (Kokiopoulou et al., 2011).

Connection with truncated SVD. At first sight, a small trace difference does not necessarily imply that the two matrices are close. However, because $\Pi_w$ is a projector, the Poincaré separation theorem (see Appendix D.1) implies that their eigenvalues can be close. A well-known example of using the trace to find optimal approximations is the truncated SVD, which retains the top singular values (equivalently, eigenvalues for PSD matrices).
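As a quick numerical companion to the trace problem (4) and the Poincaré separation argument, the following sketch (ours, not from the paper's experiments; it assumes only NumPy, and the random mass vector, partition, and PSD similarity matrix are purely illustrative) verifies that each $\lambda_i - \lambda_i^{(c)}$ is non-negative and that $\Delta + \sum_{i=n+1}^{N} \lambda_i$ equals the trace objective:

```python
import numpy as np

rng = np.random.default_rng(0)
N, n = 8, 3

# Random node masses and a random partition of the N nodes into n clusters.
m = rng.random(N) + 0.1
labels = rng.integers(0, n, size=N)
labels[:n] = np.arange(n)            # ensure every cluster is non-empty

# Membership matrix C_p and the orthogonal coarsening matrix C_w.
C_p = np.zeros((n, N))
C_p[labels, np.arange(N)] = 1.0
c = C_p @ m                          # cluster masses c_j = sum_{i in P_j} m_i
C_w = np.diag(c ** -0.5) @ C_p @ np.diag(m ** 0.5)
assert np.allclose(C_w @ C_w.T, np.eye(n))       # C_w has orthonormal rows

# A random PSD similarity matrix S and U = W^{1/2} S W^{1/2}.
B = rng.standard_normal((N, N))
S = B @ B.T
U = np.diag(m ** 0.5) @ S @ np.diag(m ** 0.5)
U_c = C_w @ U @ C_w.T

lam = np.sort(np.linalg.eigvalsh(U))[::-1]       # lambda_1 >= ... >= lambda_N
lam_c = np.sort(np.linalg.eigvalsh(U_c))[::-1]   # lambda_1^(c) >= ... >= lambda_n^(c)

# Poincare separation: each of the top n eigenvalues can only shrink.
assert np.all(lam[:n] - lam_c >= -1e-9)

# Delta plus the tail eigenvalues equals the trace objective in (4).
delta = np.sum(lam[:n] - lam_c)
Pi_w = C_w.T @ C_w
assert np.isclose(delta + lam[n:].sum(), np.trace(U - Pi_w @ U @ Pi_w))
```

The same check works for any PSD choice of $S$, including the signless Laplacians introduced in Section 4.4.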
The truncated SVD is a technique for finding the optimal rank-$n$ approximation of a general matrix $U$ in terms of the spectral norm or the Frobenius norm; it solves the problem

$$
\min_{\boldsymbol{C} \in \mathcal{O}_n} \operatorname{Tr}\left(\boldsymbol{U} - \boldsymbol{C}^{\intercal} \boldsymbol{C} \boldsymbol{U} \boldsymbol{C}^{\intercal} \boldsymbol{C}\right),
$$

where $\mathcal{O}_n$ is the class of all rank-$n$ matrices with orthonormal rows. The projection coarsening matrix $C_w$ belongs to this class.

Connection with Laplacian eigenmaps. The Laplacian eigenmap (Belkin & Niyogi, 2003) is a manifold learning technique that learns an $n$-dimensional embedding for $N$ points connected by a graph. The embedding matrix $\mathbf{Y}$ solves the trace problem $\min_{\mathbf{Y} \mathbf{D} \mathbf{Y}^{\intercal} = \mathbf{I}_n} \operatorname{Tr}(\mathbf{Y} \mathbf{L} \mathbf{Y}^{\intercal})$. If we let $\bar{\mathbf{Y}} = \mathbf{Y} \mathbf{D}^{\frac{1}{2}}$, so that $\bar{\mathbf{Y}} \bar{\mathbf{Y}}^{\intercal} = \mathbf{I}_n$, the problem is equivalent to

$$
\min_{\bar{\boldsymbol{Y}} \in \mathcal{O}_n} \operatorname{Tr}\left(\bar{\boldsymbol{Y}} \mathcal{L} \bar{\boldsymbol{Y}}^{\intercal}\right).
$$

Different from the truncated SVD, which uses the top singular vectors (eigenvectors) to form the solution, the Laplacian eigenmap uses the bottom eigenvectors of $\mathcal{L}$.

# 4.4. Signless Laplacians as similarity matrices

The theory established in Section 4.2 is applicable to any PSD matrix $S$, but for practical use we still have to define it. Based on the foregoing exposition, it is tempting to let $S$ be the Laplacian $L$ or the normalized Laplacian $\mathcal{L}$, because they are PSD and they reveal important information about the graph structure (Tsitsulin et al., 2018). In fact, as a real example, Chowdhury & Needham (2021) used the heat kernel as the similarity matrix to define the GW distance (and in this case the GW framework is related to spectral clustering).
However, two problems make such a choice troublesome. First, the (normalized) Laplacian is sparse, but its nonzero off-diagonal entries are negative. Thus, when $S$ is interpreted as a similarity matrix, a pair of nodes not connected by an edge becomes more similar than a pair of connected nodes, causing a dilemma. Second, under the trace optimization framework, one is intuitively lured to look for solutions toward the bottom eigenvectors of $S$, as in Laplacian eigenmaps; this is the opposite direction from the true solution of (4), which lies toward the top eigenvectors.

To resolve these problems, we propose to use the signless Laplacian (Cvetković et al., 2007), $D + A$, or its normalized version, $I_{N} + D^{-\frac{1}{2}}AD^{-\frac{1}{2}}$, as the similarity matrix $S$. These matrices are PSD and their nonzero entries are all positive. With such a choice, the spectral difference $\Delta$ in (3) behaves fundamentally differently from the spectral distance used by Jin et al. (2020) when defining the coarsening objective.

# 5. Computational Equivalence between Graph Coarsening and Weighted Kernel $K$-means

Recalling that $\boldsymbol{U} = \boldsymbol{W}^{\frac{1}{2}}\boldsymbol{S}\boldsymbol{W}^{\frac{1}{2}}$ and that $\Pi_w = C_w^\intercal C_w$ is a projector, we rewrite the coarsening objective in (4) as

$$
\operatorname{Tr}\left(\boldsymbol{W}^{\frac{1}{2}} \boldsymbol{S} \boldsymbol{W}^{\frac{1}{2}}\right) - \operatorname{Tr}\left(\boldsymbol{C}_w \boldsymbol{W}^{\frac{1}{2}} \boldsymbol{S} \boldsymbol{W}^{\frac{1}{2}} \boldsymbol{C}_w^{\intercal}\right). \tag{5}
$$

When $S$ is PSD, it can be interpreted as a kernel matrix: there exists a set of feature vectors $\{\phi_i\}$ such that $S_{ij} = \langle \phi_i, \phi_j \rangle$ for $i, j = 1, \ldots, N$.
Then, with simple algebraic manipulation, we see that (5) is equivalent to the well-known clustering objective

$$
\sum_{k} \sum_{i \in \mathcal{P}_k} m_i \left\| \phi_i - \boldsymbol{\mu}_k \right\|^2 \quad \text{with} \quad \boldsymbol{\mu}_k = \sum_{i \in \mathcal{P}_k} \frac{m_i}{c_k} \phi_i, \tag{6}
$$

where the norm $\|\cdot\|$ is induced by the inner product $\langle \cdot, \cdot \rangle$. Here, $\pmb{\mu}_k$ is the weighted center of the $\phi_i$ for all nodes $i$ belonging to cluster $k$. Hence, the weighted kernel $K$-means algorithm (Dhillon et al., 2004; 2007) can be applied to minimize (6) by iteratively recomputing the centers and updating the cluster assignments according to the distance of a node to all centers. We denote the squared distance between

# Algorithm 1 Kernel graph coarsening (KGC).

Input: $S$: similarity matrix, $m$: node mass vector, $n$: number of clusters

Output: $\mathcal{P} = \{\mathcal{P}_i\}_{i=1}^n$: node partition

1: function KGC($S, m, n$)
2: Initialize the $n$ clusters $\mathcal{P}^{(0)} = \left\{\mathcal{P}_1^{(0)}, \dots, \mathcal{P}_n^{(0)}\right\}$ and their masses

$$
c_j^{(0)} = \sum_{k \in \mathcal{P}_j^{(0)}} m_k, \quad \forall j \in [n].
$$

3: Set the iteration counter $t = 0$.
4: for node $i = 1$ to $N$ do
5: Find its new cluster index by (7):

$$
\operatorname{idx}(i) = \operatorname*{arg\,min}_{j \in [n]} \operatorname{dist}_j^{(t)}(i).
$$

6: end for
7: Update the clusters: for all $j \in [n]$,

$$
\mathcal{P}_j^{(t+1)} = \left\{ i : \operatorname{idx}(i) = j \right\}, \quad c_j^{(t+1)} = \sum_{k \in \mathcal{P}_j^{(t+1)}} m_k.
$$

8: if the partition $\mathcal{P}^{(t+1)}$ is invariant then
9: return $\mathcal{P}^{(t+1)}$

10: else
11: Set $t = t + 1$ and go to Line 4.
12: end if
13: end function

$\phi_{i}$ and any $\pmb{\mu}_{j}$ by $\mathrm{dist}_j^2(i)$, which is

$$
\boldsymbol{S}_{ii} - 2 \sum_{k \in \mathcal{P}_j} m_k \boldsymbol{S}_{ki} / c_j + \sum_{k_1, k_2 \in \mathcal{P}_j} m_{k_1} m_{k_2} \boldsymbol{S}_{k_1 k_2} / c_j^2. \tag{7}
$$

The $K$-means algorithm is summarized in Algorithm 1 and we call this method kernel graph coarsening (KGC).

KGC as a post-refinement of graph coarsening. KGC can be used as a standalone coarsening method. A potential drawback of this usage is that the $K$-means algorithm is sensitive to initialization, and the clustering quality can sometimes vary significantly. Even advanced initialization techniques, such as $K$-means++ (Arthur & Vassilvitskii, 2006), are not guaranteed to work well in practice. Moreover, KGC in its vanilla form does not fully utilize the graph information (such as node features), unless additional engineering of the similarity matrix $S$ is conducted. Faced with these drawbacks, we suggest a simple alternative: initialize KGC with the output of another coarsening method and use KGC to improve it (Scrucca & Raftery, 2015). In our experience, KGC almost always monotonically reduces the spectral difference $\Delta$ and improves the quality of the initial coarsening.

Time complexity. Let $T$ be the number of $K$-means iterations. The time complexity of KGC is $\mathcal{O}\left(T\left(M + Nn\right)\right)$, where $M = \mathrm{nnz}(S)$ is the number of nonzeros in $S$. See Appendix B.5 for the derivation of this cost and a comparison with the cost of spectral graph coarsening (Jin et al., 2020).

# 6.
Numerical Experiments

We evaluate graph coarsening methods, including ours, on eight benchmark graph datasets: MUTAG (Debnath et al., 1991; Kriege & Mutzel, 2012), PTC (Helma et al., 2001), PROTEINS (Borgwardt et al., 2005; Schomburg et al., 2004), MSRC (Neumann et al., 2016), IMDB (Yanardag & Vishwanathan, 2015), Tumblr (Oettershagen et al., 2020), AQSOL (Sorkun et al., 2019; Dwivedi et al., 2020), and ZINC (Irwin et al., 2012). Information about these datasets is summarized in Table 1.

Table 1: Summary of datasets. $|V|$ and $|E|$ denote the average numbers of nodes and edges, respectively. All graphs are treated as undirected. R(1) represents a regression task.
| Dataset | Classes | Size | \|V\| | \|E\| |
|---|---|---|---|---|
| MUTAG | 2 | 188 | 17.93 | 19.79 |
| PTC | 2 | 344 | 14.29 | 14.69 |
| PROTEINS | 2 | 1113 | 39.06 | 72.82 |
| MSRC | 8 | 221 | 39.31 | 77.35 |
| IMDB | 2 | 1000 | 19.77 | 96.53 |
| Tumblr | 2 | 373 | 53.11 | 199.78 |
| AQSOL | R(1) | 9823 | 17.57 | 17.86 |
| ZINC | R(1) | 12000 | 23.16 | 49.83 |
+ +We compare our method with the following baseline methods (Loukas, 2019; Jin et al., 2020): ① Variation Neighborhood Graph Coarsening (VNGC); ② Variation Edge Graph Coarsening (VEGC); ③ Multilevel Graph Coarsening (MGC); and ④ Spectral Graph Coarsening (SGC). For our method, we consider the vanilla KGC, which uses $K$ -means++ for initialization, and the variant KGC(A), which takes the output of the best-performing baseline method for initialization. More implementation details are provided in Appendix C. + +# 6.1. GW distance approximation + +For a sanity check, we evaluate each coarsening method on the approximation of the GW distance, to support our motivation of minimizing the upper bound in Theorem 4.2. + +We first compare the average squared GW distance, by using the normalized signless Laplacian $I_N + D^{-\frac{1}{2}}AD^{-\frac{1}{2}}$ as the similarity matrix $S$ (see Section 4.4). We vary the coarsened graph size $n = \lceil c*N\rceil$ for $c = 0.3,\dots ,0.9$ . For each graph in PTC and IMDB, we compute $\mathrm{GW}_2^2 (G,G^{(c)})$ and report the average in Table 5, Appendix C.1. We obtain similar observations across the two datasets. In particular, KGC and KGC(A) outperform baselines. When $c$ is small, it is harder + +Table 2: Change of the matrix of GW distances (computed as Frobenius norm error) and coarsening time. Dataset: PTC. Results are averaged over 10 runs. MGC is a deterministic method and therefore standard deviation is 0. KGC(A) is initialized with the MGC output. + +
| Coars. Mat. | Methods | Frob. Error ↓ | Time ↓ |
|---|---|---|---|
| Projection | VNGC | 182.85 ± 0.02 | 6.38 ± 0.01 |
| | VEGC | 54.81 ± 0.02 | 3.81 ± 0. |
| | MGC | 13.69 ± 0. | 6.71 ± 0.01 |
| | SGC | 12.41 ± 0.04 | 30.24 ± 0.07 |
| Averaging | VNGC | 17.34 ± 0.01 | 6.55 ± 0.18 |
| | VEGC | 9.22 ± 0.02 | 3.75 ± 0.01 |
| | MGC | 5.31 ± 0. | 6.59 ± 0.02 |
| | SGC | 6.06 ± 0.02 | 28.06 ± 0.10 |
| | KGC | 4.45 ± 0.03 | 1.34 ± 0.33 |
| | KGC(A) | 5.28 ± 0. | 0.27 ± 0. |
+ +for $K$ -means clustering to find the best clustering plan and hence KGC(A) works better. When $c$ gets larger, KGC can work better, probably because the baseline outputs are poor initializations. + +We then report the average gap between the left- and right-hand sides of the bound (2) in Table 6, Appendix C.1. For all methods, including the baselines, the gap is comparable to the actual squared distance shown in Table 5, showcasing the quality of the bound. Moreover, the gap decreases when $c$ increases, as expected. + +We next compute the matrix of GW distances, $Z$ (before coarsening) and $Z^{(c)}$ (after coarsening), and compute the change $\| Z - Z^{(c)} \|_F$ . Following previous works (Chan & Airoldi, 2014; Xu et al., 2020), we set $n = \lfloor N_{\max} / \log(N_{\max}) \rfloor$ . The similarity matrix for the coarsened graph uses the averaging coarsening matrix as advocated in Section 4.1; that is, $S^{(c)} = \bar{C}_w \left( I_N + D^{-\frac{1}{2}} AD^{-\frac{1}{2}} \right) \bar{C}_w^{\intercal}$ . Additionally, we use a variant of the similarity matrix, $S^{(c)} = I_n + (D^{(c)})^{-\frac{1}{2}} A^{(c)} (D^{(c)})^{-\frac{1}{2}}$ , resulting from the projection coarsening matrix, for comparison. + +Table 2 summarizes the results for PTC. It confirms that using "Averaging" is better than using "Projection" for the coarsening matrix. Additionally, KGC(A) initialized with MGC (the best baseline) induces a significantly small change (though no better than the change caused by KGC), further verifying that minimizing the loss (4) will lead to a small change in the GW distance. The runtime reported in the table confirms the analysis in Section 5, showing that KGC is efficient. Moreover, the runtime of KGC(A) is nearly negligible compared with that of the baseline method used for initializing KGC(A). 
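The KGC procedure of Algorithm 1 can be sketched compactly. The following is our illustrative NumPy rendition (the two-triangle toy graph, the fixed initialization, and the iteration cap are our own choices, and no empty-cluster guard is included), using the squared distance (7) and the normalized signless Laplacian of Section 4.4 as the kernel:

```python
import numpy as np

def kgc(S, m, n, labels0, iters=50):
    """Kernel graph coarsening (Algorithm 1): weighted kernel K-means on S."""
    N = len(m)
    labels = labels0.copy()
    for _ in range(iters):
        dist = np.zeros((N, n))
        for j in range(n):
            P = np.flatnonzero(labels == j)
            c_j = m[P].sum()
            # Squared distance (7) from each phi_i to the weighted center mu_j.
            cross = S[:, P] @ m[P] / c_j
            center = m[P] @ S[np.ix_(P, P)] @ m[P] / c_j**2
            dist[:, j] = np.diag(S) - 2 * cross + center
        new_labels = dist.argmin(axis=1)
        if np.array_equal(new_labels, labels):   # partition invariant: stop
            break
        labels = new_labels
    return labels

# Toy graph: two triangles joined by a single edge; unit node masses.
A = np.zeros((6, 6))
for i, j in [(0, 1), (0, 2), (1, 2), (3, 4), (3, 5), (4, 5), (2, 3)]:
    A[i, j] = A[j, i] = 1.0
deg = A.sum(axis=1)
# Normalized signless Laplacian as the similarity (kernel) matrix.
S = np.eye(6) + np.diag(deg ** -0.5) @ A @ np.diag(deg ** -0.5)

labels = kgc(S, np.ones(6), 2, labels0=np.array([0, 0, 0, 1, 1, 1]))
# The two triangles form the two clusters.
```

To mirror the Section 6.1 pipeline, one would then form $S^{(c)} = \bar{C}_w S \bar{C}_w^{\intercal}$ from the returned partition and evaluate $\mathrm{GW}_2^2(G, G^{(c)})$ with a GW solver, for instance `ot.gromov.gromov_wasserstein2` from the POT library cited in the references; that last call is an assumption about the reader's tooling, not part of this paper's code.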
![](images/d710aeeb1ee81abaad711cb31e252e28df6ad7cc8a41143ff3b9301275cf79c2.jpg)
Figure 2: Runtime of coarsening methods and relative error for spectrum preservation on Tumblr. KGC(A) is initialized with SGC (the black dashed curve); its error curve (the red solid curve) is slightly below that of SGC.

# 6.2. Laplacian spectrum preservation

We evaluate the capability of the methods in preserving the Laplacian spectrum, noting that the spectral objectives of the baseline methods differ from ours (see the remarks after Theorem 4.2). We coarsen each graph to $10\%$, $20\%$, $30\%$, and $40\%$ of its size and evaluate the metric $\frac{1}{5}\sum_{i=1}^{5}\frac{\lambda_i - \lambda_i^{(c)}}{\lambda_i}$ (the relative error of the five largest eigenvalues). The experiment is conducted on Tumblr and the calculation of the error metric is repeated ten times.

In Figure 2, we see that KGC attains an error comparable to the variational methods VNGC and VEGC, while incurring a lower time cost, especially when the coarsening ratio is small. Additionally, KGC(A) consistently improves on its initialization method SGC and attains the lowest error.

# 6.3. Graph classification using Laplacian spectrum

We test the performance of the various coarsening methods for graph classification. We follow the setting of Jin et al. (2020) and adopt the Network Laplacian Spectral Descriptor (Tsitsulin et al., 2018, NetLSD) along with a 1-NN classifier as the classification method. We set $n = 0.2N$. For evaluation, we select the best average accuracy of 10-fold cross validation among three different seeds. Additionally, we add two baselines, EIG and FULL, which respectively use the first $n$ eigenvalues and the full spectrum of $\mathcal{L}$ in NetLSD. As before, we initialize KGC(A) with the best baseline.

Table 3 shows that preserving the spectrum does not neces

Table 3: Classification accuracy with coarsened graphs five times smaller than the original graphs.
| Datasets | MUTAG | PTC | PROTEINS | MSRC | IMDB | Tumblr |
|---|---|---|---|---|---|---|
| VNGC | 76.11 ± 2.25 | 56.69 ± 2.52 | 65.44 ± 1.57 | 14.92 ± 1.57 | 53.90 ± 0.50 | 50.43 ± 2.62 |
| VEGC | 84.59 ± 2.02 | 56.39 ± 2.03 | 64.08 ± 1.11 | 16.80 ± 2.15 | 64.20 ± 1.90 | 48.26 ± 1.71 |
| MGC | 84.15 ± 3.14 | 54.66 ± 3.59 | 66.16 ± 1.64 | 15.36 ± 1.80 | 69.50 ± 1.42 | 50.14 ± 2.67 |
| SGC | 84.44 ± 2.86 | 53.79 ± 2.28 | 63.91 ± 1.51 | 16.76 ± 2.50 | 66.00 ± 1.26 | 48.53 ± 2.35 |
| KGC | 81.90 ± 2.74 | 61.58 ± 2.49 | 63.45 ± 0.83 | 19.84 ± 2.23 | 67.80 ± 1.65 | 52.52 ± 2.81 |
| KGC(A) | 86.23 ± 2.69 | 57.25 ± 2.16 | 66.43 ± 0.92 | 17.17 ± 2.91 | 69.20 ± 1.37 | 52.57 ± 2.22 |
| EIG | 85.61 ± 1.69 | 56.08 ± 2.28 | 64.35 ± 1.43 | 12.19 ± 2.79 | 68.70 ± 1.71 | 49.57 ± 1.95 |
| FULL | 84.59 ± 2.51 | 54.37 ± 2.12 | 67.51 ± 0.82 | 23.58 ± 2.50 | 69.90 ± 1.40 | 52.57 ± 3.36 |
Table 4: Graph regression results on AQSOL and ZINC. The asterisk * indicates that the TestMAE improvement of KGC(A) over its VEGC initialization is statistically significant at the $95\%$ confidence level in a paired $t$-test.
(a) AQSOL.
| Methods | TestMAE ± s.d. | TrainMAE ± s.d. | Epochs |
|---|---|---|---|
| VNGC | 1.403 ± 0.005 | 0.629 ± 0.018 | 135.75 |
| VEGC | 1.390 ± 0.005 | 0.702 ± 0.003 | 107.75 |
| MGC | 1.447 ± 0.005 | 0.628 ± 0.012 | 111.00 |
| SGC | 1.489 ± 0.010 | 0.676 ± 0.021 | 107.00 |
| KGC | 1.389 ± 0.015 | 0.678 ± 0.013 | 112.00 |
| KGC(A) | 1.383 ± 0.005* | 0.657 ± 0.013 | 124.75 |
| FULL | 1.372 ± 0.020 | 0.593 ± 0.030 | 119.50 |
+ +(b) ZINC. + +
| Methods | TestMAE ± s.d. | TrainMAE ± s.d. | Epochs |
|---|---|---|---|
| VNGC | 0.709 ± 0.005 | 0.432 ± 0.012 | 120.00 |
| VEGC | 0.646 ± 0.001 | 0.418 ± 0.008 | 138.25 |
| MGC | 0.677 ± 0.002 | 0.414 ± 0.006 | 112.50 |
| SGC | 0.649 ± 0.007 | 0.429 ± 0.008 | 111.75 |
| KGC | 0.737 ± 0.010 | 0.495 ± 0.012 | 113.50 |
| KGC(A) | 0.641 ± 0.003* | 0.433 ± 0.013 | 126.50 |
| FULL | 0.416 ± 0.006 | 0.313 ± 0.011 | 159.50 |
sarily leads to the best classification, since EIG and FULL, which use the original spectrum, are sometimes outperformed by other methods. On the other hand, our proposed method, KGC or KGC(A), is almost always the best.

# 6.4. Graph regression using GCNs

We follow the setting of Dwivedi et al. (2020) to perform graph regression on AQSOL and ZINC (see Appendix C.4 for details). For evaluation, we pre-coarsen the whole datasets, reducing the graph size by $70\%$, and run a GCN on the coarsened graphs. Table 4 suggests that KGC(A) initialized with VEGC (the best baseline) always returns the best test MAE. On ZINC, KGC sometimes suffers from its initialization, but its performance is still comparable to a reported MLP baseline (Dwivedi et al., 2020, TestMAE $0.706 \pm 0.006$).

# 7. Conclusions

In this work, we propose a new perspective for studying graph coarsening by analyzing the distance between graphs in the GW space. We derive an upper bound on the change of the distance caused by coarsening, which depends only on the spectral difference $\Delta = \sum_{i=1}^{n} \left( \lambda_i - \lambda_i^{(c)} \right)$. This bound in a way justifies the idea of preserving spectral information as the main objective of graph coarsening, although our definition of "spectrum-preserving" differs from prior spectral coarsening techniques. More importantly, we point out the equivalence between the bound and the objective of weighted kernel $K$-means clustering. This equivalence leads to a new coarsening method we term KGC. Our experimental results validate the theoretical analysis, showing that KGC preserves the GW distance between graphs and improves the accuracy of graph-level classification and regression tasks.

# Acknowledgements

We appreciate all the constructive and insightful comments from the anonymous reviewers. Yun Yang's research was supported in part by U.S. NSF grant DMS-2210717. Jie Chen acknowledges support from the MIT-IBM Watson AI Lab.
+ +# References + +Arthur, D. and Vassilvitskii, S. k-means++: The advantages of careful seeding. Technical report, Stanford, 2006. +Belkin, M. and Niyogi, P. Laplacian eigenmaps for dimensionality reduction and data representation. Neural computation, 15(6):1373-1396, 2003. +Bellman, R. Introduction to matrix analysis. SIAM, 1997. +Borgwardt, K. M., Ong, C. S., Schonauer, S., Vishwanathan, S., Smola, A. J., and Kriegel, H.-P. Protein function prediction via graph kernels. Bioinformatics, 21(suppl_1): i47-i56, 2005. +Briggs, W. L., Henson, V. E., and McCormick, S. F. A Multigrid Tutorial. Society for Industrial and Applied Mathematics, 2nd edition, 2000. +Cai, C., Wang, D., and Wang, Y. Graph coarsening with neural networks. In 9th International Conference on Learning Representations, ICLR 2021, Virtual Event, Austria, May 3-7, 2021. OpenReview.net, 2021. +Chan, S. and Airoldi, E. A consistent histogram estimator for exchangeable graph models. In Xing, E. P. and Jebara, T. (eds.), Proceedings of the 31st International Conference on Machine Learning, volume 32 of Proceedings of Machine Learning Research, pp. 208-216, Beijing, China, 22-24 Jun 2014. PMLR. URL https://proceedings.mlr.press/v32/chan14.html. +Chen, H., Perozzi, B., Hu, Y., and Skiena, S. HARP: Hierarchical representation learning for networks. In AAAI, 2018. +Chen, J., Saad, Y., and Zhang, Z. Graph coarsening: from scientific computing to machine learning. *SeMA Journal*, 79(1):187-223, 2022. +Chowdhury, S. and Mémoli, F. The gromov-wasserstein distance between networks and stable network invariants. Information and Inference: A Journal of the IMA, 8(4): 757-787, 2019. +Chowdhury, S. and Needham, T. Generalized spectral clustering via gromov-wasserstein learning. In International Conference on Artificial Intelligence and Statistics, pp. 712-720. PMLR, 2021. +Chung, F. R. and Langlands, R. P. A combinatorial laplacian with vertex weights. journal of combinatorial theory, Series A, 75(2):316-327, 1996. 
+Cvetković, D., Rowlinson, P., and Simić, S. K. Signless laplacians of finite graphs. Linear Algebra and its applications, 423(1):155-171, 2007. + +Debnath, A. K., Lopez de Compadre, R. L., Debnath, G., Shusterman, A. J., and Hansch, C. Structure-activity relationship of mutagenic aromatic and heteroaromatic nitro compounds. correlation with molecular orbital energies and hydrophobicity. Journal of medicinal chemistry, 34 (2):786-797, 1991. +Defferrard, M., Bresson, X., and Vandergheynst, P. Convolutional neural networks on graphs with fast localized spectral filtering. In Advances in Neural Information Processing Systems 29: Annual Conference on Neural Information Processing Systems 2016, December 5-10, 2016, Barcelona, Spain, pp. 3837-3845, 2016. +Dhillon, I. S., Guan, Y., and Kulis, B. Kernel k-means: spectral clustering and normalized cuts. In Proceedings of the tenth ACM SIGKDD international conference on Knowledge discovery and data mining, pp. 551-556, 2004. +Dhillon, I. S., Guan, Y., and Kulis, B. Weighted graph cuts without eigenvectors a multilevel approach. IEEE transactions on pattern analysis and machine intelligence, 29(11):1944-1957, 2007. +Dobson, P. D. and Doig, A. J. Distinguishing enzyme structures from non-enzymes without alignments. Journal of molecular biology, 330(4):771-783, 2003. +Dwivedi, V. P., Joshi, C. K., Luu, A. T., Laurent, T., Bengio, Y., and Bresson, X. Benchmarking graph neural networks. arXiv preprint arXiv:2003.00982, 2020. +Flamary, R., Courty, N., Gramfort, A., Alaya, M. Z., Bois-bunon, A., Chambon, S., Chapel, L., Corenflos, A., Fatras, K., Fournier, N., Gautheron, L., Gayraud, N. T., Janati, H., Rakotomamonjy, A., Redko, I., Rolet, A., Schutz, A., Seguy, V., Sutherland, D. J., Tavenard, R., Tong, A., and Vayer, T. Pot: Python optimal transport. Journal of Machine Learning Research, 22(78):1-8, 2021. URL http://jmlr.org/papers/v22/20-451.html. +Grattarola, D., Zambon, D., Bianchi, F. M., and Alippi, C. 
Understanding pooling in graph neural networks. IEEE Transactions on Neural Networks and Learning Systems, 2022. +Helma, C., King, R. D., Kramer, S., and Srinivasan, A. The predictive toxicology challenge 2000-2001. Bioinformatics, 17(1):107-108, 2001. +Hermsdorff, G. B. and Gunderson, L. M. A unifying framework for spectrum-preserving graph sparsification and coarsening. In Advances in Neural Information Processing Systems 32: Annual Conference on Neural Information Processing Systems 2019, NeurIPS 2019, December 8-14, 2019, Vancouver, BC, Canada, pp. 7734-7745, 2019. + +Hu, W., Fey, M., Zitnik, M., Dong, Y., Ren, H., Liu, B., Catasta, M., and Leskovec, J. Open graph benchmark: Datasets for machine learning on graphs. Advances in neural information processing systems, 33:22118-22133, 2020. +Huang, Z., Zhang, S., Xi, C., Liu, T., and Zhou, M. Scaling up graph neural networks via graph coarsening. In Proceedings of the 27th ACM SIGKDD Conference on Knowledge Discovery & Data Mining, pp. 675-684, 2021. +Irwin, J. J., Sterling, T., Mysinger, M. M., Bolstad, E. S., and Coleman, R. G. Zinc: a free tool to discover chemistry for biology. Journal of chemical information and modeling, 52(7):1757-1768, 2012. +Ivashkin, V. and Chebotarev, P. Do logarithmic proximity measures outperform plain ones in graph clustering? In International Conference on Network Analysis, pp. 87-105. Springer, 2016. +Jin, W., Zhao, L., Zhang, S., Liu, Y., Tang, J., and Shah, N. Graph condensation for graph neural networks. arXiv preprint arXiv:2110.07580, 2021. +Jin, Y., Loukas, A., and JáJa, J. Graph coarsening with preserved spectral properties. In The 23rd International Conference on Artificial Intelligence and Statistics, AISTATS 2020, 26-28 August 2020, Online [Palermo, Sicily, Italy], volume 108 of Proceedings of Machine Learning Research, pp. 4452-4462. PMLR, 2020. +Karypis, G. and Kumar, V. A fast and high quality multilevel scheme for partitioning irregular graphs. 
SIAM Journal on Scientific Computing, 20(1):359-392, 1999. +Kipf, T. N. and Welling, M. Semi-supervised classification with graph convolutional networks. In 5th International Conference on Learning Representations, ICLR 2017, Toulouse, France, April 24-26, 2017, Conference Track Proceedings. OpenReview.net, 2017. +Kokiopoulou, E., Chen, J., and Saad, Y. Trace optimization and eigenproblems in dimension reduction methods. Numerical Linear Algebra with Applications, 18(3):565-602, 2011. +Kriege, N. M. and Mutzel, P. Subgraph matching kernels for attributed graphs. In Proceedings of the 29th International Conference on Machine Learning, ICML 2012, Edinburgh, Scotland, UK, June 26 - July 1, 2012. icml.cc / Omnipress, 2012. +Liu, Y., Safavi, T., Dighe, A., and Koutra, D. Graph summarization methods and applications: A survey. ACM Comput. Surv., 51(3), jun 2018. ISSN 0360-0300. doi: 10.1145/3186727. URL https://doi.org/10.1145/3186727. + +Loukas, A. Graph reduction with spectral and cut guarantees. Journal of Machine Learning Research, 20(116):1-42, 2019. +Loukas, A. and Vandergheynst, P. Spectrally approximating large graphs with smaller graphs. In Proceedings of the 35th International Conference on Machine Learning, ICML 2018, Stockholm, Sweden, July 10-15, 2018, volume 80 of Proceedings of Machine Learning Research, pp. 3243-3252. PMLR, 2018. +Mémoli, F. Gromov-wasserstein distances and the metric approach to object matching. Foundations of computational mathematics, 11(4):417-487, 2011. +Minka, T. P. Old and new matrix algebra useful for statistics, 2000. https://tminka.github.io/papers/matrix/. +Morris, C., Kriege, N. M., Bause, F., Kersting, K., Mutzel, P., and Neumann, M. Tudataset: A collection of benchmark datasets for learning with graphs. In ICML 2020 Workshop on Graph Representation Learning and Beyond (GRL+ 2020), 2020. +Neumann, M., Garnett, R., Bauckhage, C., and Kersting, K. Propagation kernels: efficient graph kernels from propagated information. 
Machine Learning, 102(2):209-245, 2016. +Oettershagen, L., Kriege, N. M., Morris, C., and Mutzel, P. Temporal graph kernels for classifying dissemination processes. In Proceedings of the 2020 SIAM International Conference on Data Mining, SDM 2020, Cincinnati, Ohio, USA, May 7-9, 2020, pp. 496-504. SIAM, 2020. doi: 10.1137/1.9781611976236.56. +Peyre, G., Cuturi, M., and Solomon, J. Gromov-wasserstein averaging of kernel and distance matrices. In Proceedings of the 33nd International Conference on Machine Learning, ICML 2016, New York City, NY, USA, June 19-24, 2016, volume 48 of JMLR Workshop and Conference Proceedings, pp. 2664-2672. JMLR.org, 2016. +Ron, D., Safro, I., and Brandt, A. Relaxation-based coarsening and multiscale graph organization. *Multiscale Modeling & Simulation*, 9(1):407-423, 2011. +Ruge, J. W. and Stüben, K. Algebraic multigrid. In Multigrid methods, pp. 73-130. SIAM, 1987. +Ruhe, A. Perturbation bounds for means of eigenvalues and invariant subspaces. BIT Numerical Mathematics, 10(3): 343-354, 1970. +Saad, Y. Iterative Methods for Sparse Linear Systems. Society for Industrial and Applied Mathematics, 2nd edition, 2003. + +Schomburg, I., Chang, A., Ebeling, C., Gremse, M., Heldt, C., Huhn, G., and Schomburg, D. Brenda, the enzyme database: updates and major new developments. *Nucleic acids research*, 32(suppl_1):D431-D433, 2004. +Scrucca, L. and Raftery, A. E. Improved initialisation of model-based clustering using gaussian hierarchical partitions. Advances in data analysis and classification, 9(4): 447-460, 2015. +Shi, J. and Malik, J. Normalized cuts and image segmentation. IEEE Transactions on pattern analysis and machine intelligence, 22(8):888-905, 2000. +Sorkun, M. C., Khetan, A., and Er, S. Aqsoldb, a curated reference set of aqueous solubility and 2d descriptors for a diverse set of compounds. Scientific data, 6(1):1-8, 2019. +Titouan, V., Courty, N., Tavenard, R., and Flamary, R. 
Optimal transport for structured data with application on graphs. In International Conference on Machine Learning, pp. 6275-6284. PMLR, 2019. +Tsitsulin, A., Mottin, D., Karras, P., Bronstein, A. M., and Müller, E. Netlsd: Hearing the shape of a graph. In Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, KDD 2018, London, UK, August 19-23, 2018, pp. 2347-2356. ACM, 2018. doi: 10.1145/3219819.3219991. +Vincent-Cuaz, C., Flamary, R., Corneli, M., Vayer, T., and Courty, N. Semi-relaxed gromov-wasserstein divergence and applications on graphs. In International Conference on Learning Representations, 2022. URL https://openreview.net/forum?id=RShaMexjc-x. +Xu, H., Luo, D., and Carin, L. Scalable gromov-wasserstein learning for graph partitioning and matching. Advances in neural information processing systems, 32, 2019a. +Xu, H., Luo, D., Zha, H., and Carin, L. Gromov-wasserstein learning for graph matching and node embedding. In Proceedings of the 36th International Conference on Machine Learning, ICML 2019, 9-15 June 2019, Long Beach, California, USA, volume 97 of Proceedings of Machine Learning Research, pp. 6932-6941. PMLR, 2019b. +Xu, H., Luo, D., Carin, L., and Zha, H. Learning graphons via structured gromov-wasserstein barycenters. In AAAI Conference on Artificial Intelligence, 2020. +Yanardag, P. and Vishwanathan, S. V. N. Deep graph kernels. In Proceedings of the 21th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, Sydney, NSW, Australia, August 10-13, 2015, pp. 1365-1374. ACM, 2015. doi: 10.1145/2783258.2783417. + +Ying, R., You, J., Morris, C., Ren, X., Hamilton, W. L., and Leskovec, J. Hierarchical graph representation learning with differentiable pooling. In NeurIPS, 2018. +Zheng, L., Xiao, Y., and Niu, L. A brief survey on computational gromov-wasserstein distance. Procedia Computer Science, 199:697-702, 2022. + +# A. List of notations + +
| Notation | Meaning |
|---|---|
| $L$ ($L^{(c)}$) | (coarsened) graph Laplacian matrix |
| $\mathcal{L}$ ($\mathcal{L}^{(c)}$) | (coarsened) normalized graph Laplacian matrix |
| $G^{(c)}$ | coarsened graph |
| $D$ ($D^{(c)}$) | (coarsened) degree matrix |
| $A$ ($A^{(c)}$) | (coarsened) adjacency matrix |
| $C_p$ | membership matrix, coarsening matrix with entries $\in \{0,1\}$ |
| $C_w$ | orthogonal coarsening matrix, a variant of the coarsening matrix |
| $\bar{C}_w$ | weighted averaging matrix, a variant of the coarsening matrix |
| $\Pi_w$ | projection matrix induced by $C_w$ |
| $W$ | diagonal node mass matrix |
| $S$ ($S^{(c)}$) | (coarsened) similarity matrix |
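As a numerical sanity check on these objects, the sketch below (ours; NumPy only, with arbitrary illustrative masses and a two-cluster partition of five nodes) constructs $C_p$, $\bar{C}_w$, $C_w$, and $\Pi_w$ and verifies their basic properties:

```python
import numpy as np

# Arbitrary illustration: N = 5 nodes with masses m, partitioned into n = 2 sets.
N, n = 5, 2
m = np.array([1.0, 2.0, 0.5, 1.5, 1.0])
parts = [[0, 1], [2, 3, 4]]            # P_1 = {v1, v2}, P_2 = {v3, v4, v5}

C_p = np.zeros((n, N))
for k, P in enumerate(parts):
    C_p[k, P] = 1.0                    # membership matrix

W = np.diag(m)
c = C_p @ m                            # cluster masses c_k

C_bar = np.diag(1 / c) @ C_p @ W              # weighted averaging matrix
C_w = np.diag(c ** -0.5) @ C_p @ np.sqrt(W)   # orthogonal coarsening matrix
Pi_w = C_w.T @ C_w                            # induced projection matrix

assert np.allclose(C_w @ C_w.T, np.eye(n))          # orthonormal rows
assert np.allclose(Pi_w @ Pi_w, Pi_w)               # Pi_w is a projector
assert np.allclose(C_bar @ np.ones(N), np.ones(n))  # averaging preserves constants
```

The last assertion reflects that $\bar{C}_w$ computes mass-weighted averages, so it maps the all-ones vector on the nodes to the all-ones vector on the supernodes.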
We elaborate more on the family of coarsening matrices here. Specifically, we define the membership matrix $C_p \in \mathbb{R}^{n \times N}$ as

$$
\boldsymbol{C}_p(k, i) = \begin{cases} 1 & v_i \in \mathcal{P}_k \\ 0 & \text{otherwise} \end{cases}.
$$

Let $p_i = |\mathcal{P}_i|$ be the number of nodes in the $i$-th partition $\mathcal{P}_i$ and $n$ be the number of partitions. Then, we define the weight matrix $W = \mathrm{diag}(m_1, \dots, m_N)$ and let $c_i = \sum_{j \in \mathcal{P}_i} m_j$. We can thus define the weighted averaging matrix as

$$
\bar{C}_w = \mathrm{diag}\left(\frac{1}{c_1}, \frac{1}{c_2}, \dots, \frac{1}{c_n}\right) C_p W,
$$

and the orthogonal coarsening matrix as

$$
\boldsymbol{C}_w = \mathrm{diag}\left(\sqrt{\tfrac{1}{c_1}}, \sqrt{\tfrac{1}{c_2}}, \dots, \sqrt{\tfrac{1}{c_n}}\right) \boldsymbol{C}_p \boldsymbol{W}^{\frac{1}{2}}.
$$

Let the projection matrix be $\Pi_w = C_w^\intercal C_w$. We can check that $\Pi_w^2 = \Pi_w$.

# B. Derivations Omitted in the Main Text

# B.1. Weighted graph coarsening leads to doubly-weighted Laplacian

We show in the following that $C_w \mathcal{L} C_w^\intercal$ can reproduce the normalized graph Laplacian $\mathcal{L}^{(c)}$.
In this case, $\boldsymbol{C}_w = (\boldsymbol{C}_p \boldsymbol{D} \boldsymbol{C}_p^\intercal)^{-\frac{1}{2}} \boldsymbol{C}_p \boldsymbol{D}^{\frac{1}{2}}$, and interestingly $\boldsymbol{C}_p \boldsymbol{D} \boldsymbol{C}_p^\intercal = \mathrm{diag}(\boldsymbol{C}_p \boldsymbol{A} \boldsymbol{C}_p^\intercal \mathbf{1}) = \mathrm{diag}(\boldsymbol{A}^{(c)} \mathbf{1}) = \boldsymbol{D}^{(c)}$, indicating

$$
\begin{aligned}
\boldsymbol{C}_w \boldsymbol{\mathcal{L}} \boldsymbol{C}_w^\intercal &= \boldsymbol{I}_n - \boldsymbol{C}_w \boldsymbol{D}^{-\frac{1}{2}} \boldsymbol{A} \boldsymbol{D}^{-\frac{1}{2}} \boldsymbol{C}_w^\intercal \\
&= \boldsymbol{I}_n - \left(\boldsymbol{D}^{(c)}\right)^{-\frac{1}{2}} \boldsymbol{C}_p \boldsymbol{D}^{\frac{1}{2}} \boldsymbol{D}^{-\frac{1}{2}} \boldsymbol{A} \boldsymbol{D}^{-\frac{1}{2}} \boldsymbol{D}^{\frac{1}{2}} \boldsymbol{C}_p^\intercal \left(\boldsymbol{D}^{(c)}\right)^{-\frac{1}{2}} \\
&= \boldsymbol{I}_n - \left(\boldsymbol{D}^{(c)}\right)^{-\frac{1}{2}} \boldsymbol{A}^{(c)} \left(\boldsymbol{D}^{(c)}\right)^{-\frac{1}{2}} = \boldsymbol{\mathcal{L}}^{(c)}.
\end{aligned}
$$

The results above imply that we can unify previous coarsening results under the weighted graph coarsening framework in this paper, with a proper choice of similarity matrix $\boldsymbol{S}$ and node measure $\mu$.

# B.2. A toy example of coarsening a 3-node graph

Consider a fully-connected 3-node graph with equal weights for nodes and the partition $\{\{v_1\}, \{v_2, v_3\}\}$.
Its similarity matrix is $\boldsymbol{S} = \boldsymbol{D} + \boldsymbol{A}$, and the three possible coarsened similarity matrices will respectively be

$$
\boldsymbol{S} = \begin{pmatrix} 2 & 1 & 1 \\ 1 & 2 & 1 \\ 1 & 1 & 2 \end{pmatrix}, \quad \bar{\boldsymbol{C}}_w \boldsymbol{S} \bar{\boldsymbol{C}}_w^\intercal = \begin{pmatrix} 2 & 1 \\ 1 & 3/2 \end{pmatrix}, \quad \boldsymbol{C}_w \boldsymbol{S} \boldsymbol{C}_w^\intercal = \begin{pmatrix} 2 & \sqrt{2} \\ \sqrt{2} & 3 \end{pmatrix}, \quad \boldsymbol{C}_p \boldsymbol{S} \boldsymbol{C}_p^\intercal = \begin{pmatrix} 2 & 2 \\ 2 & 6 \end{pmatrix}.
$$

It is clear that $\bar{\boldsymbol{C}}_w \boldsymbol{S} \bar{\boldsymbol{C}}_w^\intercal$, which has the most appropriate entry magnitude to give the minimal GW distance, is different from the $\boldsymbol{C}_p \boldsymbol{S} \boldsymbol{C}_p^\intercal$ proposed in Jin et al. (2020) and Cai et al. (2021). We note that in $\boldsymbol{S}^{(c)} = \bar{\boldsymbol{C}}_w \boldsymbol{S} \bar{\boldsymbol{C}}_w^\intercal$ the edge weight (similarity) is still 1, since we explicitly specify that the node weight of the supernode becomes 2. The new GW geometric framework thus decouples node weights and similarity intensities.

# B.3. Deriving the "Averaging" magnitude from the constrained srGW barycenter problem

We first introduce the definition of the srGW divergence. For a given graph $G$ with node mass distribution $\boldsymbol{m} \in \Delta^{N-1}$ and similarity matrix $\boldsymbol{S}$, we can construct another graph $G^{(c)}$ with a given similarity matrix $\boldsymbol{S}^{(c)}$ and an unspecified node mass distribution $\boldsymbol{m}^{(c)} \in \Delta^{n-1}$. To better reflect the dependence of $\mathrm{GW}_2^2(G, G^{(c)})$ on the node mass distributions and similarity matrices, we abuse the notation $\mathrm{GW}_2$ as $\mathrm{GW}_2^2(\boldsymbol{S}, \boldsymbol{m}, \boldsymbol{S}^{(c)}, \boldsymbol{m}^{(c)})$ and write $\Pi(\boldsymbol{m}, \boldsymbol{m}^{(c)}) := \{\boldsymbol{T} \in \mathbb{R}_+^{N \times n} : \boldsymbol{T}\mathbf{1} = \boldsymbol{m}, \boldsymbol{T}^\intercal\mathbf{1} = \boldsymbol{m}^{(c)}\}$ in this subsection. Vincent-Cuaz et al.
(2022) then defined

$$
\operatorname{srGW}_2^2\left(\boldsymbol{S}, \boldsymbol{m}, \boldsymbol{S}^{(c)}\right) := \min_{\boldsymbol{m}^{(c)}} \operatorname{GW}_2^2\left(\boldsymbol{S}, \boldsymbol{m}, \boldsymbol{S}^{(c)}, \boldsymbol{m}^{(c)}\right),
$$

and we can further find an optimal $\boldsymbol{S}^{(c)}$ (w.r.t. the srGW divergence) by solving the following srGW barycenter problem with only one input $(\boldsymbol{S}, \boldsymbol{m})$, i.e.,

$$
\min_{\boldsymbol{S}^{(c)}} \operatorname{srGW}_2^2(\boldsymbol{S}, \boldsymbol{m}, \boldsymbol{S}^{(c)}). \tag{8}
$$

We can then transform Equation (8) as follows to reveal its connection with our proposed "Averaging" magnitude $\bar{\boldsymbol{C}}_w \boldsymbol{S} \bar{\boldsymbol{C}}_w^\intercal$ for the coarsened similarity matrix $\boldsymbol{S}^{(c)}$:

$$
\begin{aligned}
\min_{\boldsymbol{S}^{(c)}} \operatorname{srGW}_2^2(\boldsymbol{S}, \boldsymbol{m}, \boldsymbol{S}^{(c)}) &= \min_{\boldsymbol{S}^{(c)}, \boldsymbol{m}^{(c)}} \operatorname{GW}_2^2(\boldsymbol{S}, \boldsymbol{m}, \boldsymbol{S}^{(c)}, \boldsymbol{m}^{(c)}) \\
&= \min_{\boldsymbol{S}^{(c)}} \min_{\boldsymbol{m}^{(c)}} \min_{\boldsymbol{T} \in \Pi(\boldsymbol{m}, \boldsymbol{m}^{(c)})} \left\langle \boldsymbol{M}\left(\boldsymbol{S}, \boldsymbol{S}^{(c)}, \boldsymbol{T}\right), \boldsymbol{T}\left(\boldsymbol{m}, \boldsymbol{m}^{(c)}\right) \right\rangle \\
&= \min_{\boldsymbol{m}^{(c)}, \boldsymbol{T} \in \Pi(\boldsymbol{m}, \boldsymbol{m}^{(c)})} \min_{\boldsymbol{S}^{(c)}} \left\langle \boldsymbol{M}\left(\boldsymbol{S}, \boldsymbol{S}^{(c)}, \boldsymbol{T}\right), \boldsymbol{T}\left(\boldsymbol{m}, \boldsymbol{m}^{(c)}\right) \right\rangle \\
&= \min_{\boldsymbol{T} \in \mathbb{R}_+^{N \times n}: \boldsymbol{T}\mathbf{1} = \boldsymbol{m}} \min_{\boldsymbol{S}^{(c)}} \left\langle \boldsymbol{M}\left(\boldsymbol{S}, \boldsymbol{S}^{(c)}, \boldsymbol{T}\right), \boldsymbol{T} \right\rangle.
\end{aligned}
$$

In the display above, we write $\boldsymbol{M}(\boldsymbol{S}, \boldsymbol{S}^{(c)}, \boldsymbol{T})$ and $\boldsymbol{T}(\boldsymbol{m}, \boldsymbol{m}^{(c)})$ to make explicit what the cross-graph dissimilarity matrix $\boldsymbol{M}$ and the transport matrix $\boldsymbol{T}$ depend on. The last equation holds since, for a given transport matrix $\boldsymbol{T}$, the new node mass distribution $\boldsymbol{m}^{(c)}$ is uniquely decided by $\boldsymbol{m}^{(c)} = \boldsymbol{T}^\intercal \mathbf{1}$.

Notably, there is a closed-form solution for the inner minimization problem $\min_{\boldsymbol{S}^{(c)}} \left\langle \boldsymbol{M}\left(\boldsymbol{S}, \boldsymbol{S}^{(c)}, \boldsymbol{T}\right), \boldsymbol{T} \right\rangle$. Peyre et al. (2016, Equation (14)) derive that the optimal $\boldsymbol{S}^{(c)}$ reads

$$
\operatorname{diag}\left(\boldsymbol{m}^{(c)}\right)^{-1} \boldsymbol{T}^\intercal \boldsymbol{S} \boldsymbol{T} \operatorname{diag}\left(\boldsymbol{m}^{(c)}\right)^{-1}.
$$

If we then enforce the restriction that the node mass transport must be performed in a clustering manner (i.e., the transport matrix is $\boldsymbol{T} = \boldsymbol{W}\boldsymbol{C}_p^\intercal \in \mathbb{R}^{N \times n}$ for a certain membership matrix $\boldsymbol{C}_p$), we exactly have $\boldsymbol{S}^{(c)} = \bar{\boldsymbol{C}}_w \boldsymbol{S} \bar{\boldsymbol{C}}_w^\intercal$. The derivation above verifies the effectiveness of the "Averaging" magnitude we propose.

# B.4. Comparing spectral difference $\Delta$ to spectral distance in Jin et al. (2020)

Jin et al. (2020) proposed to specify the graph coarsening plan by minimizing the following full spectral distance term (c.f.
their Equation (8)):

$$
\begin{aligned}
SD_{\text{full}}\left(G, G^{(c)}\right) &= \sum_{i=1}^{k_1} |\boldsymbol{\lambda}(i) - \boldsymbol{\lambda}_c(i)| + \sum_{i=k_1+1}^{k_2} |\boldsymbol{\lambda}(i) - 1| + \sum_{i=k_2+1}^{N} |\boldsymbol{\lambda}(i) - \boldsymbol{\lambda}_c(i - N + n)| \\
&= \sum_{i=1}^{k_1} \left(\boldsymbol{\lambda}_c(i) - \boldsymbol{\lambda}(i)\right) + \sum_{i=k_2+1}^{N} \left(\boldsymbol{\lambda}(i) - \boldsymbol{\lambda}_c(i - N + n)\right) + \sum_{i=k_1+1}^{k_2} |\boldsymbol{\lambda}(i) - 1|,
\end{aligned}
$$

where the $\lambda(i)$'s and $\lambda_c(i)$'s correspond to the eigenvalues (in ascending order) of the normalized Laplacians $\mathcal{L}$ and $\mathcal{L}^{(c)}$, and $k_1$ and $k_2$ are defined as $k_1 = \max\{i : \lambda_c(i) < 1\}$ and $k_2 = N - n + k_1$. The last equation holds due to their Interlacing Property 4.1 (similar to Theorem D.1).

We note that (i) the two "spectral" loss functions, $\Delta$ and $SD_{\mathrm{full}}$, refer to different spectra: we leverage the spectrum of $C_w U C_w^\intercal$ while they focus on graph Laplacians, and our framework is more general and takes node weights into consideration; (ii) they actually divided the $\lambda_c(i)$'s into two sets and respectively compared them to $\lambda(i)$ and $\lambda(i + N - n)$; the signs for $\lambda(i) - \lambda_c(i)$ and $\lambda(i + N - n) - \lambda_c(i)$ are thus different.

# B.5. Time complexity of Algorithm 1

We first recall the complexity of Algorithm 1 stated in Section 5. Let $T$ be an upper bound on the number of $K$-means iterations; the time complexity of Algorithm 1 is then $\mathcal{O}(T(M + Nn))$, where $M = \mathrm{nnz}(S)$ is the number of non-zero elements in $S$.
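Equation (7) is not restated in this appendix; assuming it takes the standard weighted kernel $K$-means form for the node-to-cluster distance, the cost pattern behind the stated complexity can be sketched as follows. The function name and the dense numpy representation are illustrative; with a sparse $S$, the two matrix products below touch each non-zero entry once, which is the source of the $\mathcal{O}(M)$ terms.

```python
import numpy as np

def kernel_kmeans_dists(S, w, labels, n):
    """Node-to-cluster distances for one weighted kernel K-means iteration.

    Assumed form (standard weighted kernel K-means):
      dist_j(i) = S_ii - 2 * (sum_{l in P_j} w_l S_il) / s_j
                       + (sum_{l,l' in P_j} w_l w_l' S_ll') / s_j ** 2
    """
    N = S.shape[0]
    Z = np.zeros((N, n))
    Z[np.arange(N), labels] = w                    # weighted cluster-indicator matrix
    s = Z.sum(axis=0)                              # cluster masses s_j
    SZ = S @ Z                                     # O(M) for sparse S: one update per non-zero
    third = np.einsum('ij,ij->j', Z, SZ) / s**2    # O(M) pre-computation, independent of i
    return np.diag(S)[:, None] - 2 * SZ / s + third  # O(Nn) to assemble all (i, j) pairs

# Sanity check: with S = X X^T the distances coincide with squared Euclidean
# distances to the (weighted) cluster centroids.
X = np.array([[0.0], [0.2], [5.0], [5.4]])
S = X @ X.T
dists = kernel_kmeans_dists(S, np.ones(4), np.array([0, 0, 1, 1]), 2)
centroids = np.array([X[:2].mean(), X[2:].mean()])
assert np.allclose(dists, (X - centroids[None, :]) ** 2)
```

The subsequent paragraph elaborates on why the first two terms cost $\mathcal{O}(M)$ in total and the final assembly another $\mathcal{O}(Nn)$.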
The cost mainly comes from the computation of the "distance" (7): it takes $\mathcal{O}(M)$ time to obtain the second term in Equation (7) and pre-compute the third term (which is independent of $i$); obtaining the exact $\mathrm{dist}_j(i)$ for all the $Nn$ $(i,j)$ pairs requires another $\mathcal{O}(Nn)$ time. Compared to the previous spectral graph coarsening method (Jin et al., 2020), Algorithm 1 removes the partial sparse eigenvalue decomposition, which takes $\mathcal{O}\left(R(Mn + Nn^2)\right)$ time using Lanczos iterations with $R$ restarts. The $K$-means part of spectral graph coarsening takes $\mathcal{O}(TNn^2)$ for a certain choice of the eigenvector feature dimension $k_1$ (Jin et al., 2020, Section 5.2), while the weighted kernel $K$-means clustering nested in our algorithm can better utilize the sparsity in the similarity matrix.

# C. Details of Experiments

We first introduce the hardware and the codebases we utilize in the experiments. The algorithms tested are all implemented in unoptimized Python code, and run with one core of a server CPU (Intel(R) Xeon(R) Gold 6240R CPU @ 2.40GHz) on Ubuntu 18.04. Specifically, our method KGC is developed based on the (unweighted) kernel $K$-means clustering program provided by Ivashkin & Chebotarev (2016). For the other baseline methods, the variation methods VNGC and VEGC are implemented by Loukas (2019), and the spectrum-preserving methods MGC and SGC are implemented by Jin et al. (2020); the S-GWL method (Xu et al., 2019a; Chowdhury & Needham, 2021) is tested as well (it can be found in the GitHub repository of this work), though this algorithm cannot guarantee that the number of output partitions is the same as specified. For the codebases, we implement the experiments mainly using the code by Jin et al. (2020) and Dwivedi et al. (2020), for graph classification and graph regression respectively. More omitted experimental settings in Section 6 will be introduced in the following subsections.

# C.1.
Details of GW distance approximation in Section 6.1

We compute the pair-wise $\mathrm{GW}_2$ distance matrix $\boldsymbol{G}$ for graphs in PTC and IMDB, with the normalized signless Laplacians set as the similarity matrices. For the computation of the $\mathrm{GW}_2$ distance, we mainly turn to the popular OT package POT (Flamary et al., 2021, Python Optimal Transport).

The omitted tables of the average squared GW distance $\mathrm{GW}_2^2(G^{(c)}, G)$ and the average empirical bound gaps are presented in Tables 5 and 6.

Regarding time efficiency, we additionally remark that our method KGC works on the dense graph matrix, even though it has a better theoretical complexity using a sparse matrix; for small graphs in MUTAG, directly representing them by dense

Table 5: Average squared GW distance $\mathrm{GW}_2^2(G^{(c)}, G)$ on the PTC and IMDB datasets.
| Dataset | Methods | c=0.3 ↓ | c=0.5 ↓ | c=0.7 ↓ | c=0.9 ↓ |
| --- | --- | --- | --- | --- | --- |
| PTC | VNGC | 0.05558 | 0.04880 | 0.03781 | 0.03326 |
| | VEGC | 0.03064 | 0.02352 | 0.01614 | 0.00927 |
| | MGC | 0.05290 | 0.04360 | 0.02635 | 0.00598 |
| | SGC | 0.03886 | 0.03396 | 0.02309 | 0.00584 |
| | KGC | 0.03332 | 0.02369 | 0.01255 | 0.00282 |
| | KGC(A) | 0.03055 | 0.02346 | 0.01609 | 0.00392 |
| IMDB | VNGC | 0.05139 | 0.05059 | 0.05043 | 0.05042 |
| | VEGC | 0.02791 | 0.02106 | 0.01170 | 0.00339 |
| | MGC | 0.02748 | 0.02116 | 0.01175 | 0.00339 |
| | SGC | 0.02907 | 0.02200 | 0.01212 | 0.00352 |
| | KGC | 0.02873 | 0.02111 | 0.01137 | 0.00320 |
| | KGC(A) | 0.02748 | 0.02106 | 0.01170 | 0.00337 |
Table 6: Average empirical bound gaps (in Theorem 4.2) on the PTC and IMDB datasets.
| Dataset | Methods | c=0.3 | c=0.5 | c=0.7 | c=0.9 |
| --- | --- | --- | --- | --- | --- |
| PTC | VNGC | 0.06701 | 0.06671 | 0.05393 | 0.04669 |
| | VEGC | 0.06246 | 0.06129 | 0.04424 | 0.02577 |
| | MGC | 0.03203 | 0.03200 | 0.02167 | 0.00540 |
| | SGC | 0.04599 | 0.04156 | 0.02488 | 0.00554 |
| | KGC | 0.05145 | 0.05173 | 0.03530 | 0.00852 |
| | KGC(A) | 0.06519 | 0.06402 | 0.04702 | 0.00372 |
| IMDB | VNGC | 0.009281 | 0.00927 | 0.009278 | 0.009268 |
| | VEGC | 0.016879 | 0.016735 | 0.01636 | 0.008221 |
| | MGC | 0.017307 | 0.01663 | 0.016309 | 0.008179 |
| | SGC | 0.015719 | 0.015793 | 0.015934 | 0.008049 |
| | KGC | 0.016054 | 0.016679 | 0.016687 | 0.008347 |
| | KGC(A) | 0.017307 | 0.016735 | 0.01636 | 0.008177 |
matrices would even be faster, when a modern CPU is used.

# C.2. Details of Laplacian spectrum preservation in Section 6.2

We mainly specify the evaluation metric for spectrum preservation in this subsection. Following the eigenvalue notation in Theorem 4.2, we define the top-5 eigenvalue relative error as $\frac{1}{5}\sum_{i=1}^{5}\frac{\lambda_i - \lambda_i^{(c)}}{\lambda_i}$. Here, for all coarsening methods, $\lambda_i - \lambda_i^{(c)}$ is always non-negative thanks to the Poincaré separation theorem (Theorem D.1).

# C.3. Details of Graph classification with Laplacian spectrum in Section 6.3

We remark that the graph classification experiments are mainly adapted from the similar experiments in Jin et al. (2020, Section 6.1). For the MUTAG and PTC datasets, we apply Algorithm 1 to $D + A$; for the other four datasets, we utilize the normalized signless Laplacian $I_N + D^{-\frac{1}{2}}AD^{-\frac{1}{2}}$.

# C.4. Details of Graph regression with GCNs in Section 6.4

We mainly follow the GCN settings in Dwivedi et al. (2020). More detailed settings are stated as follows.

Graph regression. The procedure of graph regression is as follows: taking a graph $\mathcal{G}$ as input, a GCN returns a graph embedding $\boldsymbol{y}_{\mathcal{G}} \in \mathbb{R}^d$; Dwivedi et al. (2020) then pass $\boldsymbol{y}_{\mathcal{G}}$ to an MLP and compute the prediction score $y_{\mathrm{pred}} \in \mathbb{R}$ as

$$
y_{\text{pred}} = \boldsymbol{P} \cdot \operatorname{ReLU}(\boldsymbol{Q} \boldsymbol{y}_{\mathcal{G}}),
$$

where $\boldsymbol{P} \in \mathbb{R}^{1 \times d}$, $\boldsymbol{Q} \in \mathbb{R}^{d \times d}$. They then use $|y_{\mathrm{pred}} - y|$, the $L_1$ loss (the MAE metric in Table 4) between the predicted score $y_{\mathrm{pred}}$ and the ground-truth score $y$, both in training and in performance evaluation.

Data splitting. They apply a scaffold splitting (Hu et al., 2020) to the AQSOL dataset in the ratio $8:1:1$ to obtain 7831, 996, and 996 samples for the train, validation, and test sets.
Training hyperparameters. For the learning rate strategy across all GCN models, we follow the existing setting and choose the initial learning rate as $1 \times 10^{-3}$, the reduction factor as 0.5, and the stopping learning rate as $1 \times 10^{-5}$. Also, all the GCN models tested in our experiments share the same architecture: the network has 4 layers and 108442 tunable parameters.

As for the concrete application of graph coarsening to GCNs, we discuss the main idea in Appendix C.4.1 and leave the technical details to Appendix C.4.2.

# C.4.1. APPLICATION OF GRAPH COARSENING TO GCNS

Motivated by the GW setting, we discuss the application of graph coarsening to a prevailing and fundamental graph model, the GCN. After removing the bias term for simplicity and adapting the notations in this paper, we take a vanilla 1-layer GCN as an example and formulate it as

$$
\boldsymbol{y}^\intercal = \frac{\mathbf{1}_N^\intercal}{N} \sigma\left(\boldsymbol{D}^{-\frac{1}{2}} \boldsymbol{A} \boldsymbol{D}^{-\frac{1}{2}} \boldsymbol{H} \boldsymbol{W}_{\mathrm{GCN}}\right),
$$

where $\boldsymbol{y}$ is the graph representation of a size-$N$ graph associated with the adjacency matrix $\boldsymbol{A}$, $\sigma$ is an arbitrary activation function, $\boldsymbol{H}$ is the embedding matrix for the nodes in the graph, and $\boldsymbol{W}_{\mathrm{GCN}}$ is the weight matrix for the linear transform in the layer. We take the mean operation $\frac{\mathbf{1}_N^\intercal}{N}$ in the GCN as an implication of even node weights (therefore no $w$ subscript for the coarsening matrices), and intuitively incorporate graph coarsening into the computation by solely replacing $\boldsymbol{D}^{-\frac{1}{2}}\boldsymbol{A}\boldsymbol{D}^{-\frac{1}{2}}$ with $\boldsymbol{\Pi} \boldsymbol{D}^{-\frac{1}{2}}\boldsymbol{A}\boldsymbol{D}^{-\frac{1}{2}}\boldsymbol{\Pi}$, denoting $\boldsymbol{y}^{(c)}$ as the corresponding representation.
We have

$$
\left(\boldsymbol{y}^{(c)}\right)^\intercal = \boldsymbol{c}^\intercal \sigma\left(\bar{\boldsymbol{C}} \boldsymbol{D}^{-\frac{1}{2}} \boldsymbol{A} \boldsymbol{D}^{-\frac{1}{2}} \boldsymbol{C}_p^\intercal \bar{\boldsymbol{C}} \boldsymbol{H} \boldsymbol{W}_{\mathrm{GCN}}\right), \tag{9}
$$

and the graph matrix $\boldsymbol{D}^{-\frac{1}{2}}\boldsymbol{A}\boldsymbol{D}^{-\frac{1}{2}}$ is supposed to be coarsened as $\bar{\boldsymbol{C}}\boldsymbol{D}^{-\frac{1}{2}}\boldsymbol{A}\boldsymbol{D}^{-\frac{1}{2}}\boldsymbol{C}_p^\intercal$, which can be extended to multiple layers. Due to the space limit, we defer the derivation to Appendix C.4.2 and only leave some remarks here.

The propagation above is guided by the GW setting throughout this paper: even if the nodes in the original graph are equally weighted, the supernodes after coarsening can have different weights, which induces the new readout operation $\boldsymbol{c}^\intercal$ ($\boldsymbol{c}$ is the mass vector of the supernodes). Furthermore, an obvious difference between Equation (9) and previous designs (Huang et al., 2021; Cai et al., 2021) of applying graph coarsening to GNNs is the asymmetry of the coarsened graph matrix $\bar{\boldsymbol{C}}\boldsymbol{D}^{-\frac{1}{2}}\boldsymbol{A}\boldsymbol{D}^{-\frac{1}{2}}\boldsymbol{C}_p^\intercal$. The design (9) is tested in the graph regression experiment in Section 6.4. More technical details are introduced in the subsequent subsection.

# C.4.2. DERIVATION OF THE GCN PROPAGATION IN THE GW SETTING

We consider a general $L$-layer GCN used by Dwivedi et al. (2020).
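Before the formal derivation, the coarsened propagation of Equation (9) can be sanity-checked numerically against the full-size propagation $\frac{\mathbf{1}_N^\intercal}{N}\sigma(\boldsymbol{\Pi}\boldsymbol{P}\boldsymbol{\Pi}\boldsymbol{H}\boldsymbol{W}_{\mathrm{GCN}})$. Below is a minimal numpy sketch for one layer, assuming even node weights, a 6-node cycle graph, and a ReLU activation; all variable names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
N, n, d = 6, 2, 4

# A 6-node cycle graph and an even 2-partition of its nodes.
A = np.zeros((N, N))
for i in range(N):
    A[i, (i + 1) % N] = A[(i + 1) % N, i] = 1.0
d_is = np.diag(1 / np.sqrt(A.sum(axis=1)))
P = d_is @ A @ d_is                       # normalized graph matrix D^{-1/2} A D^{-1/2}

C_p = np.zeros((n, N))
C_p[0, :3] = C_p[1, 3:] = 1.0             # membership matrix
C_bar = np.diag(1 / C_p.sum(axis=1)) @ C_p  # averaging matrix (even node weights)
Pi = C_p.T @ C_bar                        # projection matrix, Pi @ Pi == Pi

H = rng.normal(size=(N, d))               # node embeddings
W_gcn = rng.normal(size=(d, d))           # layer weights
relu = lambda x: np.maximum(x, 0.0)

# Full-size propagation with P replaced by Pi P Pi ...
y_full = (np.ones(N) / N) @ relu(Pi @ P @ Pi @ H @ W_gcn)
# ... matches the coarsened readout of Equation (9).
c = C_p @ np.ones(N) / N                  # supernode mass vector
y_coarse = c @ relu(C_bar @ P @ C_p.T @ C_bar @ H @ W_gcn)
assert np.allclose(y_full, y_coarse)
```

The equality holds because $\sigma(C_p^\intercal B) = C_p^\intercal \sigma(B)$ and $\frac{\mathbf{1}_N^\intercal}{N} C_p^\intercal = \boldsymbol{c}^\intercal$, which is exactly the argument formalized in the derivation that follows.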
We first recall the definition (normalization modules are omitted for simplicity):

$$
\boldsymbol{y}^\intercal = \frac{\mathbf{1}_N^\intercal}{N} \boldsymbol{H}^{(L)},
$$

$$
\boldsymbol{H}^{(l)} = \sigma(\boldsymbol{Z}^{(l)}), \quad \boldsymbol{Z}^{(l)} = \left(\boldsymbol{D}^{-\frac{1}{2}} \boldsymbol{A} \boldsymbol{D}^{-\frac{1}{2}}\right) \boldsymbol{H}^{(l-1)} \boldsymbol{W}^{(l)}, \quad \forall l \in [L],
$$

$$
\boldsymbol{H}^{(0)} \coloneqq \boldsymbol{X}, \text{ the embedding matrix of the nodes},
$$

where $\sigma$ is an activation function; from now on we abuse $\boldsymbol{P}$ to denote $\boldsymbol{D}^{-\frac{1}{2}}\boldsymbol{A}\boldsymbol{D}^{-\frac{1}{2}}$; $\boldsymbol{H}^{(l)}$ is the embedding matrix of the graph nodes in the $l$-th layer, and $\boldsymbol{W}^{(l)}$ is the weight matrix of the same layer.

To apply the coarsened graph, we enforce the following rules:

$$
\boldsymbol{H}^{(c,l)} = \sigma(\boldsymbol{Z}^{(c,l)}), \quad \boldsymbol{Z}^{(c,l)} = (\boldsymbol{\Pi} \boldsymbol{P} \boldsymbol{\Pi}) \boldsymbol{H}^{(c,l-1)} \boldsymbol{W}^{(l)}, \quad \forall l \in [L].
$$

To reduce the computation in node aggregation, we utilize the two properties that $\boldsymbol{\Pi} = \boldsymbol{C}_p^\intercal \bar{\boldsymbol{C}}$ and $\sigma(\boldsymbol{C}_p^\intercal \boldsymbol{B}) = \boldsymbol{C}_p^\intercal \sigma(\boldsymbol{B})$ for any element-wise activation function and any matrix $\boldsymbol{B}$ of a proper shape (considering that $\boldsymbol{C}_p^\intercal$ simply "copies" the rows of $\boldsymbol{B}$); for the top two layers, we then have

$$
\boldsymbol{y}^\intercal = \frac{\mathbf{1}_N^\intercal}{N} \sigma\left(\boldsymbol{C}_p^\intercal \bar{\boldsymbol{C}} \boldsymbol{P} \boldsymbol{\Pi} \boldsymbol{H}^{(c,L-1)} \boldsymbol{W}^{(L)}\right) = \frac{\mathbf{1}_N^\intercal}{N} \boldsymbol{C}_p^\intercal \sigma\left(\bar{\boldsymbol{C}} \boldsymbol{P} \boldsymbol{C}_p^\intercal \bar{\boldsymbol{C}} \boldsymbol{H}^{(c,L-1)} \boldsymbol{W}^{(L)}\right),
$$

$$
\boldsymbol{H}^{(c,L-1)} = \sigma\left(\boldsymbol{C}_p^\intercal \bar{\boldsymbol{C}} \boldsymbol{P} \boldsymbol{C}_p^\intercal \bar{\boldsymbol{C}} \boldsymbol{H}^{(c,L-2)} \boldsymbol{W}^{(L-1)}\right) = \boldsymbol{C}_p^\intercal \sigma\left(\bar{\boldsymbol{C}} \boldsymbol{P} \boldsymbol{C}_p^\intercal \bar{\boldsymbol{C}} \boldsymbol{H}^{(c,L-2)} \boldsymbol{W}^{(L-1)}\right).
$$

Note that $\boldsymbol{C}_p^\intercal \bar{\boldsymbol{C}} \boldsymbol{C}_p^\intercal = \boldsymbol{C}_p^\intercal$; for the top two layers we finally obtain

$$
\boldsymbol{y}^\intercal = \frac{\mathbf{1}_N^\intercal}{N} \boldsymbol{C}_p^\intercal \sigma\left(\bar{\boldsymbol{C}} \boldsymbol{P} \boldsymbol{C}_p^\intercal \bar{\boldsymbol{C}} \boldsymbol{H}^{(c,L-1)} \boldsymbol{W}^{(L)}\right) = \boldsymbol{c}^\intercal \sigma\left(\bar{\boldsymbol{C}} \boldsymbol{P} \boldsymbol{C}_p^\intercal \sigma\left(\bar{\boldsymbol{C}} \boldsymbol{P} \boldsymbol{C}_p^\intercal \bar{\boldsymbol{C}} \boldsymbol{H}^{(c,L-2)} \boldsymbol{W}^{(L-1)}\right) \boldsymbol{W}^{(L)}\right),
$$

implying that we can replace $\boldsymbol{P}$ with $\bar{\boldsymbol{C}}\boldsymbol{P}\boldsymbol{C}_p^\intercal$ in the propagation of the top two layers. The trick can indeed be repeated for each layer, and specifically, in the bottom layer we have

$$
\boldsymbol{H}^{(c,1)} = \boldsymbol{C}_p^\intercal \sigma\left[\left(\bar{\boldsymbol{C}} \boldsymbol{P} \boldsymbol{C}_p^\intercal\right)\left(\bar{\boldsymbol{C}} \boldsymbol{H}^{(0)}\right) \boldsymbol{W}^{(1)}\right],
$$

which corresponds well to the node weight concept in the GW setting: $\bar{\boldsymbol{C}}\boldsymbol{H}^{(0)}$ uses the average embedding of the nodes within each cluster to represent the coarsened centroid. This finishes the justification of the new GCN propagation in Equation (9).

# D. Useful Facts

# D.1. Poincaré separation theorem

For the convenience of the reader, we repeat the Poincaré separation theorem (Bellman, 1997) in this subsection.

Theorem D.1 (Poincaré separation theorem). Let $\boldsymbol{A}$ be an $N \times N$ real symmetric matrix and $\boldsymbol{C}$ an $n \times N$ semi-orthogonal matrix such that $\boldsymbol{C}\boldsymbol{C}^\intercal = \boldsymbol{I}_n$.
Denote by $\lambda_i, i = 1, 2, \dots, N$ and $\lambda_i^{(c)}, i = 1, 2, \dots, n$ the eigenvalues of $\boldsymbol{A}$ and $\boldsymbol{C}\boldsymbol{A}\boldsymbol{C}^\intercal$, respectively (in descending order). We have

$$
\lambda_i \geq \lambda_i^{(c)} \geq \lambda_{N-n+i}, \quad i = 1, 2, \dots, n. \tag{10}
$$

# D.2. Ruhe's trace inequality

For the convenience of the reader, Ruhe's trace inequality (Ruhe, 1970) is stated as follows.

Lemma D.2 (Ruhe's trace inequality). If $\boldsymbol{A}, \boldsymbol{B}$ are both $N \times N$ PSD matrices with eigenvalues

$$
\lambda_1 \geq \dots \geq \lambda_N \geq 0, \quad \nu_1 \geq \dots \geq \nu_N \geq 0,
$$

then

$$
\sum_{i=1}^{N} \lambda_i \nu_{N-i+1} \leq \operatorname{tr}(\boldsymbol{A}\boldsymbol{B}) \leq \sum_{i=1}^{N} \lambda_i \nu_i.
$$

# E. Proof of Theorem 4.2 and Theorem 4.4

We will first prove an intermediate result, Lemma E.1, for a coarsened graph $G_1^{(c)}$ and an un-coarsened graph $G_2$, to introduce the necessary technical tools for Theorem 4.2 and Theorem 4.4. The proofs of Theorem 4.2 and Theorem 4.4 will be stated in Appendix E.2 and Appendix E.3, respectively.

# E.1. Lemma E.1 and Its Proof

We first give the complete statement of Lemma E.1. In Lemma E.1, we consider the case in which only graph $G_1$ is coarsened, and the notations are slightly simplified: when the context is clear, the coarsening-related terms without graph-specific subscripts, e.g. $C_p, \bar{C}_w$, are by default associated with $G_1$ unless otherwise stated. We follow this simplified notation in the statement and proof of Theorem 4.2, which focuses solely on $\mathrm{GW}_2(G^{(c)}, G)$ so that the indices 1, 2 are not involved. For Theorem 4.4, we will explicitly use specific subscripts, such as $C_{p,1}, \bar{C}_{w,2}$, for disambiguation.

Lemma E.1. Let $\boldsymbol{P} = \boldsymbol{W}_1^{-\frac{1}{2}} \boldsymbol{T}^* \boldsymbol{W}_2^{-\frac{1}{2}}$.
If both $\boldsymbol{S}_1$ and $\boldsymbol{S}_2$ are PSD, we have

$$
\begin{aligned}
\left|\mathrm{GW}_2^2\left(G_1, G_2\right) - \mathrm{GW}_2^2\left(G_1^{(c)}, G_2\right)\right| \leq \max \Bigg\{ & \lambda_{N_1-n_1+1} \sum_{i=1}^{n_1}\left(\lambda_i - \lambda_i^{(c)}\right) + C_{\boldsymbol{U}, n_1}, \\
& 2\left(\nu_{N_1-n_1+1} \sum_{i=1}^{n_1}\left(\lambda_i - \lambda_i^{(c)}\right) + C_{\boldsymbol{U}, \boldsymbol{V}, n_1}\right) \Bigg\}.
\end{aligned} \tag{11}
$$

Here, $\lambda_1^{(c)} \geq \dots \geq \lambda_{n_1}^{(c)}$ are the eigenvalues of $\Pi_w \boldsymbol{U} \Pi_w$, $\boldsymbol{U} = \boldsymbol{W}_1^{\frac{1}{2}} \boldsymbol{S}_1 \boldsymbol{W}_1^{\frac{1}{2}}$ with eigenvalues $\lambda_1 \geq \lambda_2 \geq \dots \geq \lambda_{N_1}$, and $\boldsymbol{V} = \boldsymbol{P} \boldsymbol{W}_2^{\frac{1}{2}} \boldsymbol{S}_2 \boldsymbol{W}_2^{\frac{1}{2}} \boldsymbol{P}^\intercal$ with eigenvalues $\nu_1 \geq \nu_2 \geq \dots \geq \nu_{N_1}$; we let $C_{\boldsymbol{U}, n_1} \coloneqq \sum_{i=1}^{n_1} \lambda_i(\lambda_i - \lambda_{N_1-n_1+i}) + \sum_{i=n_1+1}^{N_1} \lambda_i^2$ and $C_{\boldsymbol{U}, \boldsymbol{V}, n_1} \coloneqq \sum_{i=1}^{n_1} \lambda_i(\nu_i - \nu_{N_1-i+1}) + \sum_{i=n_1+1}^{N_1} \lambda_i \nu_i$ be two non-negative constants.

Remark E.2. Take $G_2 = G_1$, and we have $\boldsymbol{T}^* = \boldsymbol{W}_1$, which implies $\boldsymbol{P} = \boldsymbol{I}_N$ and $\boldsymbol{V} = \boldsymbol{W}_1^{\frac{1}{2}} \boldsymbol{S}_1 \boldsymbol{W}_1^{\frac{1}{2}} = \boldsymbol{U}$. This directly leads to the bound in Theorem 4.2, though with an additional factor of 2. In Appendix E.2 we will show how to obtain a slightly tighter bound that removes this unnecessary factor of 2.

To illustrate our idea more clearly, we will start from the following decomposition of the $\mathrm{GW}_2$ distance. The detailed proofs of all the lemmas in this section are provided in Appendix E.4 for interested readers.

Lemma E.3.
For any two graphs $G_1$ and $G_2$, we have

$$
\mathrm{GW}_2^2\left(G_1, G_2\right) = I_1 + I_2 - 2I_3,
$$

where

$$
I_1 = \operatorname{Tr}\left(\left(\boldsymbol{W}_1^{\frac{1}{2}} \boldsymbol{S}_1 \boldsymbol{W}_1^{\frac{1}{2}}\right)\left(\boldsymbol{W}_1^{\frac{1}{2}} \boldsymbol{S}_1 \boldsymbol{W}_1^{\frac{1}{2}}\right)\right), \qquad I_2 = \operatorname{Tr}\left(\left(\boldsymbol{W}_2^{\frac{1}{2}} \boldsymbol{S}_2 \boldsymbol{W}_2^{\frac{1}{2}}\right)\left(\boldsymbol{W}_2^{\frac{1}{2}} \boldsymbol{S}_2 \boldsymbol{W}_2^{\frac{1}{2}}\right)\right)
$$

do not depend on the choice of the transport map, while

$$
I_3 = \operatorname{Tr}\left(\boldsymbol{S}_1 \boldsymbol{T}^* \boldsymbol{S}_2 \left(\boldsymbol{T}^*\right)^\intercal\right)
$$

requires the optimal transport map $\boldsymbol{T}^*$ from the graph $G_1$ to the graph $G_2$.

Replacing the $G_1$-related terms in Equation (15) with their $G_1^{(c)}$ counterparts, we know that the distance between $G_1^{(c)}$ and $G_2$ is $\mathrm{GW}_2^2(G_1^{(c)}, G_2) = I_1' + I_2 - 2I_3'$, where

$$
I_1' = \operatorname{Tr}\left(\left[\left(\boldsymbol{W}_1^{(c)}\right)^{\frac{1}{2}} \boldsymbol{S}_1^{(c)} \left(\boldsymbol{W}_1^{(c)}\right)^{\frac{1}{2}}\right]^2\right), \quad I_3' = \operatorname{Tr}\left(\boldsymbol{S}_1^{(c)} \boldsymbol{T}_{co}^* \boldsymbol{S}_2 \left(\boldsymbol{T}_{co}^*\right)^\intercal\right).
\tag{12}
$$

$\boldsymbol{T}_{co}^* \in \Pi(\mu_1^{(c)}, \mu_2)$ (the subscript $co$ represents the transport from the "c"oarsened graph to the "o"riginal graph) here is the optimal transport matrix induced by the GW distance, and $\boldsymbol{S}_1^{(c)} \coloneqq \bar{\boldsymbol{C}}_w \boldsymbol{S}_1 \bar{\boldsymbol{C}}_w^\intercal$.

The key step to preserve the GW distance is to control the differences $|I_1 - I_1'|$ and $|I_3 - I_3'|$, since the terms $I_2 = I_2'$ cancel each other. We will start from the bound on $|I_1 - I_1'|$.

Lemma E.4. Let $\boldsymbol{U} \coloneqq \boldsymbol{W}_1^{\frac{1}{2}} \boldsymbol{S}_1 \boldsymbol{W}_1^{\frac{1}{2}}$. If the similarity matrix $\boldsymbol{S}_1 \in \mathbb{R}^{N \times N}$ is PSD, we have

$$
0 \leq I_1 - I_1' \leq \lambda_{N-n+1} \sum_{i=1}^{n}\left(\lambda_i - \lambda_i^{(c)}\right) + C_{\boldsymbol{U}, n}.
$$

Here, $\lambda_1 \geq \lambda_2 \geq \dots \geq \lambda_N$ are the eigenvalues of $\boldsymbol{U}$, and $C_{\boldsymbol{U}, n} := \sum_{i=1}^{n} \lambda_i(\lambda_i - \lambda_{N-n+i}) + \sum_{i=n+1}^{N} \lambda_i^2$ is non-negative.

We can similarly bound $I_3 - I_3'$ with an additional tool, Ruhe's trace inequality (a variant of Von Neumann's trace inequality specific to PSD matrices, c.f. Appendix D.2).

Lemma E.5. Let $\boldsymbol{P} = \boldsymbol{W}_1^{-\frac{1}{2}} \boldsymbol{T}^* \boldsymbol{W}_2^{-\frac{1}{2}}$ be the normalized optimal transport matrix, and $\boldsymbol{V} := \boldsymbol{P} \boldsymbol{W}_2^{\frac{1}{2}} \boldsymbol{S}_2 \boldsymbol{W}_2^{\frac{1}{2}} \boldsymbol{P}^\intercal$ with eigenvalues $\nu_1 \geq \nu_2 \geq \dots \geq \nu_N$. If both $\boldsymbol{S}_1$ and $\boldsymbol{S}_2$ are PSD, we have

$$
0 \leq I_3 - I_3' \leq \nu_{N-n+1} \sum_{i=1}^{n}\left(\lambda_i - \lambda_i^{(c)}\right) + C_{\boldsymbol{U}, \boldsymbol{V}, n}.
$$

Here, $C_{\boldsymbol{U}, \boldsymbol{V}, n} \coloneqq \sum_{i=1}^{n} \lambda_i(\nu_i - \nu_{N-i+1}) + \sum_{i=n+1}^{N} \lambda_i \nu_i$ is non-negative.
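The quantities appearing in these lemmas are easy to instantiate numerically. A small numpy sanity check (with a random PSD $S$, and the $G_2 = G_1$ setting of Remark E.2 where the optimal transport is $T^* = W_1$; all variable names are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
N, n = 6, 2
B = rng.normal(size=(N, N))
S = B @ B.T                                # a random PSD similarity matrix
m = rng.random(N); m /= m.sum()            # node mass distribution
W = np.diag(m)
U = np.sqrt(W) @ S @ np.sqrt(W)

# Lemma E.3 with G2 = G1: the optimal transport is T* = W (Remark E.2),
# so I1 = I2 = I3 and GW_2^2(G1, G1) = I1 + I2 - 2 * I3 = 0.
I1 = np.trace(U @ U)
I3 = np.trace(S @ W @ S @ W)               # Tr(S1 T* S2 (T*)^T) with T* = W, S2 = S1
assert np.isclose(2 * I1 - 2 * I3, 0.0)

# Lemma E.4: 0 <= I1 - I1', with I1' = Tr((Pi_w U Pi_w)^2).
C_p = np.zeros((n, N))
C_p[0, :3] = C_p[1, 3:] = 1.0              # membership matrix
c = C_p @ m                                # supernode masses
C_w = np.diag(1 / np.sqrt(c)) @ C_p @ np.sqrt(W)
Pi_w = C_w.T @ C_w                         # orthogonal projection matrix
I1_prime = np.trace((Pi_w @ U @ Pi_w) @ (Pi_w @ U @ Pi_w))
assert I1 - I1_prime >= -1e-10             # lower bound of Lemma E.4
```

The second check only exercises the non-negativity part of Lemma E.4 (a projection cannot increase the Frobenius norm); the full upper bound additionally needs the eigenvalues of $U$ and $\Pi_w U \Pi_w$.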
Now we have

$$
\left|\mathrm{GW}_2^2(G_1, G_2) - \mathrm{GW}_2^2(G_1^{(c)}, G_2)\right| = \left|I_1 - I_1' + 2(I_3' - I_3)\right| \leq \max\left\{I_1 - I_1', 2(I_3 - I_3')\right\},
$$

considering that $I_1 - I_1' \geq 0$ and $I_3' - I_3 \leq 0$. Then, combining all the pieces above yields the bound in Lemma E.1.

# E.2. Proof of Theorem 4.2

With the above dissection of the terms $I_1', I_3'$, we can now give a finer control of the distance $\mathrm{GW}_2^2(G_1^{(c)}, G_1)$. We first expand $\mathrm{GW}_2^2(G_1^{(c)}, G_1)$ as

$$
\mathrm{GW}_2^2(G_1^{(c)}, G_1) = I_1 + I_1' - 2\operatorname{Tr}\left(\boldsymbol{S}_1^{(c)} \boldsymbol{T}_{co}^* \boldsymbol{S}_1 (\boldsymbol{T}_{co}^*)^\intercal\right),
$$

where the notation $\boldsymbol{T}_{co}^*$ is now abused as the optimal transport matrix induced by $\mathrm{GW}_2(G_1^{(c)}, G_1)$. Applying the optimality inequality in Lemma E.7, we have

$$
\mathrm{GW}_2^2(G_1^{(c)}, G_1) \leq I_1' + I_1 - 2\operatorname{Tr}\left(\boldsymbol{S}_1^{(c)} \boldsymbol{C}_p \boldsymbol{W}_1 \boldsymbol{S}_1 \boldsymbol{W}_1 \boldsymbol{C}_p^\intercal\right),
$$

where we remark that $\boldsymbol{C}_p \boldsymbol{W}_1$ is a qualified transport matrix for $G_1^{(c)}$ and $G_1$.
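The remark that $C_p W_1$ is a qualified transport matrix can be verified directly from the marginal constraints of $\Pi(\mu_1^{(c)}, \mu_1)$; a short numpy check (illustrative names):

```python
import numpy as np

rng = np.random.default_rng(2)
N, n = 6, 2
m = rng.random(N); m /= m.sum()       # node masses of G1
W1 = np.diag(m)
C_p = np.zeros((n, N))
C_p[0, :3] = C_p[1, 3:] = 1.0         # membership matrix

T = C_p @ W1                          # candidate transport plan from G1^(c) to G1
m_c = C_p @ m                         # supernode masses of G1^(c)

# Marginal constraints: rows sum to m^(c), columns sum to m, entries non-negative.
assert np.allclose(T @ np.ones(N), m_c)
assert np.allclose(T.T @ np.ones(n), m)
assert (T >= 0).all()
```

In words, $C_p W_1$ transports each node's full mass to its own supernode, which is exactly the clustering-style plan used in Appendix B.3.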
To further simplify the upper bound above, we show the equivalence between $I_1'$ and $\operatorname{Tr}\left(\boldsymbol{S}_1^{(c)} \boldsymbol{C}_p \boldsymbol{W}_1 \boldsymbol{S}_1 \boldsymbol{W}_1 \boldsymbol{C}_p^\intercal\right)$ as follows:

$$
\begin{aligned}
\operatorname{Tr}\left(\boldsymbol{S}_1^{(c)} \boldsymbol{C}_p \boldsymbol{W}_1 \boldsymbol{S}_1 \boldsymbol{W}_1 \boldsymbol{C}_p^\intercal\right) &= \operatorname{Tr}\left(\bar{\boldsymbol{C}}_w \boldsymbol{S}_1 \bar{\boldsymbol{C}}_w^\intercal \boldsymbol{C}_p \boldsymbol{W}_1 \boldsymbol{S}_1 \boldsymbol{W}_1 \boldsymbol{C}_p^\intercal\right) = \operatorname{Tr}\left(\boldsymbol{\Pi}_w \boldsymbol{U} \boldsymbol{\Pi}_w \boldsymbol{U}\right) = \operatorname{Tr}\left(\boldsymbol{\Pi}_w \boldsymbol{\Pi}_w \boldsymbol{U} \boldsymbol{\Pi}_w \boldsymbol{\Pi}_w \boldsymbol{U}\right) \\
&= \operatorname{Tr}\left(\boldsymbol{\Pi}_w \boldsymbol{U} \boldsymbol{\Pi}_w \boldsymbol{\Pi}_w \boldsymbol{U} \boldsymbol{\Pi}_w\right) = I_1'.
\end{aligned}
$$

Combining the pieces above, we obtain

$$
\mathrm{GW}_2^2\left(G_1^{(c)}, G_1\right) \leq I_1 + I_1' - 2I_1' \leq \lambda_{N-n+1} \sum_{i=1}^{n}\left(\lambda_i - \lambda_i^{(c)}\right) + C_{\boldsymbol{U}, n},
$$

which uses Lemma E.4 for the last inequality. The proof of Theorem 4.2 is now complete.

Remark E.6. Theorem 4.2 can be leveraged to directly control $\left|\mathrm{GW}_2(G_1^{(c)}, G_2^{(c)}) - \mathrm{GW}_2(G_1, G_2)\right|$.
Note that $\mathrm{GW}_2$ is a pseudo-metric and satisfies triangular inequality (Chowdhury & Mémoli, 2019), which implies + +$$ +\begin{array}{l} \mathrm {G W} _ {2} (G _ {1} ^ {(c)}, G _ {2} ^ {(c)}) - \mathrm {G W} _ {2} (G _ {1}, G _ {2}) \leq \mathrm {G W} _ {2} (G _ {1} ^ {(c)}, G _ {1}) + \mathrm {G W} _ {2} (G _ {1}, G _ {2}) + \mathrm {G W} _ {2} (G _ {2}, G _ {2} ^ {(c)}) - \mathrm {G W} _ {2} (G _ {1}, G _ {2}) \\ = \mathrm {G W} _ {2} (G _ {1} ^ {(c)}, G _ {1}) + \mathrm {G W} _ {2} (G _ {2}, G _ {2} ^ {(c)}) \\ \leq \sqrt {2} \left(\mathbf {G W} _ {2} ^ {2} (G _ {1} ^ {(c)}, G _ {1}) + \mathbf {G W} _ {2} ^ {2} (G _ {2}, G _ {2} ^ {(c)})\right) ^ {\frac {1}{2}}, \\ \end{array} +$$ + +and similarly we have + +$$ +\mathbf {G W} _ {2} (G _ {1}, G _ {2}) - \mathbf {G W} _ {2} (G _ {1} ^ {(c)}, G _ {2} ^ {(c)}) \leq \mathbf {G W} _ {2} (G _ {1} ^ {(c)}, G _ {1}) + \mathbf {G W} _ {2} (G _ {2}, G _ {2} ^ {(c)}) \leq \sqrt {2} \left(\mathbf {G W} _ {2} ^ {2} (G _ {1} ^ {(c)}, G _ {1}) + \mathbf {G W} _ {2} ^ {2} (G _ {2}, G _ {2} ^ {(c)})\right) ^ {\frac {1}{2}}. +$$ + +This implies + +$$ +\begin{array}{l} \left| \mathbf {G W} _ {2} (G _ {1} ^ {(c)}, G _ {2} ^ {(c)}) - \mathbf {G W} _ {2} (G _ {1}, G _ {2}) \right| \leq \sqrt {2} \left(\mathbf {G W} _ {2} ^ {2} (G _ {1} ^ {(c)}, G _ {1}) + \mathbf {G W} _ {2} ^ {2} (G _ {2}, G _ {2} ^ {(c)})\right) ^ {\frac {1}{2}} \\ \leq \sqrt {2} \left(\lambda_ {1, N _ {1} - n _ {1} + 1} \sum_ {i = 1} ^ {n _ {1}} \left(\lambda_ {1, i} - \lambda_ {1, i} ^ {(c)}\right) + C _ {\boldsymbol {U} _ {1}, n _ {1}} + \lambda_ {2, N _ {2} - n _ {2} + 1} \sum_ {i = 1} ^ {n _ {2}} \left(\lambda_ {2, i} - \lambda_ {2, i} ^ {(c)}\right) + C _ {\boldsymbol {U} _ {2}, n _ {2}}\right) ^ {\frac {1}{2}}. \\ \end{array} +$$ + +Here, the last inequality is due to Theorem 4.2. 
We comment that the above result, obtained from a direct application of the triangle inequality, is indeed weaker than the result stated in Theorem 4.4; we devote the remaining part of this section to the proof thereof.

# E.3. Proof of Theorem 4.4

For one side of the result, we can follow the derivation in Lemma E.3 and apply Theorem 4.2 to obtain

$$
\begin{array}{l} \mathrm{GW}_2^2(G_1, G_2) - \mathrm{GW}_2^2(G_1^{(c)}, G_2^{(c)}) = I_1 - I_1' + I_2 - I_2' + 2(I_3' - I_3) \leq I_1 - I_1' + I_2 - I_2' \\ \leq \lambda_{1, N_1 - n_1 + 1} \sum_{i=1}^{n_1}\left(\lambda_{1,i} - \lambda_{1,i}^{(c)}\right) + C_{\boldsymbol{U}_1, n_1} + \lambda_{2, N_2 - n_2 + 1} \sum_{i=1}^{n_2}\left(\lambda_{2,i} - \lambda_{2,i}^{(c)}\right) + C_{\boldsymbol{U}_2, n_2}, \\ \end{array}
$$

where we recall $I_3 \coloneqq \operatorname{Tr}(\boldsymbol{S}_1\boldsymbol{T}^*\boldsymbol{S}_2(\boldsymbol{T}^*)^\intercal)$ and $I_3'$ now denotes $\operatorname{Tr}\left(\boldsymbol{S}_1^{(c)}\boldsymbol{T}_{cc}^*\boldsymbol{S}_2^{(c)}(\boldsymbol{T}_{cc}^*)^\intercal\right)$, where $T_{cc}^*$ is the optimal transport matrix for $G_1^{(c)}$ and $G_2^{(c)}$.

For the other side, we still decompose the objective $\mathrm{GW}_2^2 (G_1^{(c)},G_2^{(c)}) - \mathrm{GW}_2^2 (G_1,G_2)$ as $(I_1^{\prime} - I_1) + (I_2^{\prime} - I_2) + 2(I_3 - I_3^{\prime})$, and we can similarly follow Lemma E.4 to bound the first two terms. The next task is to disassemble $I_{3} - I_{3}^{\prime}$.

We first prepare some notation for the analysis.
To clarify the affiliation, we redefine $U_{1} \coloneqq W_{1}^{\frac{1}{2}}S_{1}W_{1}^{\frac{1}{2}}$ with eigenvalues $\lambda_{1,1} \geq \lambda_{1,2} \geq \dots \geq \lambda_{1,N_{1}}$, $V_{1} = PW_{2}^{\frac{1}{2}}S_{2}W_{2}^{\frac{1}{2}}P^{\intercal}$ with eigenvalues $\nu_{1,1} \geq \nu_{1,2} \geq \dots \geq \nu_{1,N_{1}}$, $U_{2} = W_{2}^{\frac{1}{2}}S_{2}W_{2}^{\frac{1}{2}}$ with eigenvalues $\lambda_{2,1} \geq \lambda_{2,2} \geq \dots \geq \lambda_{2,N_{2}}$, $V_{2} = P^{\intercal}W_{1}^{\frac{1}{2}}S_{1}W_{1}^{\frac{1}{2}}P$ with eigenvalues $\nu_{2,1} \geq \nu_{2,2} \geq \dots \geq \nu_{2,N_{2}}$, and similarly re-introduce $C_{p,1},\bar{C}_{w,1},C_{w,1},C_{p,2},\bar{C}_{w,2},C_{w,2}$, and

$$
\boldsymbol{\Pi}_{w,1} := \boldsymbol{W}_1^{\frac{1}{2}} \boldsymbol{C}_{p,1}^{\intercal} \bar{\boldsymbol{C}}_{w,1} \boldsymbol{W}_1^{-\frac{1}{2}} = \boldsymbol{C}_{w,1}^{\intercal} \boldsymbol{C}_{w,1}, \qquad \boldsymbol{\Pi}_{w,2} := \boldsymbol{W}_2^{\frac{1}{2}} \boldsymbol{C}_{p,2}^{\intercal} \bar{\boldsymbol{C}}_{w,2} \boldsymbol{W}_2^{-\frac{1}{2}} = \boldsymbol{C}_{w,2}^{\intercal} \boldsymbol{C}_{w,2}.
+$$ + +Recalling Lemma E.7, we can analogously obtain $I_3' - I_3 \leq 0$ ; replacing $T_{cc}^*$ with $C_{p,1}T^{*}C_{p,2}^{\intercal}$ , we have + +$$ +\begin{array}{l} I _ {3} - I _ {3} ^ {\prime} \leq \operatorname {T r} \left(\boldsymbol {S} _ {1} \boldsymbol {T} ^ {*} \boldsymbol {S} _ {2} \left(\boldsymbol {T} ^ {*}\right) ^ {\intercal}\right) - \operatorname {T r} \left(\boldsymbol {S} _ {1} ^ {(c)} \boldsymbol {C} _ {p, 1} \boldsymbol {T} ^ {*} \boldsymbol {C} _ {p, 2} ^ {\intercal} \boldsymbol {S} _ {2} ^ {(c)} \boldsymbol {C} _ {p, 2} \left(\boldsymbol {T} ^ {*}\right) ^ {\intercal} \boldsymbol {C} _ {p, 1} ^ {\intercal}\right) \\ = \operatorname {T r} \left(\boldsymbol {U} _ {1} \boldsymbol {P} \boldsymbol {U} _ {2} \boldsymbol {P} ^ {\intercal}\right) - \operatorname {T r} \left(\boldsymbol {\Pi} _ {w, 1} \boldsymbol {U} _ {1} \boldsymbol {\Pi} _ {w, 1} \boldsymbol {P} \boldsymbol {\Pi} _ {w, 2} \boldsymbol {U} _ {2} \boldsymbol {\Pi} _ {w, 2} \boldsymbol {P} ^ {\intercal}\right), \\ \end{array} +$$ + +where we apply the same derivation as in Equation (16) to obtain the second line above. For simplicity, we let $U_1^{\Pi} \coloneqq \Pi_{w,1}U_1\Pi_{w,1}$ and $U_2^{\Pi} \coloneqq \Pi_{w,2}U_2\Pi_{w,2}$ . 
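Under one standard weighted-coarsening construction (an assumption here: $C_w[k,i] = \sqrt{m_i/m_k^{(c)}}$ for node $i$ in cluster $k$, which gives $C_wC_w^\intercal = I_n$), the matrices $\Pi_{w,\ell} = C_{w,\ell}^\intercal C_{w,\ell}$ are orthogonal projections, as the idempotence used repeatedly in these derivations requires; a small check:

```python
import numpy as np

rng = np.random.default_rng(2)

# One construction consistent with C_w C_w^T = I_n (an assumption here):
# C_w[k, i] = sqrt(m_i / m_k^{(c)}) for node i in cluster k.
N, n = 6, 3
labels = np.array([0, 0, 1, 1, 2, 2])
m = rng.random(N)
m /= m.sum()
m_c = np.array([m[labels == k].sum() for k in range(n)])

C_w = np.zeros((n, N))
for i, k in enumerate(labels):
    C_w[k, i] = np.sqrt(m[i] / m_c[k])

assert np.allclose(C_w @ C_w.T, np.eye(n))   # orthonormal rows

Pi_w = C_w.T @ C_w
assert np.allclose(Pi_w @ Pi_w, Pi_w)        # idempotent
assert np.allclose(Pi_w, Pi_w.T)             # symmetric
```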
We can now bound $I_3 - I_3'$ as + +$$ +\begin{array}{l} I _ {3} - I _ {3} ^ {\prime} \leq \operatorname {T r} \left(\boldsymbol {U} _ {1} \boldsymbol {P} \boldsymbol {U} _ {2} \boldsymbol {P} ^ {\intercal}\right) - \operatorname {T r} \left(\boldsymbol {U} _ {1} ^ {\Pi} \boldsymbol {P} \boldsymbol {U} _ {2} ^ {\Pi} \boldsymbol {P} ^ {\intercal}\right) \\ = \operatorname {T r} \left(\boldsymbol {U} _ {1} \boldsymbol {P} \boldsymbol {U} _ {2} \boldsymbol {P} ^ {\intercal}\right) - \operatorname {T r} \left(\boldsymbol {U} _ {1} ^ {\Pi} \boldsymbol {P} \boldsymbol {U} _ {2} \boldsymbol {P} ^ {\intercal}\right) + \operatorname {T r} \left(\boldsymbol {U} _ {1} ^ {\Pi} \boldsymbol {P} \boldsymbol {U} _ {2} \boldsymbol {P} ^ {\intercal}\right) - \operatorname {T r} \left(\boldsymbol {U} _ {1} ^ {\Pi} \boldsymbol {P} \boldsymbol {U} _ {2} ^ {\Pi} \boldsymbol {P} ^ {\intercal}\right) \\ = \operatorname {T r} \left[ \left(\boldsymbol {U} _ {1} - \boldsymbol {U} _ {1} ^ {\Pi}\right) \boldsymbol {P} \boldsymbol {U} _ {2} \boldsymbol {P} ^ {\intercal} \right] + \operatorname {T r} \left[ \boldsymbol {U} _ {1} ^ {\Pi} \boldsymbol {P} \left(\boldsymbol {U} _ {2} - \boldsymbol {U} _ {2} ^ {\Pi}\right) \boldsymbol {P} ^ {\intercal} \right] \\ = \operatorname {T r} \left[ \left(\boldsymbol {U} _ {1} - \boldsymbol {U} _ {1} ^ {\Pi}\right) \boldsymbol {P} \boldsymbol {U} _ {2} \boldsymbol {P} ^ {\intercal} \right] + \operatorname {T r} \left[ \boldsymbol {U} _ {1} \boldsymbol {P} \left(\boldsymbol {U} _ {2} - \boldsymbol {U} _ {2} ^ {\Pi}\right) \boldsymbol {P} ^ {\intercal} \right] - \operatorname {T r} \left[ \boldsymbol {U} _ {1} \boldsymbol {P} \left(\boldsymbol {U} _ {2} - \boldsymbol {U} _ {2} ^ {\Pi}\right) \boldsymbol {P} ^ {\intercal} \right] + \operatorname {T r} \left[ \boldsymbol {U} _ {1} ^ {\Pi} \boldsymbol {P} \left(\boldsymbol {U} _ {2} - \boldsymbol {U} _ {2} ^ {\Pi}\right) \boldsymbol {P} ^ {\intercal} \right] \tag {13} \\ = \operatorname {T r} \left[ 
\left(\boldsymbol {U} _ {1} - \boldsymbol {U} _ {1} ^ {\Pi}\right) \boldsymbol {V} _ {1} \right] + \operatorname {T r} \left[ \boldsymbol {V} _ {2} \left(\boldsymbol {U} _ {2} - \boldsymbol {U} _ {2} ^ {\Pi}\right) \right] - \operatorname {T r} \left[ \left(\boldsymbol {U} _ {1} - \boldsymbol {U} _ {1} ^ {\Pi}\right) \boldsymbol {P} \left(\boldsymbol {U} _ {2} - \boldsymbol {U} _ {2} ^ {\Pi}\right) \boldsymbol {P} ^ {\intercal} \right]. \\ \end{array} +$$ + +The first two terms in the last line above can be directly addressed by Lemma E.5; for the last term, we can bound it as + +$$ +\begin{array}{l} \left| \operatorname {T r} \left[ \left(\boldsymbol {U} _ {1} - \boldsymbol {U} _ {1} ^ {\Pi}\right) \boldsymbol {P} \left(\boldsymbol {U} _ {2} - \boldsymbol {U} _ {2} ^ {\Pi}\right) \boldsymbol {P} ^ {\intercal} \right] \right| \leq \left\| \left(\boldsymbol {U} _ {1} - \boldsymbol {U} _ {1} ^ {\Pi}\right) \boldsymbol {P} \right\| _ {F} \left\| \left(\boldsymbol {U} _ {2} - \boldsymbol {U} _ {2} ^ {\Pi}\right) \boldsymbol {P} ^ {\intercal} \right\| _ {F} \\ \leq \left\| \boldsymbol {U} _ {1} - \boldsymbol {U} _ {1} ^ {\Pi} \right\| _ {F} \| \boldsymbol {P} \| \left\| \boldsymbol {U} _ {2} - \boldsymbol {U} _ {2} ^ {\Pi} \right\| _ {F} \| \boldsymbol {P} \| \leq \left\| \boldsymbol {U} _ {1} - \boldsymbol {U} _ {1} ^ {\Pi} \right\| _ {F} \| \boldsymbol {U} _ {2} - \boldsymbol {U} _ {2} ^ {\Pi} \| _ {F}, \\ \end{array} +$$ + +where Lemma E.8 shows $\| P\| \leq 1$ and justifies the last inequality. 
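Both facts invoked here, $\|P\| \leq 1$ for the normalized plan of a non-negative matrix (Lemma E.8) and the Cauchy–Schwarz bound on the trace of the product, can be checked numerically on random instances:

```python
import numpy as np

rng = np.random.default_rng(3)

# Lemma E.8 and the Cauchy-Schwarz step on random instances:
# P = W_1^{-1/2} T W_2^{-1/2} has operator norm at most 1, and
# |Tr[A P B P^T]| <= ||A||_F ||B||_F ||P||^2.
N1, N2 = 5, 7
T = rng.random((N1, N2))                  # an arbitrary non-negative matrix
m1, m2 = T.sum(axis=1), T.sum(axis=0)
P = np.diag(m1**-0.5) @ T @ np.diag(m2**-0.5)
assert np.linalg.norm(P, 2) <= 1.0 + 1e-10

A = rng.standard_normal((N1, N1))         # stand-in for U_1 - U_1^Pi
B = rng.standard_normal((N2, N2))         # stand-in for U_2 - U_2^Pi
lhs = abs(np.trace(A @ P @ B @ P.T))
rhs = (np.linalg.norm(A, 'fro') * np.linalg.norm(B, 'fro')
       * np.linalg.norm(P, 2) ** 2)
assert lhs <= rhs + 1e-10
```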
We now give another useful fact that $I_1 - I_1' = \left\| \boldsymbol{U}_1 - \boldsymbol{U}_1^\Pi \right\|_F^2$:

$$
\begin{array}{l} I_1 - I_1' \stackrel{(i)}{=} \operatorname{Tr}\left[\boldsymbol{U}_1^2 - \left(\boldsymbol{U}_1^{\Pi}\right)^2\right] = \operatorname{Tr}\left[\boldsymbol{U}_1^2 + \left(\boldsymbol{U}_1^{\Pi}\right)^2\right] - 2\operatorname{Tr}\left[\left(\boldsymbol{\Pi}_{w,1}\boldsymbol{U}_1\boldsymbol{\Pi}_{w,1}\right)^2\right] \\ = \operatorname{Tr}\left[\boldsymbol{U}_1^2 + \left(\boldsymbol{U}_1^{\Pi}\right)^2\right] - \operatorname{Tr}\left(\boldsymbol{\Pi}_{w,1}\boldsymbol{U}_1\boldsymbol{\Pi}_{w,1}\boldsymbol{U}_1\right) - \operatorname{Tr}\left(\boldsymbol{U}_1\boldsymbol{\Pi}_{w,1}\boldsymbol{U}_1\boldsymbol{\Pi}_{w,1}\right) = \operatorname{Tr}\left[\left(\boldsymbol{U}_1 - \boldsymbol{U}_1^{\Pi}\right)^2\right] \\ = \left\|\boldsymbol{U}_1 - \boldsymbol{U}_1^{\Pi}\right\|_F^2, \\ \end{array}
$$

where $(i)$ comes from Equation (16). Analogously we have $I_2 - I_2' = \left\| U_2 - U_2^\Pi \right\|_F^2$. Combining the above pieces together, we can bound the target quantity as

$$
\begin{array}{l}
\mathrm{GW}_2^2\left(G_1^{(c)}, G_2^{(c)}\right) - \mathrm{GW}_2^2\left(G_1, G_2\right) \leq -\left\|\boldsymbol{U}_1 - \boldsymbol{U}_1^{\Pi}\right\|_F^2 - \left\|\boldsymbol{U}_2 - \boldsymbol{U}_2^{\Pi}\right\|_F^2 + \\ 2\operatorname{Tr}\left[\left(\boldsymbol{U}_1 - \boldsymbol{U}_1^{\Pi}\right)\boldsymbol{V}_1\right] + 2\operatorname{Tr}\left[\boldsymbol{V}_2\left(\boldsymbol{U}_2 - \boldsymbol{U}_2^{\Pi}\right)\right] + 2\left\|\boldsymbol{U}_1 - \boldsymbol{U}_1^{\Pi}\right\|_F \left\|\boldsymbol{U}_2 - \boldsymbol{U}_2^{\Pi}\right\|_F \\ \leq 2\operatorname{Tr}\left[\left(\boldsymbol{U}_1 - \boldsymbol{U}_1^{\Pi}\right)\boldsymbol{V}_1\right] + 2\operatorname{Tr}\left[\boldsymbol{V}_2\left(\boldsymbol{U}_2 - \boldsymbol{U}_2^{\Pi}\right)\right] \\ \stackrel{(i)}{\leq} 2 \cdot \left[\nu_{1, N_1 - n_1 + 1} \sum_{i=1}^{n_1}\left(\lambda_{1,i} - \lambda_{1,i}^{(c)}\right) + C_{\boldsymbol{U}_1, \boldsymbol{V}_1, n_1} + \right. \\ \left. \nu_{2, N_2 - n_2 + 1} \sum_{i=1}^{n_2}\left(\lambda_{2,i} - \lambda_{2,i}^{(c)}\right) + C_{\boldsymbol{U}_2, \boldsymbol{V}_2, n_2}\right], \\ \end{array}
$$

where we reuse Inequality (18) to attain $(i)$. The proof of Theorem 4.4 is now complete.

# E.4. Proof of some technical results

# E.4.1. PROOF OF LEMMA E.3

Proof.
Following the definition in Equation (1), we rewrite the GW distance in the trace form and have + +$$ +\begin{array}{l} \langle \boldsymbol {M}, \boldsymbol {T} ^ {*} \rangle = \left\langle f _ {1} (\boldsymbol {S} _ {1}) \boldsymbol {m} _ {1} \mathbf {1} _ {N _ {2}} ^ {\intercal}, \boldsymbol {T} ^ {*} \right\rangle + \left\langle \mathbf {1} _ {N _ {1}} \boldsymbol {m} _ {2} ^ {\intercal} f _ {2} (\boldsymbol {S} _ {2}) ^ {\intercal}, \boldsymbol {T} ^ {*} \right\rangle - \left\langle h _ {1} (\boldsymbol {S} _ {1}) \boldsymbol {T} ^ {*} h _ {2} (\boldsymbol {S} _ {2}) ^ {\intercal}, \boldsymbol {T} ^ {*} \right\rangle \\ = \operatorname {T r} \left(f _ {1} \left(\boldsymbol {S} _ {1}\right) \boldsymbol {m} _ {1} \mathbf {1} _ {N _ {2}} ^ {\intercal} \left(\boldsymbol {T} ^ {*}\right) ^ {\intercal}\right) + \operatorname {T r} \left(f _ {2} \left(\boldsymbol {S} _ {2}\right) \boldsymbol {m} _ {2} \mathbf {1} _ {N _ {1}} ^ {\intercal} \boldsymbol {T} ^ {*}\right) - \operatorname {T r} \left(h _ {1} \left(\boldsymbol {S} _ {1}\right) \boldsymbol {T} ^ {*} h _ {2} \left(\boldsymbol {S} _ {2}\right) ^ {\intercal} \left(\boldsymbol {T} ^ {*}\right) ^ {\intercal}\right) \\ = \operatorname {T r} \left(\boldsymbol {m} _ {1} ^ {\intercal} f _ {1} (\boldsymbol {S} _ {1}) \boldsymbol {m} _ {1}\right) + \operatorname {T r} \left(\boldsymbol {m} _ {2} ^ {\intercal} f _ {2} (\boldsymbol {S} _ {2}) \boldsymbol {m} _ {2}\right) - \operatorname {T r} \left(h _ {1} (\boldsymbol {S} _ {1}) \boldsymbol {T} ^ {*} h _ {2} (\boldsymbol {S} _ {2}) ^ {\intercal} \left(\boldsymbol {T} ^ {*}\right) ^ {\intercal}\right); \tag {14} \\ \end{array} +$$ + +the third equation above holds because for any $T \in \Pi(\mu_1, \mu_2)$ , $T\mathbf{1}_{N_2} = \mathbf{m}_1$ and $T^{\top}\mathbf{1}_{N_1} = \mathbf{m}_2$ . 
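The marginal conditions are exactly what makes the first two inner products coupling-independent; a quick check with the product coupling $T = m_1 m_2^\intercal$ (one feasible element of $\Pi(\mu_1,\mu_2)$):

```python
import numpy as np

rng = np.random.default_rng(4)

# For any coupling T (T 1 = m1, T^T 1 = m2) the term <f1(S1) m1 1^T, T>
# collapses to m1^T f1(S1) m1; checked with the product coupling m1 m2^T.
N1, N2 = 4, 5
m1 = rng.random(N1); m1 /= m1.sum()
m2 = rng.random(N2); m2 /= m2.sum()
T = np.outer(m1, m2)                      # one feasible coupling
assert np.allclose(T.sum(axis=1), m1)
assert np.allclose(T.sum(axis=0), m2)

S1 = rng.random((N1, N1)); S1 = (S1 + S1.T) / 2
F1 = S1 * S1                              # f_1(S_1) = S_1 ⊙ S_1 (square loss)
lhs = np.sum((F1 @ np.outer(m1, np.ones(N2))) * T)  # <f1(S1) m1 1^T, T>
rhs = m1 @ F1 @ m1                                  # m1^T f1(S1) m1
assert np.isclose(lhs, rhs)
```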
+ +In the classical square loss case, we can immediately have $f_{1}(S_{1}) = S_{1} \odot S_{1}, f_{2}(S_{2}) = S_{2} \odot S_{2}, h_{1}(S_{1}) = S_{1}$ and $h_{2}(S_{2}) = 2S_{2}$ , where $\odot$ denotes the Hadamard product of two matrices with the same size. We can accordingly expand the first term in Equation (14) as + +$$ +\operatorname {T r} \left(\boldsymbol {m} _ {1} ^ {\intercal} f _ {1} (\boldsymbol {S} _ {1}) \boldsymbol {m} _ {1}\right) = \boldsymbol {m} _ {1} ^ {\intercal} \left(\boldsymbol {S} _ {1} \odot \boldsymbol {S} _ {1}\right) \boldsymbol {m} _ {1} = \operatorname {T r} \left(\boldsymbol {S} _ {1} ^ {\intercal} \operatorname {d i a g} (\boldsymbol {m} _ {1}) \boldsymbol {S} _ {1} \operatorname {d i a g} (\boldsymbol {m} _ {1})\right), +$$ + +in which the proof of the last equation is provided in a summary sheet by Minka (2000). We note $W_{1} \coloneqq \mathrm{diag}(m_{1})$ and $S_{1}$ is constructed symmetric; combining the pieces above we have + +$$ +\operatorname {T r} \left(\boldsymbol {m} _ {1} ^ {\intercal} f _ {1} (\boldsymbol {S} _ {1}) \boldsymbol {m} _ {1}\right) = \operatorname {T r} \left(\left(\boldsymbol {W} _ {1} ^ {\frac {1}{2}} \boldsymbol {S} _ {1} \boldsymbol {W} _ {1} ^ {\frac {1}{2}}\right) \left(\boldsymbol {W} _ {1} ^ {\frac {1}{2}} \boldsymbol {S} _ {1} \boldsymbol {W} _ {1} ^ {\frac {1}{2}}\right)\right), +$$ + +and similarly we obtain $\operatorname{Tr}(m_2^{\mathsf{T}}f_2(S_2)m_2) = \operatorname{Tr}\left(\left(W_2^{\frac{1}{2}}S_2W_2^{\frac{1}{2}}\right)\left(W_2^{\frac{1}{2}}S_2W_2^{\frac{1}{2}}\right)\right)$ . 
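The Minka (2000) identity $m^\intercal(S \odot S)m = \operatorname{Tr}(S^\intercal\operatorname{diag}(m)S\operatorname{diag}(m))$ is easy to confirm numerically:

```python
import numpy as np

rng = np.random.default_rng(5)

# The identity attributed to Minka (2000):
# m^T (S ⊙ S) m = Tr(S^T diag(m) S diag(m)), for any square S.
N = 6
m = rng.random(N)
S = rng.standard_normal((N, N))
W = np.diag(m)
lhs = m @ (S * S) @ m
rhs = np.trace(S.T @ W @ S @ W)
assert np.isclose(lhs, rhs)
```

Expanding both sides gives $\sum_{i,j} S_{ij}^2 m_i m_j$, so the identity holds for any square $S$, not only symmetric ones.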
The GW distance (14) can therefore be represented as

$$
\underbrace{\operatorname{Tr}\left(\left(\boldsymbol{W}_1^{\frac{1}{2}}\boldsymbol{S}_1\boldsymbol{W}_1^{\frac{1}{2}}\right)\left(\boldsymbol{W}_1^{\frac{1}{2}}\boldsymbol{S}_1\boldsymbol{W}_1^{\frac{1}{2}}\right)\right)}_{=: I_1} + \underbrace{\operatorname{Tr}\left(\left(\boldsymbol{W}_2^{\frac{1}{2}}\boldsymbol{S}_2\boldsymbol{W}_2^{\frac{1}{2}}\right)\left(\boldsymbol{W}_2^{\frac{1}{2}}\boldsymbol{S}_2\boldsymbol{W}_2^{\frac{1}{2}}\right)\right)}_{=: I_2} - 2\underbrace{\operatorname{Tr}\left(\boldsymbol{S}_1\boldsymbol{T}^*\boldsymbol{S}_2\left(\boldsymbol{T}^*\right)^{\intercal}\right)}_{=: I_3}. \tag{15}
$$

# E.4.2. PROOF OF LEMMA E.4

Proof. First, recall that $\Pi_w = W_1^{\frac{1}{2}}C_p^\intercal \bar{C}_wW_1^{-\frac{1}{2}} = C_w^\intercal C_w$. So, we have

$$
\begin{array}{l} I_1' = \operatorname{Tr}\left(\left[\left(\boldsymbol{C}_p\boldsymbol{W}_1\boldsymbol{C}_p^{\intercal}\right)^{\frac{1}{2}}\left(\bar{\boldsymbol{C}}_w\boldsymbol{S}_1\bar{\boldsymbol{C}}_w^{\intercal}\right)\left(\boldsymbol{C}_p\boldsymbol{W}_1\boldsymbol{C}_p^{\intercal}\right)^{\frac{1}{2}}\right]^2\right) \\ = \operatorname{Tr}\left[\left(\boldsymbol{C}_p\boldsymbol{W}_1\boldsymbol{C}_p^{\intercal}\right)\left(\bar{\boldsymbol{C}}_w\boldsymbol{S}_1\bar{\boldsymbol{C}}_w^{\intercal}\right)\left(\boldsymbol{C}_p\boldsymbol{W}_1\boldsymbol{C}_p^{\intercal}\right)\left(\bar{\boldsymbol{C}}_w\boldsymbol{S}_1\bar{\boldsymbol{C}}_w^{\intercal}\right)\right] \\ = \operatorname{Tr}\left(\boldsymbol{W}_1^{\frac{1}{2}}\boldsymbol{C}_p^{\intercal}\bar{\boldsymbol{C}}_w\boldsymbol{S}_1\bar{\boldsymbol{C}}_w^{\intercal}\boldsymbol{C}_p\boldsymbol{W}_1\boldsymbol{C}_p^{\intercal}\bar{\boldsymbol{C}}_w\boldsymbol{S}_1\bar{\boldsymbol{C}}_w^{\intercal}\boldsymbol{C}_p\boldsymbol{W}_1^{\frac{1}{2}}\right) \tag{16} \\ = \operatorname{Tr}\left(\boldsymbol{\Pi}_w\boldsymbol{U}_1\boldsymbol{\Pi}_w^{\intercal}\boldsymbol{\Pi}_w\boldsymbol{U}_1\boldsymbol{\Pi}_w\right) \\ = \operatorname{Tr}\left[\left(\boldsymbol{\Pi}_w\boldsymbol{U}_1\boldsymbol{\Pi}_w\right)^2\right]. \\ \end{array}
$$

This implies $I_1 - I_1' = \operatorname{Tr}\left[\boldsymbol{U}^2 - (\boldsymbol{\Pi}_w \boldsymbol{U} \boldsymbol{\Pi}_w)^2\right]$. Applying Lemma E.9 yields $I_1 - I_1' \geq 0$.

To bound the other direction, we have

$$
\begin{array}{l} I_1 - I_1' = \operatorname{Tr}\left[\boldsymbol{U}^2 - \left(\boldsymbol{\Pi}_w\boldsymbol{U}\boldsymbol{\Pi}_w\right)^2\right] \\ = \sum_{i=1}^{N_1}\lambda_i^2 - \sum_{i=1}^{n_1}\left(\lambda_i^{(c)}\right)^2 \stackrel{\text{(i)}}{\leq} \sum_{i=1}^{N_1}\lambda_i^2 - \sum_{i=1}^{n_1}\lambda_i^{(c)}\lambda_{N_1-n_1+i} \\ = \sum_{i=1}^{n_1}\left(\lambda_i - \lambda_i^{(c)}\right)\lambda_{N_1-n_1+i} + \sum_{i=1}^{n_1}\lambda_i\left(\lambda_i - \lambda_{N_1-n_1+i}\right) + \sum_{i=n_1+1}^{N_1}\lambda_i^2 \\ = \sum_{i=1}^{n_1}\left(\lambda_i - \lambda_i^{(c)}\right)\lambda_{N_1-n_1+i} + C_{\boldsymbol{U},n_1} \\ \stackrel{\text{(ii)}}{\leq} \lambda_{N_1-n_1+1}\sum_{i=1}^{n_1}\left(\lambda_i - \lambda_i^{(c)}\right) + C_{\boldsymbol{U},n_1}. \tag{17} \\ \end{array}
$$

Here, both (i) and (ii) are by Theorem D.1.
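Steps (i) and (ii) rest on the Poincaré separation theorem: for a symmetric $A$ and any $C$ with orthonormal rows, the eigenvalues of $CAC^\intercal$ interlace those of $A$. A random-instance check:

```python
import numpy as np

rng = np.random.default_rng(6)

# Poincaré separation (Theorem D.1): for symmetric A and C with orthonormal
# rows, the eigenvalues of C A C^T interlace those of A:
# lambda_i >= lambda_i^{(c)} >= lambda_{N-n+i} (descending order).
N, n = 8, 3
A = rng.standard_normal((N, N))
A = (A + A.T) / 2
Q, _ = np.linalg.qr(rng.standard_normal((N, n)))
C = Q.T                                   # n x N with C C^T = I_n

lam = np.sort(np.linalg.eigvalsh(A))[::-1]            # descending
lam_c = np.sort(np.linalg.eigvalsh(C @ A @ C.T))[::-1]
for i in range(n):
    assert lam[i] >= lam_c[i] - 1e-10
    assert lam_c[i] >= lam[N - n + i] - 1e-10
```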
# E.4.3. PROOF OF LEMMA E.5

Proof. By applying Lemma E.7, we have

$$
\begin{array}{l} 0 \leq \operatorname{Tr}\left(\boldsymbol{S}_1\boldsymbol{T}^*\boldsymbol{S}_2(\boldsymbol{T}^*)^{\intercal} - \boldsymbol{S}_1^{(c)}\boldsymbol{T}_{co}^*\boldsymbol{S}_2(\boldsymbol{T}_{co}^*)^{\intercal}\right) \leq \operatorname{Tr}\left[(\boldsymbol{U} - \boldsymbol{\Pi}_w\boldsymbol{U}\boldsymbol{\Pi}_w)\boldsymbol{V}\right] \\ = \operatorname{Tr}\left(\boldsymbol{U}\boldsymbol{V}\right) - \operatorname{Tr}\left(\boldsymbol{C}_w\boldsymbol{U}\boldsymbol{C}_w^{\intercal}\boldsymbol{C}_w\boldsymbol{V}\boldsymbol{C}_w^{\intercal}\right). \\ \end{array}
$$

Recall that $\left\{\lambda_i^{(c)}\right\}_{i = 1}^{n_1}$ are the eigenvalues of $C_wUC_w^\intercal$; let $\nu_1^{(c)} \geq \nu_2^{(c)} \geq \dots \geq \nu_{n_1}^{(c)}$ be the eigenvalues of $C_wVC_w^\intercal$, and $\nu_1 \geq \nu_2 \geq \dots \geq \nu_{N_1}$ those of $V$. Applying Ruhe's trace inequality (c.f.
Appendix D.2), we further have + +$$ +\begin{array}{l} \operatorname {T r} \left(\boldsymbol {U} \boldsymbol {V}\right) - \operatorname {T r} \left(\boldsymbol {C} _ {w} \boldsymbol {U} \boldsymbol {C} _ {w} ^ {\intercal} \boldsymbol {C} _ {w} \boldsymbol {V} \boldsymbol {C} _ {w} ^ {\intercal}\right) \stackrel {(i)} {\leq} \sum_ {i = 1} ^ {N _ {1}} \lambda_ {i} \nu_ {i} - \sum_ {i = 1} ^ {n _ {1}} \lambda_ {i} ^ {(c)} \nu_ {n _ {1} - i + 1} ^ {(c)} \stackrel {(i i)} {\leq} \sum_ {i = 1} ^ {N _ {1}} \lambda_ {i} \nu_ {i} - \sum_ {i = 1} ^ {n _ {1}} \lambda_ {i} ^ {(c)} \nu_ {N _ {1} - i + 1} \\ = \sum_ {i = 1} ^ {n _ {1}} \left(\lambda_ {i} - \lambda_ {i} ^ {(c)}\right) \nu_ {N _ {1} - i + 1} + \sum_ {i = 1} ^ {n _ {1}} \lambda_ {i} \left(\nu_ {i} - \nu_ {N _ {1} - i + 1}\right) + \sum_ {i = n _ {1} + 1} ^ {N _ {1}} \lambda_ {i} \nu_ {i} = \sum_ {i = 1} ^ {n _ {1}} \left(\lambda_ {i} - \lambda_ {i} ^ {(c)}\right) \nu_ {N _ {1} - i + 1} + C _ {\boldsymbol {U}, \boldsymbol {V}, n _ {1}} \\ \stackrel {(i i i)} {\leq} \nu_ {N _ {1} - n _ {1} + 1} \sum_ {i = 1} ^ {n _ {1}} \left(\lambda_ {i} - \lambda_ {i} ^ {(c)}\right) + C _ {\mathbf {U}, \mathbf {V}, n _ {1}}, \tag {18} \\ \end{array} +$$ + +(i) is by Ruhe's trace inequality (Lemma D.2), since both $\mathbf{U}$ and $\mathbf{V}$ are PSD, and both (ii) and (iii) are by Poincaré separation theorem (Theorem D.1). + +# E.5. Other technical lemmas + +Lemma E.7. We have + +$$ +0 \leq \operatorname {T r} \left(\boldsymbol {S} _ {1} \boldsymbol {T} ^ {*} \boldsymbol {S} _ {2} (\boldsymbol {T} ^ {*}) ^ {\intercal} - \boldsymbol {S} _ {1} ^ {(c)} \boldsymbol {T} _ {c o} ^ {*} \boldsymbol {S} _ {2} (\boldsymbol {T} _ {c o} ^ {*}) ^ {\intercal}\right) \leq \operatorname {T r} \left[ (\boldsymbol {U} - \boldsymbol {\Pi} _ {w} \boldsymbol {U} \boldsymbol {\Pi} _ {w}) \boldsymbol {V} \right]. +$$ + +Proof. The proof is based on the optimality of $T_{co}^{*}$ , i.e. 
the GW distance induced by $T_{co}^{*}$ must be upper bounded by that of any other transport matrix. Intuitively, we can imagine the mass of a cluster center being transported to the same target nodes in $G_{2}$ as the original source nodes within this cluster, which corresponds to the transport matrix $\widetilde{T}_{co} \coloneqq C_{p}T^{*}$. We can verify that $\widetilde{T}_{co} \in \Pi (\mu_1^{(c)},\mu_2)$ is feasible, since $(C_{p}T^{*})\mathbf{1}_{N_2} = C_{p}\boldsymbol{m}_1 = \boldsymbol{m}_1^{(c)}$ and $(C_{p}T^{*})^{\intercal}\mathbf{1}_{n_1} = (T^{*})^{\intercal}\mathbf{1}_{N_1} = \boldsymbol{m}_2$.

To derive the upper bound, applying the optimality of $T_{co}^{*}$ yields

$$
\begin{array}{l} \operatorname{Tr}\left(\boldsymbol{S}_1\boldsymbol{T}^*\boldsymbol{S}_2\left(\boldsymbol{T}^*\right)^{\intercal} - \boldsymbol{S}_1^{(c)}\boldsymbol{T}_{co}^*\boldsymbol{S}_2\left(\boldsymbol{T}_{co}^*\right)^{\intercal}\right) \leq \operatorname{Tr}\left(\boldsymbol{S}_1\boldsymbol{T}^*\boldsymbol{S}_2\left(\boldsymbol{T}^*\right)^{\intercal}\right) - \operatorname{Tr}\left(\boldsymbol{S}_1^{(c)}\widetilde{\boldsymbol{T}}_{co}\boldsymbol{S}_2\widetilde{\boldsymbol{T}}_{co}^{\intercal}\right) \tag{19} \\ = \operatorname{Tr}\left(\boldsymbol{S}_1\boldsymbol{T}^*\boldsymbol{S}_2\left(\boldsymbol{T}^*\right)^{\intercal}\right) - \operatorname{Tr}\left(\left(\bar{\boldsymbol{C}}_w\boldsymbol{S}_1\bar{\boldsymbol{C}}_w^{\intercal}\right)\left(\boldsymbol{C}_p\boldsymbol{T}^*\right)\boldsymbol{S}_2\left(\boldsymbol{C}_p\boldsymbol{T}^*\right)^{\intercal}\right) \tag{20} \\ = \operatorname{Tr}\left(\boldsymbol{S}_1\boldsymbol{T}^*\boldsymbol{S}_2\left(\boldsymbol{T}^*\right)^{\intercal}\right) - \operatorname{Tr}\left(\bar{\boldsymbol{C}}_w\boldsymbol{S}_1\bar{\boldsymbol{C}}_w^{\intercal}\boldsymbol{C}_p\boldsymbol{T}^*\boldsymbol{S}_2\left(\boldsymbol{T}^*\right)^{\intercal}\boldsymbol{C}_p^{\intercal}\right) \tag{21} \\ = \operatorname{Tr}\left(\boldsymbol{S}_1\left(\boldsymbol{W}_1^{\frac{1}{2}}\boldsymbol{P}\boldsymbol{W}_2^{\frac{1}{2}}\right)\boldsymbol{S}_2\left(\boldsymbol{W}_1^{\frac{1}{2}}\boldsymbol{P}\boldsymbol{W}_2^{\frac{1}{2}}\right)^{\intercal}\right) - \operatorname{Tr}\left(\bar{\boldsymbol{C}}_w\boldsymbol{S}_1\bar{\boldsymbol{C}}_w^{\intercal}\boldsymbol{C}_p\left(\boldsymbol{W}_1^{\frac{1}{2}}\boldsymbol{P}\boldsymbol{W}_2^{\frac{1}{2}}\right)\boldsymbol{S}_2\left(\boldsymbol{W}_2^{\frac{1}{2}}\boldsymbol{P}^{\intercal}\boldsymbol{W}_1^{\frac{1}{2}}\right)\boldsymbol{C}_p^{\intercal}\right) \\ = \operatorname{Tr}\left(\boldsymbol{S}_1\boldsymbol{W}_1^{\frac{1}{2}}\boldsymbol{P}\boldsymbol{W}_2^{\frac{1}{2}}\boldsymbol{S}_2\boldsymbol{W}_2^{\frac{1}{2}}\boldsymbol{P}^{\intercal}\boldsymbol{W}_1^{\frac{1}{2}}\right) - \operatorname{Tr}\left(\bar{\boldsymbol{C}}_w\boldsymbol{S}_1\bar{\boldsymbol{C}}_w^{\intercal}\boldsymbol{C}_p\boldsymbol{W}_1^{\frac{1}{2}}\boldsymbol{P}\boldsymbol{W}_2^{\frac{1}{2}}\boldsymbol{S}_2\boldsymbol{W}_2^{\frac{1}{2}}\boldsymbol{P}^{\intercal}\boldsymbol{W}_1^{\frac{1}{2}}\boldsymbol{C}_p^{\intercal}\right) \\ = \operatorname{Tr}\left[\left(\underbrace{\boldsymbol{W}_1^{\frac{1}{2}}\boldsymbol{S}_1\boldsymbol{W}_1^{\frac{1}{2}} - \boldsymbol{W}_1^{\frac{1}{2}}\boldsymbol{C}_p^{\intercal}\bar{\boldsymbol{C}}_w\boldsymbol{S}_1\bar{\boldsymbol{C}}_w^{\intercal}\boldsymbol{C}_p\boldsymbol{W}_1^{\frac{1}{2}}}_{=: \boldsymbol{D}_1}\right)\boldsymbol{P}\boldsymbol{W}_2^{\frac{1}{2}}\boldsymbol{S}_2\boldsymbol{W}_2^{\frac{1}{2}}\boldsymbol{P}^{\intercal}\right]. \tag{22} \\ \end{array}
$$

The treatment is similar for the lower bound. We replace $T^{*}$ with $\widetilde{T} \coloneqq \bar{C}_w^\intercal T_{co}^* \in \Pi(\mu_1, \mu_2)$. Then, again by the optimality of $T^{*}$:

$$
\operatorname{Tr}\left(\boldsymbol{S}_1^{(c)}\boldsymbol{T}_{co}^*\boldsymbol{S}_2\left(\boldsymbol{T}_{co}^*\right)^{\intercal} - \boldsymbol{S}_1\boldsymbol{T}^*\boldsymbol{S}_2\left(\boldsymbol{T}^*\right)^{\intercal}\right) \leq \operatorname{Tr}\left(\boldsymbol{S}_1^{(c)}\boldsymbol{T}_{co}^*\boldsymbol{S}_2\left(\boldsymbol{T}_{co}^*\right)^{\intercal}\right) - \operatorname{Tr}\left(\boldsymbol{S}_1\widetilde{\boldsymbol{T}}\boldsymbol{S}_2\widetilde{\boldsymbol{T}}^{\intercal}\right) = 0
$$

given our definition.

Remark.
An intuitive scheme to control the upper bound (22) is to upper bound the trace of the $G_{1}$-related matrix difference $D_{1}$, since in coarsening $G_{1}$ we have no information about $G_{2}$; we will shortly show that this term is also the key to bounding the whole GW distance difference $|(15) - (12)|$.

Lemma E.8. Consider a non-negative matrix $\mathbf{T} \in \mathbb{R}^{N_1 \times N_2}$ in which all the elements satisfy $t_{ij} \geq 0$. As before, we denote $\mathbf{W}_1 = \mathrm{diag}\left(\mathbf{T}\mathbf{1}_{N_2}\right)$, $\mathbf{W}_2 = \mathrm{diag}\left(\mathbf{T}^\intercal \mathbf{1}_{N_1}\right)$, and $\mathbf{P} = \mathbf{W}_1^{-\frac{1}{2}}\mathbf{T}\mathbf{W}_2^{-\frac{1}{2}}$. Then we have $\| \mathbf{P} \| \leq 1$.

Proof. Motivated by the standard proof for bounding the eigenvalues of normalized Laplacian matrices, with two arbitrary vectors $\boldsymbol{u} \in \mathbb{R}^{N_1}$, $\boldsymbol{v} \in \mathbb{R}^{N_2}$ on the unit spheres, we can recast the target statement as

$$
1 - \boldsymbol{u}^{\intercal}\boldsymbol{P}\boldsymbol{v} \geq 0, \quad \forall \boldsymbol{u}, \boldsymbol{v} \ \text{s.t.} \ \|\boldsymbol{u}\| = 1, \|\boldsymbol{v}\| = 1.
\tag{23}
$$

We denote the diagonals of $W_{1}$ and $W_{2}$ respectively as $\boldsymbol{m}_{1} = [m_{1,1}, m_{1,2}, \dots, m_{1,N_{1}}]^{\intercal}$ and $\boldsymbol{m}_{2} = [m_{2,1}, m_{2,2}, \dots, m_{2,N_{2}}]^{\intercal}$; then we can rewrite the left-hand-side quantity in Inequality (23) as

$$
\begin{array}{l} 1 - \boldsymbol{u}^{\intercal}\boldsymbol{P}\boldsymbol{v} = \frac{1}{2}\left(\|\boldsymbol{u}\|^2 + \|\boldsymbol{v}\|^2\right) - \sum_{i=1}^{N_1}\sum_{j=1}^{N_2}\frac{t_{ij}\boldsymbol{u}_i\boldsymbol{v}_j}{\sqrt{m_{1,i}m_{2,j}}} \\ = \frac{1}{2}\sum_{i=1}^{N_1}\sum_{j=1}^{N_2}\frac{t_{ij}\boldsymbol{u}_i^2}{m_{1,i}} + \frac{1}{2}\sum_{j=1}^{N_2}\sum_{i=1}^{N_1}\frac{t_{ij}\boldsymbol{v}_j^2}{m_{2,j}} - \sum_{i=1}^{N_1}\sum_{j=1}^{N_2}\frac{t_{ij}\boldsymbol{u}_i\boldsymbol{v}_j}{\sqrt{m_{1,i}m_{2,j}}} \\ = \frac{1}{2}\sum_{i=1}^{N_1}\sum_{j=1}^{N_2}t_{ij}\left(\frac{\boldsymbol{u}_i}{\sqrt{m_{1,i}}} - \frac{\boldsymbol{v}_j}{\sqrt{m_{2,j}}}\right)^2 \geq 0, \\ \end{array}
$$

where the second equation holds due to the conditions $\sum_{j=1}^{N_2} t_{ij} = m_{1,i}$ and $\sum_{i=1}^{N_1} t_{ij} = m_{2,j}$, and the last inequality holds since $t_{ij} \geq 0, \forall i \in [N_1], j \in [N_2]$.

Lemma E.9. Consider a positive semi-definite (PSD) similarity matrix $S \in \mathbb{R}^{N \times N}$ along with the probability mass vector $\boldsymbol{m}$ and the diagonal matrix $\boldsymbol{W} \coloneqq \operatorname{diag}(\boldsymbol{m})$. For any non-overlapping partition $\{\mathcal{P}_1, \mathcal{P}_2, \dots, \mathcal{P}_n\}$, we denote the corresponding coarsening matrices $C_p, \bar{C}_w$ and the projection matrix $\Pi_w = C_w^\intercal C_w$. Let $\boldsymbol{A} \coloneqq \boldsymbol{W}^{\frac{1}{2}}\boldsymbol{S}\boldsymbol{W}^{\frac{1}{2}}$.
Then we have

$$
\operatorname{Tr}\left(\boldsymbol{A}^2\right) \geq \operatorname{Tr}\left[\left(\boldsymbol{\Pi}_w\boldsymbol{A}\boldsymbol{\Pi}_w\right)^2\right]. \tag{24}
$$

Proof. We first transform $\operatorname{Tr}\left[(\boldsymbol{\Pi}_w\boldsymbol{A}\boldsymbol{\Pi}_w)^2\right]$ as follows:

$$
\operatorname{Tr}\left[(\boldsymbol{\Pi}_w\boldsymbol{A}\boldsymbol{\Pi}_w)^2\right] = \operatorname{Tr}(\boldsymbol{\Pi}_w\boldsymbol{A}\boldsymbol{\Pi}_w\boldsymbol{\Pi}_w\boldsymbol{A}\boldsymbol{\Pi}_w) = \operatorname{Tr}(\boldsymbol{C}_w^{\intercal}\boldsymbol{C}_w\boldsymbol{A}\boldsymbol{C}_w^{\intercal}\boldsymbol{C}_w\boldsymbol{A}\boldsymbol{C}_w^{\intercal}\boldsymbol{C}_w) = \operatorname{Tr}(\boldsymbol{C}_w\boldsymbol{A}\boldsymbol{C}_w^{\intercal}\boldsymbol{C}_w\boldsymbol{A}\boldsymbol{C}_w^{\intercal}),
$$

where the last two equations hold due to $C_w C_w^\intercal = I_n$.

We notice that $\boldsymbol{A} \coloneqq \boldsymbol{W}^{\frac{1}{2}}\boldsymbol{S}\boldsymbol{W}^{\frac{1}{2}}$ is symmetric, so we can apply the Poincaré separation theorem (c.f. Appendix D.1 for the complete statement) to control the eigenvalues of $\boldsymbol{C}_w\boldsymbol{A}\boldsymbol{C}_w^\intercal$. Specifically, let $\lambda_1 \geq \lambda_2 \geq \dots \geq \lambda_N$ be the eigenvalues of $\boldsymbol{A}$, and let $\lambda_1^{(c)} \geq \dots \geq \lambda_n^{(c)}$ be the eigenvalues of $\boldsymbol{C}_w\boldsymbol{A}\boldsymbol{C}_w^\intercal$; for all $i \in [n]$ we have $\lambda_i \geq \lambda_i^{(c)} \geq 0$ (the non-negativity being due to the PSD-ness of $\boldsymbol{A}$), which further implies $\lambda_i^2 \geq (\lambda_i^{(c)})^2$ for all $i \in [n]$. We can therefore complete the proof with $\operatorname{Tr}(\boldsymbol{A}^2) = \sum_{i=1}^{N} \lambda_i^2 \geq \sum_{i=1}^{n} (\lambda_i^{(c)})^2 = \operatorname{Tr}[(\boldsymbol{\Pi}_w\boldsymbol{A}\boldsymbol{\Pi}_w)^2]$.
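Lemma E.9 can also be confirmed numerically: for a random PSD matrix and a random rank-$n$ orthogonal projection $\Pi_w = C_w^\intercal C_w$, the coarsened trace never exceeds the original:

```python
import numpy as np

rng = np.random.default_rng(7)

# Lemma E.9: for PSD A and an orthogonal projection Pi_w = C_w^T C_w with
# C_w C_w^T = I_n, Tr(A^2) >= Tr((Pi_w A Pi_w)^2).
N, n = 7, 3
B = rng.standard_normal((N, N))
A = B @ B.T                               # a generic PSD matrix
Q, _ = np.linalg.qr(rng.standard_normal((N, n)))
C_w = Q.T                                 # C_w C_w^T = I_n
Pi_w = C_w.T @ C_w

lhs = np.trace(A @ A)
rhs = np.trace((Pi_w @ A @ Pi_w) @ (Pi_w @ A @ Pi_w))
assert lhs >= rhs - 1e-10
# Matches the proof's reduction: Tr((Pi A Pi)^2) = Tr((C_w A C_w^T)^2).
assert np.isclose(rhs, np.trace((C_w @ A @ C_w.T) @ (C_w @ A @ C_w.T)))
```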
\ No newline at end of file diff --git a/agromovwassersteingeometricviewofspectrumpreservinggraphcoarsening/images.zip b/agromovwassersteingeometricviewofspectrumpreservinggraphcoarsening/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..002cdad3bc3ccbfabacfe33a1646cb6860c65eac --- /dev/null +++ b/agromovwassersteingeometricviewofspectrumpreservinggraphcoarsening/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:8476e1372ff579217442d65136c5811228ae88f35f4197df06974704de6f1b97 +size 1414295 diff --git a/agromovwassersteingeometricviewofspectrumpreservinggraphcoarsening/layout.json b/agromovwassersteingeometricviewofspectrumpreservinggraphcoarsening/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..012caffe8a3885caa8e3f4a850bc0a887885bf67 --- /dev/null +++ b/agromovwassersteingeometricviewofspectrumpreservinggraphcoarsening/layout.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:24a6c07a950a4678021bac775fc24b646dde4e3bb7bd083187f02914909a2014 +size 1313134 diff --git a/agroupsymmetricstochasticdifferentialequationmodelformoleculemultimodalpretraining/0d2260c6-3727-4b94-9bf1-95d5b3b4f02c_content_list.json b/agroupsymmetricstochasticdifferentialequationmodelformoleculemultimodalpretraining/0d2260c6-3727-4b94-9bf1-95d5b3b4f02c_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..2fa639c27d5158fb0761ef6086cb9fbf354d1916 --- /dev/null +++ b/agroupsymmetricstochasticdifferentialequationmodelformoleculemultimodalpretraining/0d2260c6-3727-4b94-9bf1-95d5b3b4f02c_content_list.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:9c5f4de818528b6eb9fffcf14e2a7dc154b8e704dc9b04b862331a6cb3780b93 +size 223911 diff --git a/agroupsymmetricstochasticdifferentialequationmodelformoleculemultimodalpretraining/0d2260c6-3727-4b94-9bf1-95d5b3b4f02c_model.json 
b/agroupsymmetricstochasticdifferentialequationmodelformoleculemultimodalpretraining/0d2260c6-3727-4b94-9bf1-95d5b3b4f02c_model.json new file mode 100644 index 0000000000000000000000000000000000000000..10304967689a646e81af43a62397e519a773904c --- /dev/null +++ b/agroupsymmetricstochasticdifferentialequationmodelformoleculemultimodalpretraining/0d2260c6-3727-4b94-9bf1-95d5b3b4f02c_model.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:b024b31357a8697f70bdb27ad6b594754143c59881c6f0fe0d058b5de0f02d19 +size 260782 diff --git a/agroupsymmetricstochasticdifferentialequationmodelformoleculemultimodalpretraining/0d2260c6-3727-4b94-9bf1-95d5b3b4f02c_origin.pdf b/agroupsymmetricstochasticdifferentialequationmodelformoleculemultimodalpretraining/0d2260c6-3727-4b94-9bf1-95d5b3b4f02c_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..264a2729bb98828fec5ab0fc6d488fa12aa4858e --- /dev/null +++ b/agroupsymmetricstochasticdifferentialequationmodelformoleculemultimodalpretraining/0d2260c6-3727-4b94-9bf1-95d5b3b4f02c_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:564e5e8ab374bbbb7845e763dda883c5f9e11aca1ea4fad34446a42f805df65b +size 2358133 diff --git a/agroupsymmetricstochasticdifferentialequationmodelformoleculemultimodalpretraining/full.md b/agroupsymmetricstochasticdifferentialequationmodelformoleculemultimodalpretraining/full.md new file mode 100644 index 0000000000000000000000000000000000000000..bc0a3d7ddeb13f716ea19235174aa6d2a8f6396d --- /dev/null +++ b/agroupsymmetricstochasticdifferentialequationmodelformoleculemultimodalpretraining/full.md @@ -0,0 +1,987 @@ +# A Group Symmetric Stochastic Differential Equation Model for Molecule Multi-modal Pretraining + +Shengchao Liu $^{*12}$ Weitao Du $^{*3}$ Zhiming Ma $^{3}$ Hongyu Guo $^{4}$ Jian Tang $^{15}$ + +# Abstract + +Molecule pretraining has quickly become the go-to schema to boost the performance of AI-based drug discovery. 
Naturally, molecules can be represented as 2D topological graphs or 3D geometric point clouds. Although most existing pretraining methods focus on merely a single modality, recent research has shown that maximizing the mutual information (MI) between these two modalities enhances the molecule representation ability. Meanwhile, existing molecule multi-modal pretraining approaches approximate MI based on the representation space encoded from the topology and geometry, thus resulting in the loss of critical structural information of molecules. To address this issue, we propose MoleculeSDE. MoleculeSDE leverages group symmetric (e.g., SE(3)-equivariant and reflection-antisymmetric) stochastic differential equation models to generate the 3D geometries from 2D topologies, and vice versa, directly in the input space. It not only obtains a tighter MI bound but also enables a wider range of downstream tasks than previous work. By comparing with 17 pretraining baselines, we empirically verify that MoleculeSDE can learn an expressive representation with state-of-the-art performance on 26 out of 32 downstream tasks. The source codes are available in this repository.

$^{*}$ Equal contribution $^{1}$ Mila - Québec AI Institute, Canada $^{2}$ Université de Montréal, Canada $^{3}$ Chinese Academy of Sciences, China $^{4}$ National Research Council Canada, Canada $^{5}$ HEC Montréal, Canada. Correspondence to: Shengchao Liu , Weitao Du .

Proceedings of the $40^{th}$ International Conference on Machine Learning, Honolulu, Hawaii, USA. PMLR 202, 2023. Copyright 2023 by the author(s).

# 1. Introduction

Artificial intelligence (AI) for drug discovery has recently attracted a surge of research interest in both the machine learning and cheminformatics communities, demonstrating
encouraging outcomes in many challenging drug discovery tasks (Gilmer et al., 2017; Gómez-Bombarelli et al., 2018; Liu et al., 2019; Corso et al., 2020; Jin et al., 2020; Rampášek et al., 2022; Liu et al., 2022; Jumper et al., 2021; Demirel et al., 2021; Hsu et al., 2022; Liu et al., 2023b; Yao et al., 2023). These successes are primarily attributed to the informative representations of molecules.

Molecules can be naturally represented as topological graphs, where atoms and covalent bonds are the nodes and edges. Additionally, molecular structures (a.k.a. conformations) can be treated as 3D geometric graphs, where the atoms are the point clouds in the 3D Euclidean space. Based on these two modalities, numerous representation methods have been proposed in a supervised setting (Gilmer et al., 2017; Thomas et al., 2018). Further, by leveraging a large number of molecule datasets (Irwin et al., 2012; Hu et al., 2020b; Axelrod & Gomez-Bombarelli, 2022; Xu et al., 2021a), molecule pretraining strategies (Hu et al., 2019; Sun et al., 2022; Liu et al., 2022) have proven their effectiveness in learning robust and expressive molecule representations. However, most such pretraining works focus on exploring the 2D topology modality, and typical algorithms include reconstructing the masked substructures (Hu et al., 2019; Liu et al., 2019) and aligning the positive subgraph pairs while contrasting the negative pairs (Sun et al., 2020; You et al., 2020b; Wang et al., 2021). Recently, there have also been successful explorations (Liu et al., 2023a; Jiao et al., 2022) of 3D geometric pretraining, where the key idea is to reconstruct the masked distances or coordinates through a group symmetric reconstruction operation.

Nevertheless, despite its shown potential for forming high-quality representations, multi-modal pretraining over the molecular 2D topologies and 3D conformations has been under-explored.
GraphMVP (Liu et al., 2021a) is the first to build a unified multi-modal self-supervised learning (SSL) paradigm. It introduces contrastive and generative learning to estimate the mutual information (MI) between the two modalities. Contrastive learning treats the topologies and conformations as positive if and only if they refer to the same molecule. It aims to align the positive pairs while pushing apart the negative pairs. Such a contrastive idea has also been studied in (Stärk et al., 2022). More encouragingly, GraphMVP demonstrates the benefits of generative SSL, which aims at reconstructing the conformations from the topologies (or vice versa) by introducing a proxy to the evidence lower bound of MI estimation, i.e., the variational representation reconstruction (VRR). VRR transforms the reconstruction on the data space (i.e., molecular topologies or conformations) to the representation space that compresses input features. Hence, the use of such a proxy can result in the loss of important structural information of molecules.

![](images/53ae78233065781c9176388164f36573b32fc2efefac581f07e013e252eadc70.jpg)
Figure 1. Illustration of the MoleculeSDE pretraining. It is composed of one contrastive learning and two generative learning objectives. Contrastive learning aims to align the 2D topological and 3D conformational representations for the same molecule. The two generative learning objectives are molecule conditional generation from 2D topology to 3D conformation and from 3D conformation to 2D topology, respectively. The generative modeling from topology to conformation is an SE(3)-equivariant diffusion process that satisfies the physical symmetry in molecule geometries. The other direction from conformation to topology is an SE(3)-invariant diffusion process since only the invariant type-0 features (nodes and edges) are considered. We further include a demo, showing the trajectory of this process.

Our Approach.
To address the aforementioned issue, we propose MoleculeSDE, a multi-modal pretraining method on molecules' topologies and conformations. As illustrated in Figure 1, MoleculeSDE contains both contrastive and generative SSLs. The former adopts the EBM-NCE in (Liu et al., 2021a), and the latter leverages the stochastic differential equation (SDE) framework (Song et al., 2020; Du et al., 2022a). Such a design brings in two main benefits. First, for the objective function, the generative loss in GraphMVP is a proxy to the MI, while the diffusion process in MoleculeSDE leads to a more accurate MI estimation with less information loss. We list a brief performance comparison of the two methods in Table 1 and will provide theoretical insights that MoleculeSDE can lead to a more accurate MI estimation. Second, the SDE-based generative SSL enables a rich set of downstream tasks. For example, MoleculeSDE enables conformation generation (CG) on tasks where only 2D topologies are available (Wu et al., 2018). Based on this, we can apply more advanced geometric modeling methods for prediction. As shown in Section 5, such generated conformations lead to improved predictive performance over existing CG methods (Shi et al., 2021; Du et al., 2022b).

Table 1. Downstream tasks' performance comparison with merely generative pretraining. The complete results are in Appendix H.
| Model | Tox21 ↑ | MUV ↑ | Bace ↑ | GAP ↓ | U0 ↓ | Aspirin ↓ |
| --- | --- | --- | --- | --- | --- | --- |
| VRR (GraphMVP) | 73.6 | 75.5 | 72.7 | 44.64 | 13.96 | 1.177 |
| SDE (MoleculeSDE) | 75.6 | 80.1 | 79.0 | 42.75 | 11.85 | 1.087 |
+ +The core components of the proposed MoleculeSDE are the two SDE generative processes. The first SDE aims to convert from topology to conformation. This conversion needs to satisfy the physical nature of molecules: the molecules' physical and chemical attributes need to be equivariant to the rotations and translations in the 3D Euclidean space, i.e., the SE(3)-equivariant and reflection-antisymmetric property (Appendix B), and we use SE(3)-equivariance for short. We note that existing topology to conformation deep generative methods are either SE(3)-invariant (Shi et al., 2021) or not SE(3)-equivariant (Zhu et al., 2022). We propose an SE(3)-equivariant diffusion process by building equivariant local frames. The inputs of local frames are SE(3)-equivariant vector features (e.g., the atom coordinates), and the local frames transform them into three SE(3)-invariant features, which will be transformed back to the equivariant data using tensorization (Du et al., 2022b). The second SDE targets the conformation to topology reconstruction task in the discrete topological space. The main challenge for this task is to adopt the diffusion model for discrete data generation. We follow the recent work on graph diffusion generation (Jo et al., 2022), where the joint generation of atom and bond leads to better estimation. + +To the best of our knowledge, we are the first to build an SE(3)-equivariant and reflection-antisymmetric SDE for the topology to conformation generation and also the first to + +devise an SE(3)-invariant SDE for the conformation to topology generation for representation learning. We also note that our proposed MoleculeSDE is agnostic to the backbone representation methods since the SDE process is disentangled with the representation function, as illustrated in Figure 1. + +Our main contributions include: (1) We propose a group symmetric pretraining method, MoleculeSDE, on the 2D and 3D modalities of molecules. 
(2) We provide theoretical insights on the tighter MI estimation of MoleculeSDE over previous works. (3) We show that MoleculeSDE enables prosperous downstream tasks. (4) We empirically verify that MoleculeSDE retains essential knowledge from both modalities, resulting in state-of-the-art performance on 26 out of 32 downstream tasks compared with 17 competitive baselines. + +# 2. Related Work + +Molecule SSL pretraining on a single modality. The pretraining on 2D molecular topology shares common ideas with the general graph pretraining (Sun et al., 2022; Wang et al., 2022a). One classical approach (Hu et al., 2019; Liu et al., 2019) is to mask certain key substructures of molecular graphs and then perform the reconstruction in an auto-encoding manner. Another prevalent molecule pretraining method is contrastive learning (Oord et al., 2018), where the goal is to align the views from the positive pairs and contrast the views from the negative pairs simultaneously. For example, ContextPred (Hu et al., 2019) constructs views based on different radii of neighborhoods, Deep Graph InfoMax (Veličković et al., 2018) and InfoGraph (Sun et al., 2020) treat the local and global graph representations as the two views, MolCLR (Wang et al., 2022c) and GraphCL (You et al., 2020a) create different views using discrete graph augmentation methods. + +Recent studies start to explore the 3D geometric pretraining on molecules. GeoSSL (Liu et al., 2023a) proposes maximizing the mutual information between noised conformations using an SE(3)-invariant denoising score matching, and a parallel work (Zaidi et al., 2022) is a special case of GeoSSL using only one denoising layer. 3D-EMGP (Jiao et al., 2022) is also a parallel work, but it is E(3)-equivariant, which needlessly satisfies the reflection-equivariant constraint in molecular conformation distribution. + +Molecule SSL pretraining on multiple modalities. 
GraphMVP (Liu et al., 2021a) proposes one contrastive objective (EBM-NCE) and one generative objective (variational representation reconstruction, VRR) to optimize the mutual information between the topological and conformational modalities. Specifically for VRR, it is a proxy loss to the evidence lower bound (ELBO) obtained by doing the reconstruction in the representation space, which may risk losing critical information. 3D InfoMax (Stärk et al., 2022) is a

special case of GraphMVP, where only the contrastive loss is considered. Notice that MoleculeSDE solves the approximation issue in GraphMVP with two SDE processes.

# 3. Preliminaries

2D topological molecular graph. A topological molecular graph is denoted as $g_{2\mathrm{D}} = (X, E)$, where $X$ is the atom attribute matrix and $E$ is the bond attribute matrix. The 2D graph representation with a graph neural network (GNN) is:

$$
\boldsymbol{H}_{\mathrm{2D}} = \operatorname{GNN-2D}\left(T_{\mathrm{2D}}\left(g_{\mathrm{2D}}\right)\right) = \operatorname{GNN-2D}\left(T_{\mathrm{2D}}(\boldsymbol{X}, \boldsymbol{E})\right), \tag{1}
$$

where $T_{2\mathrm{D}}$ is the data transformation on the 2D topology, and GNN-2D is the representation function. $\pmb{H}_{2\mathrm{D}} = [h_{2\mathrm{D}}^{0}, h_{2\mathrm{D}}^{1}, \dots]$, where $h_{2\mathrm{D}}^{i}$ is the $i$-th node representation.

3D conformational molecular graph. The molecular conformation is denoted as $g_{\mathrm{3D}} = (X, R)$, where $R = \{r^1, r^2, \ldots\}$ is the collection of 3D coordinates of atoms. The conformational representation is:

$$
\boldsymbol{H}_{\mathrm{3D}} = \operatorname{GNN-3D}\left(T_{\mathrm{3D}}\left(g_{\mathrm{3D}}\right)\right) = \operatorname{GNN-3D}\left(T_{\mathrm{3D}}(\boldsymbol{X}, \boldsymbol{R})\right), \tag{2}
$$

where $T_{3\mathrm{D}}$ is the data transformation on the 3D geometry, and GNN-3D is the representation function.
$H_{3\mathrm{D}} = [h_{3\mathrm{D}}^{0}, h_{3\mathrm{D}}^{1}, \ldots]$, where $h_{3\mathrm{D}}^{i}$ is the $i$-th node representation. In our approach, we take the masking as the transformation for both the 2D and 3D GNNs, and the masking ratio is $M$. In what follows, we use $x$ and $y$ for the 2D and 3D graphs for notation simplicity, i.e., $x \triangleq g_{2\mathrm{D}}$ and $y \triangleq g_{3\mathrm{D}}$.

SE(3)-Equivariance and Reflection-Antisymmetry. Two 3D geometric graphs $R_{1}$ and $R_{2}$ are SE(3)-isometric if there exists an element $g \in \mathrm{SE}(3)$ such that $R_{2} = gR_{1}$, where $g \in \mathrm{SE}(3)$ is a 3D rotation or translation acting on each node (atom) of $R_{1}$. In this article, we will consider vector-valued functions defined on the 3D molecular graph. Specifically, given a conformation-related function $f(\boldsymbol{r}): \mathbb{R}^{3} \to \mathbb{R}^{3}$ on the graph $R$, we say it is equivariant if

$$
f(g\boldsymbol{r}) = gf(\boldsymbol{r}) \tag{3}
$$

for arbitrary $g \in \mathrm{SE}(3)$. Since different chiralities can lead to different chemical properties (Clayden et al., 2012; Cotton, 1991), the function $f(\boldsymbol{r})$ we consider in this article is SE(3)-equivariant and reflection-antisymmetric.

Stochastic Differential Equation (SDE). The score-based generative modeling with stochastic differential equations (SDEs) (Song et al., 2020) provides a novel and expressive tool for distribution estimation. It is also a unified framework including denoising score matching (Vincent, 2011; Song & Ermon, 2019) and denoising diffusion (Ho et al., 2020). In general, these methods can be split into two processes: the forward and backward processes.
The forward process is a parameter-free diffusion process, and it gradually turns a data point $x$ into random noise by adding noise, as

$$
d\boldsymbol{x}_t = f(\boldsymbol{x}_t, t)\,dt + g(t)\,dw_t, \tag{4}
$$

with $f(\pmb{x}_t, t)$ the vector-valued drift coefficient, $g(t)$ the diffusion coefficient, and $w_t$ the Wiener process. Note that Equation (4) induces a family of densities $\pmb{x}_t \sim p_t(\cdot)$. On the other hand, the backward process generates real data from the stationary distribution of Equation (4) by evolving along the following reverse-time SDE:

$$
d\boldsymbol{x}_t = \left[f(\boldsymbol{x}_t, t) - g(t)^2 \nabla_{\boldsymbol{x}} \log p_t(\boldsymbol{x}_t)\right] dt + g(t)\,d\bar{w}_t, \tag{5}
$$

where $\bar{w}_t$ is a Wiener process with time flowing backwards from $T$ to $0$. Thus, the learning objective is to estimate $\nabla_{\pmb{x}}\log p_t(\pmb{x})$. This derivative term is called the score in the literature of score matching (Vincent, 2011; Song & Ermon, 2019; Song et al., 2020). We will use this framework in MoleculeSDE to generate 3D conformations and 2D topologies.

# 4. The MoleculeSDE Method

In Section 4.1, we first present the mutual information (MI) perspective on molecule multi-modal pretraining, and then we present the limitation of VRR. We discuss two SDE models for generative pretraining in Sections 4.2 and 4.3, respectively. The ultimate learning objective and inference process are illustrated in Section 4.4. Additionally, in Section 4.5, we provide theoretical insights on how MoleculeSDE obtains a more accurate MI estimation.

# 4.1. An Overview from the Mutual Information Perspective

Mutual information (MI) measures the non-linear dependency between random variables, and it has been widely adopted as the principle for self-supervised pretraining (Oord et al., 2018; Hjelm et al., 2018). The expectation is that, by maximizing the MI between modalities, the learned representation can keep the most shared information.
Thus, MI-guided SSL serves as an intuitive and powerful framework for representation pretraining.

GraphMVP (Liu et al., 2021a) transforms the MI maximization into the summation of two conditional log-likelihoods:

$$
\mathcal{L}_{\mathrm{MI}} = \frac{1}{2} \mathbb{E}_{p(\boldsymbol{x}, \boldsymbol{y})} \left[ \log p(\boldsymbol{y} | \boldsymbol{x}) + \log p(\boldsymbol{x} | \boldsymbol{y}) \right]. \tag{6}
$$

GraphMVP (Liu et al., 2021a) further solves Equation (6) by proposing a contrastive loss (EBM-NCE) and a generative loss (variational representation reconstruction, VRR). VRR conducts the reconstruction on the representation space, i.e., from 2D (3D) data to the 3D (2D) representation space (details in Appendix F). The main advantage of VRR is its simple implementation without topology or conformation reconstruction, yet the trade-off is that it can lose critical information since it is only a proxy solution to generative learning.

Thus, to this end, we raise one question: does there exist a more accurate conditional density estimation method for generative pretraining? The answer is yes, and we propose MoleculeSDE. MoleculeSDE utilizes two stochastic differential equation (SDE) models to estimate Equation (6). SDE is a broad generative model class (Karras et al., 2022) where a neural network is used to model the score (Song & Kingma, 2021) of various levels of noise in a diffusion process (Ho et al., 2020). To adapt it in MoleculeSDE, we propose an SDE from 2D topology to 3D conformation (Section 4.2) and an SDE from 3D conformation to 2D topology (Section 4.3). We also want to highlight that such reconstructions are challenging, as both the 2D topologies and 3D conformations are highly structured: the 2D topologies are permutation invariant, and the 3D conformations additionally obey the SE(3)-equivariance.

MoleculeSDE has three main advantages.
(1) MoleculeSDE is a powerful generative pretraining method, as the SDE models have shown promising performance in applications including image generation (Ho et al., 2020; Song et al., 2020) and geometric representation (Liu et al., 2023a; Zaidi et al., 2022). (2) MoleculeSDE is a more accurate estimate of Equation (6) than previous methods. Thus, the pretrained representation contains more critical information owing to the more accurate MI estimation. (3) MoleculeSDE enables a rich set of downstream tasks, such as topology to conformation generation for property prediction (Section 5.4).

# 4.2. An SE(3)-Equivariant Conformation Generation

The first objective we consider is the conditional generation from topology to conformation, $p(\pmb{y}|\pmb{x})$. One thing to highlight is that the molecule 3D conformation needs to satisfy the physical property, i.e., it needs to be equivariant to the rotations and translations in the 3D Euclidean space, which is known as the $SE(3)$-equivariance and reflection-antisymmetry property (Appendix B). Notice that for notation simplicity, we may call it $SE(3)$-equivariance, and only expand into details in the "Local frame" paragraph below.

The core module in SDE is the score network, $S_{\theta}^{2D \to 3D}$. To satisfy the physical nature of molecule 3D structure, such a score network needs to be SE(3)-equivariant. Specifically, the input includes the 2D graph $\mathbf{x}$, the noised 3D information $\mathbf{y}_t$ at time $t$, and the time $t$. The output is the SE(3)-equivariant 3D scores at time $t$, accordingly. The goal is to use $S_{\theta}^{2D \to 3D}$ to estimate the score $\nabla \log p_t(\mathbf{y}_t | \mathbf{x})$.

To learn $p(\boldsymbol{y}|\boldsymbol{x})$, we formulate it as solving an SDE problem.
Then based on the score network, the training objective is:

$$
\mathcal{L}^{\mathrm{2D} \rightarrow \mathrm{3D}} = \mathbb{E}_{\boldsymbol{x}, \boldsymbol{y}} \mathbb{E}_{t} \mathbb{E}_{\boldsymbol{y}_t | \boldsymbol{y}} \left[ \left\| \nabla_{\boldsymbol{y}_t} \log p_t(\boldsymbol{y}_t | \boldsymbol{y}, \boldsymbol{x}) - S_{\theta}^{\mathrm{2D} \rightarrow \mathrm{3D}}(\boldsymbol{x}, \boldsymbol{y}_t, t) \right\|_2^2 \right]. \tag{7}
$$

Coordinate reconstruction. Notice that both 2D topologies and 3D conformations share the same atom information (atom types), so in this subsection specifically, by reconstructing $y$, we are referring to reconstructing the coordinates $R$. Thus, the objective function becomes:

$$
\mathcal{L}^{\mathrm{2D} \rightarrow \mathrm{3D}} = \mathbb{E}_{\boldsymbol{x}, \boldsymbol{R}} \mathbb{E}_{t} \mathbb{E}_{\boldsymbol{R}_t | \boldsymbol{R}} \left[ \left\| \nabla_{\boldsymbol{R}_t} \log p_t(\boldsymbol{R}_t | \boldsymbol{R}, \boldsymbol{x}) - S_{\theta}^{\mathrm{2D} \rightarrow \mathrm{3D}}(\boldsymbol{x}, \boldsymbol{R}_t, t) \right\|_2^2 \right]. \tag{8}
$$

Local frame. Before going into details of the score network, we want to introduce the SE(3)-equivariant & reflection-antisymmetric local frame (Appendix B). Such equivariant frames are introduced to fill in the gap between invariant 2D features and the output SE(3) vector field. A frame is equivalent to a 3D coordinate system that transforms equivariantly with the geometric graph. Through equivariant frames, we can project noised 3D coordinates into invariant scalars (isomers are projected differently), such that they are ready to be combined with invariant 2D features.
On the other hand, by projecting back the 2D graph's invariant predictions into an equivariant frame, our final output can transform equivariantly with respect to global rotation and translation. We leave the precise formulations of our local frames in Appendix B. Briefly, in MoleculeSDE, we focus on the equivariant frame attached to each edge $(\boldsymbol{r}^i,\boldsymbol{r}^j)$:

$$
\boldsymbol{t}_{\text{frame}}^{ij} = \operatorname{Local-Frame}\left(\boldsymbol{r}^{i}, \boldsymbol{r}^{j}\right). \tag{9}
$$

SE(3)-equivariant score network. Then we introduce how to build an SE(3)-equivariant score network based on the local frame. We first concatenate the atom representations $h_{2\mathrm{D}}$ into atom-pairwise representations, as $e_{2\mathrm{D}}^{ij} = \mathrm{MLP}(\mathrm{concat}\{h_{2\mathrm{D}}^i || h_{2\mathrm{D}}^j\})$ for the $i$-th and $j$-th atoms. Then the 2D pairwise representations are further added to the 3D pairwise representations $e_{3\mathrm{D}}^{ij} = \mathrm{projection}_{t_{\mathrm{frame}}^{ij}}(\boldsymbol{r}^i,\boldsymbol{r}^j)$, produced by the equivariant frames. Based on such frame features, the final invariant edge feature $e^{ij}$ is defined by:

$$
e^{ij} = \operatorname{rbf}\left(r^{ij}\right) \odot e_{\mathrm{2D}}^{ij} + e_{\mathrm{3D}}^{ij}, \tag{10}
$$

where $r^{ij}$ denotes the relative distance between the $i$-th and $j$-th atoms, and we use the radial basis function (RBF) to embed such distance features. Note that the input 3D coordinates and the corresponding distance matrix $\{r^{ij}\}$ are based on the diffused positions at a given diffusion step, rather than the ground truth 3D conformation.

Then we process $e^{ij}$ through multiple graph attention layers (Shi et al., 2020): $h^{ij} = \text{Attention}(e^{ij})$.
Finally, by pairing the invariant aggregated edge features $h^{ij}$ with our SE(3)-equivariant frames $t_{\text{frame}}^{ij}$, we get the vector-valued score function: $S(\boldsymbol{r}^i) = \sum_j h^{ij} \odot t_{\text{frame}}^{ij}$. Here, our equivariant construction guarantees that the output vector field is SE(3)-equivariant and reflection-antisymmetric.

Discussions. We want to clarify the following points regarding the molecule geometric modeling ($h_{3\mathrm{D}}$) and the score network ($S_{\theta}^{2\mathrm{D}\rightarrow 3\mathrm{D}}$). (1) The score network proposed here is SE(3)-equivariant and reflection-antisymmetric. The input to the score network is the topology and diffused conformation, so it cannot be shared with the molecule geometric modeling, where the input is the ground-truth 3D conformation. (2) To the best of our knowledge, we are the first to propose the SE(3)-equivariant and reflection-antisymmetric SDE for the topology to conformation generation task.

# 4.3. An SE(3)-Invariant Topology Generation

The second objective is to reconstruct the 2D topology from the 3D conformation, i.e., $p(\boldsymbol{x}|\boldsymbol{y})$. Note that the 2D topology information (atoms and bonds) consists of type-0 features (Thomas et al., 2018); thus such a generative process should satisfy the SE(3)-invariance property. If we formulate it as an SDE problem, then the training objective is:

$$
\mathcal{L}^{\mathrm{3D} \rightarrow \mathrm{2D}} = \mathbb{E}_{\boldsymbol{y}, \boldsymbol{x}} \mathbb{E}_{t} \mathbb{E}_{\boldsymbol{x}_t | \boldsymbol{x}} \left[ \left\| \nabla_{\boldsymbol{x}_t} \log p_t(\boldsymbol{x}_t | \boldsymbol{x}, \boldsymbol{y}) - S_{\theta}^{\mathrm{3D} \rightarrow \mathrm{2D}}(\boldsymbol{y}, \boldsymbol{x}_t, t) \right\|_2^2 \right], \tag{11}
$$

where $S_{\theta}^{3\mathrm{D}\rightarrow 2\mathrm{D}}$ is the score network.
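Returning briefly to the local frames of Equation (9): a minimal cross-product construction in the spirit of Du et al. (2022b) makes the equivariance concrete. This is an assumed simplification, not the paper's exact frame (which is given in Appendix B); it assumes coordinates centered at the molecule's centroid so that translations are already factored out.

```python
import numpy as np

def local_frame(r_i, r_j):
    """Orthonormal local frame for an edge (r_i, r_j): a cross-product
    construction assuming centroid-centered coordinates."""
    a = (r_i - r_j) / np.linalg.norm(r_i - r_j)
    b = np.cross(r_i, r_j)
    b = b / np.linalg.norm(b)
    c = np.cross(a, b)
    return np.stack([a, b, c], axis=1)  # columns form an orthonormal frame

# Equivariance check: rotating the inputs rotates the frame.
rng = np.random.default_rng(0)
r_i, r_j = rng.normal(size=3), rng.normal(size=3)
Q, _ = np.linalg.qr(rng.normal(size=(3, 3)))
if np.linalg.det(Q) < 0:
    Q[:, 0] = -Q[:, 0]  # restrict to a proper rotation (det = +1)
assert np.allclose(local_frame(Q @ r_i, Q @ r_j), Q @ local_frame(r_i, r_j))
```

Under a reflection (det = -1), the cross-product axis `b` flips sign relative to the reflected inputs, which is the source of the reflection-antisymmetry discussed above.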
SE(3)-invariant score network. The score network $S_{\theta}^{3D \to 2D}$ needs to satisfy the SE(3)-invariance property. The inputs are the 3D conformational representation $\pmb{y}$, the noised 2D information $\pmb{x}_t$ at time $t$, and the time $t$. The output of $S_{\theta}^{3D \to 2D}$ is the invariant 2D score function at time $t$, as $\nabla \log p_t(\pmb{x}_t|\pmb{y})$. As introduced in Section 3, the diffused 2D information contains two parts, $\pmb{x}_t = (\pmb{X}_t, \pmb{E}_t)$, so the corresponding forward SDE is a joint variant of Equation (4):

$$
\left\{ \begin{array}{l} \mathrm{d}\boldsymbol{X}_t = f_{1,t}\left(\boldsymbol{X}_t, \boldsymbol{E}_t\right) dt + g_1(t)\,\mathrm{d}w_t^1, \\ \mathrm{d}\boldsymbol{E}_t = f_{2,t}\left(\boldsymbol{X}_t, \boldsymbol{E}_t\right) dt + g_2(t)\,\mathrm{d}w_t^2, \end{array} \right. \tag{12}
$$

where $w_{t}^{1}$ and $w_{t}^{2}$ are two independent Brownian motions. Then the score network $S_{\theta}^{3\mathrm{D}\rightarrow 2\mathrm{D}}$ is also decomposed into two parts for the atoms and bonds: $S_{\theta}^{\boldsymbol{X}_t}(\boldsymbol{x}_t)$ and $S_{\theta}^{\boldsymbol{E}_t}(\boldsymbol{x}_t)$.

Similar to the topology to conformation generation procedure, we first merge the 3D representation $\boldsymbol{H}_y$ with the diffused atom feature $\mathbf{X}_t$ as $H_0 = \mathrm{MLP}(\mathbf{X}_t) + \boldsymbol{H}_y$. Then we apply a GCN as the score network to estimate the node-level score, as $S_{\theta}^{\mathbf{X}_t}(\mathbf{x}_t) = \mathrm{MLP}(\mathrm{concat}\{H_0||\dots ||H_L\})$, where $H_{i+1} = \mathrm{GCN}(H_i, \mathbf{E}_t)$ and $L$ is the number of GCN layers. On the other hand, the edge-level score is modeled by an unnormalized dot-product attention $S_{\theta}^{\mathbf{E}_t}(\mathbf{x}_t) = \mathrm{MLP}(\{\mathrm{Attention}(H_i)\}_{0 \leq i \leq L})$.

# 4.4. Learning and Inference of MoleculeSDE

Learning.
In addition to the two generative objectives, we also consider a contrastive loss, EBM-NCE (Liu et al., 2021a). EBM-NCE can be viewed as another way to approximate the mutual information $I(X;Y)$, and it is expected to be complementary to the generative SSL. Therefore, our final objective is

$$
\mathcal{L}_{\text{MoleculeSDE}} = \alpha_1 \mathcal{L}_{\text{Contrastive}} + \alpha_2 \mathcal{L}^{2\mathrm{D}\rightarrow 3\mathrm{D}} + \alpha_3 \mathcal{L}^{3\mathrm{D}\rightarrow 2\mathrm{D}}, \tag{13}
$$

where $\alpha_1, \alpha_2, \alpha_3$ are three coefficient hyperparameters.

Inference. After training the SDE model from 2D topologies to 3D conformations, we can generate 3D conformations from a 2D topology by 'reversing' the forward SDE. More precisely, we adopt the Predictor-Corrector sampling method (Song et al., 2020), tailored to our continuous framework. Further, generating 3D conformations from 2D topologies enables versatile downstream tasks, such as property prediction with both 2D and 3D data.

# 4.5. Theoretical Insights of MoleculeSDE

Since the two terms in Equation (6) are in mirroring directions, here we take $\boldsymbol{x}|\boldsymbol{y}$ for the theoretical illustration; the other direction can be obtained similarly. We adapt the continuous diffusion framework proposed in (Song et al., 2020), in which the Markov chain denoising generative model (DDPM) is included as a discretization of the continuous Ornstein-Uhlenbeck process. The diffusion framework originates from the noised score-matching scheme (Vincent, 2011) for training energy-based models (EBMs), in which the authors introduced a noised version of $p(\boldsymbol{y})$ by adding noise to each data point: $\tilde{\boldsymbol{y}} = \boldsymbol{y} + \epsilon$, where $\epsilon$ is sampled from a scaled normal distribution.
Then,

$$
\begin{array}{l} \min_{\theta} \mathbb{E}_{p(\tilde{\boldsymbol{y}})} \left\| \nabla_{\tilde{\boldsymbol{y}}} \log p(\tilde{\boldsymbol{y}}|\boldsymbol{x}) - \nabla_{\tilde{\boldsymbol{y}}} \log p_{\theta}(\tilde{\boldsymbol{y}}|\boldsymbol{x}) \right\|_2^2 \tag{14} \\ = \min_{\theta} \mathbb{E}_{p(\tilde{\boldsymbol{y}}, \boldsymbol{y})} \left\| \nabla_{\tilde{\boldsymbol{y}}} \log p(\tilde{\boldsymbol{y}}|\boldsymbol{y}, \boldsymbol{x}) - \nabla_{\tilde{\boldsymbol{y}}} \log p_{\theta}(\tilde{\boldsymbol{y}}|\boldsymbol{x}) \right\|_2^2 + C, \\ \end{array}
$$

where the conditional score function $\nabla_{\tilde{\boldsymbol{y}}} \log p(\tilde{\boldsymbol{y}}|\boldsymbol{y},\boldsymbol{x})$ is analytically tractable. The diffusion generative model further extends the one-step noising in (14) to a continuous noising process from the raw data $\boldsymbol{y}$ to $\boldsymbol{y}_t$ for $0 \leq t \leq T$. We call $\boldsymbol{y}_t$ the noising (forward) diffusion process starting at $\boldsymbol{y}$, which is usually formulated as the solution of a stochastic differential equation. The corresponding (continuous) score-matching loss is:

$$
\min_{\theta} \mathbb{E}_{t} \mathbb{E}_{\boldsymbol{y}} \mathbb{E}_{\boldsymbol{y}_t|\boldsymbol{y}} \left\| \nabla_{\boldsymbol{y}_t} \log p_t(\boldsymbol{y}_t|\boldsymbol{y},\boldsymbol{x}) - \nabla_{\boldsymbol{y}_t} \log p_{\theta,t}(\boldsymbol{y}_t|\boldsymbol{x}) \right\|_2^2. \tag{15}
$$

It is worth mentioning that the weighted continuous score matching is equivalent to learning the infinitesimal reversal of the noising process from $t$ to $t + \Delta t$ at each time $t$, which greatly reduces the difficulty of recovering $p_{\text{data}}$ from white noise in one shot (Ho et al., 2020).

To make a connection between Equation (15) and Equation (6), it is crucial to relate score matching to the maximum-likelihood method.
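The tractability of the conditional score is what makes the objective in (14) practical: for Gaussian noise $\tilde{\boldsymbol{y}} = \boldsymbol{y} + \epsilon$ with $\epsilon \sim \mathcal{N}(0, \sigma^2 I)$, it has the closed form $\nabla_{\tilde{\boldsymbol{y}}} \log p(\tilde{\boldsymbol{y}}|\boldsymbol{y}) = (\boldsymbol{y} - \tilde{\boldsymbol{y}})/\sigma^2$. A quick numerical check of this formula against a finite-difference gradient of the Gaussian log-density (a toy sketch, independent of the paper's implementation):

```python
import numpy as np

def log_gauss(y_tilde, y, sigma):
    # log N(y_tilde; y, sigma^2 I), dropping the normalization constant.
    return -np.sum((y_tilde - y) ** 2) / (2 * sigma ** 2)

def conditional_score(y_tilde, y, sigma):
    # Closed-form grad_{y_tilde} log p(y_tilde | y) for Gaussian noise.
    return (y - y_tilde) / sigma ** 2

rng = np.random.default_rng(1)
y = rng.standard_normal(3)
sigma = 0.5
y_tilde = y + sigma * rng.standard_normal(3)

# Central finite differences, coordinate by coordinate.
h = 1e-5
fd = np.array([
    (log_gauss(y_tilde + h * e, y, sigma) - log_gauss(y_tilde - h * e, y, sigma)) / (2 * h)
    for e in np.eye(3)
])
err = np.max(np.abs(fd - conditional_score(y_tilde, y, sigma)))
print(err)  # tiny: the two gradients agree
```

Because this target is available in closed form, the expectation in (14) can be minimized by plain regression without ever evaluating the intractable marginal score.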
To address this, Huang et al. (2021) defined a key quantity, the ELBO $\mathcal{E}_{\theta}^{\infty}(\boldsymbol{y}|\boldsymbol{x})$, as a functional on the infinite-dimensional path space (consisting of all stochastic paths starting at $\boldsymbol{y}$). The authors then show that

$$
\mathbb{E}_{p(\boldsymbol{y})} \log p_{\theta,T}(\boldsymbol{y}|\boldsymbol{x}) \geq \mathbb{E}_{p(\boldsymbol{y})} \mathcal{E}_{\theta}^{\infty}(\boldsymbol{y}|\boldsymbol{x}), \tag{16}
$$

where the probability $p_{\theta,T}(\boldsymbol{y}|\boldsymbol{x})$ corresponds to the marginal distribution of a parameterized (denoted by $\theta$) SDE at time $T$. Moreover, the ELBO $\mathbb{E}_{p(\boldsymbol{y})} \mathcal{E}_{\theta}^{\infty}(\boldsymbol{y}|\boldsymbol{x})$ is equivalent to the score-matching loss. Therefore, training the diffusion model is equivalent to maximizing a lower bound of the likelihood defined by SDEs. Since the variational capacity of the infinite-dimensional SDE space is larger than that of previous models, we expect to obtain a better estimation of Equation (6).

# 5. Experiments

MoleculeSDE yields pretrained 2D and 3D representations that can be fine-tuned toward downstream tasks. Meanwhile, another main advantage of MoleculeSDE is that it also learns an SDE model from topology to conformation. This design enables more versatile downstream tasks. For instance, a wide range of molecular property prediction tasks (Wu et al., 2018) consider only the 2D topology, yet the 3D conformation has proven beneficial for such tasks (Liu et al., 2021a). Thus, with the pretrained generative model $p(\boldsymbol{y}|\boldsymbol{x})$, we can generate the corresponding 3D structure for each molecular topology and apply the pretrained 3D encoders, which is expected to further improve performance.
A visual illustration of these three categories of downstream tasks is in Figure 2.

# 5.1. Pretraining and Baselines

Dataset. For pretraining, we use PCQM4Mv2 (Hu et al., 2020a). It is a sub-dataset of PubChemQC (Nakata & Shimazaki, 2017) with 3.4 million molecules, each with both the topological graph and the geometric conformation. We are aware of the Molecule3D dataset (Xu et al., 2021b), which is also extracted from PubChemQC (Nakata & Shimazaki, 2017). Yet, after confirming with the authors, certain mismatches exist between its 2D topologies and 3D conformations. Thus, in this work, we use PCQM4Mv2 for pretraining.

Baselines for 2D topology pretraining. Numerous 2D topological pretraining methods have been proposed (Liu et al., 2021c; Xie et al., 2021; Wu et al., 2021; Liu et al., 2021b). Recent work (Sun et al., 2022) re-explores the effects of these pretraining methods, and we select the most promising ones as follows: AttrMask (Hu et al., 2019; Liu et al., 2019), ContextPred (Hu et al., 2019), Deep Graph Infomax (Veličković et al., 2018) and InfoGraph (Sun et al., 2020), MolCLR (Wang et al., 2022c), and GraphCL (You et al., 2020a). Detailed explanations are in Section 2.

Baselines for 3D conformation pretraining. The 3D conformation SSL pretraining has been less explored. We adopt the comprehensive baselines from (Liu et al., 2023a). Type prediction, distance prediction, and angle prediction predict the masked atom type, pairwise distance, and triplet angle, respectively. The 3D InfoGraph predicts whether the node- and graph-level 3D representations are for the same molecule. RR, InfoNCE, and EBM-NCE are to max-

Table 2. Results for molecular property prediction tasks (with 2D topology only). For each downstream task, we report the mean (and standard deviation) ROC-AUC of 3 seeds with scaffold splitting. The best and second-best results are marked in bold and underlined, respectively.
| Pre-training | BBBP ↑ | Tox21 ↑ | ToxCast ↑ | Sider ↑ | ClinTox ↑ | MUV ↑ | HIV ↑ | Bace ↑ | Avg ↑ |
|---|---|---|---|---|---|---|---|---|---|
| – (random init) | 68.1±0.59 | 75.3±0.22 | 62.1±0.19 | 57.0±1.33 | 83.7±2.93 | 74.6±2.35 | 75.2±0.70 | 76.7±2.51 | 71.60 |
| AttrMask | 65.0±2.36 | 74.8±0.25 | 62.9±0.11 | 61.2±0.12 | 87.7±1.19 | 73.4±2.02 | 76.8±0.53 | 79.7±0.33 | 72.68 |
| ContextPred | 65.7±0.62 | 74.2±0.06 | 62.5±0.31 | 62.2±0.59 | 77.2±0.88 | 75.3±1.57 | 77.1±0.86 | 76.0±2.08 | 71.28 |
| InfoGraph | 67.5±0.11 | 73.2±0.43 | 63.7±0.50 | 59.9±0.30 | 76.5±1.07 | 74.1±0.74 | 75.1±0.99 | 77.8±0.88 | 70.96 |
| MolCLR | 66.6±1.89 | 73.0±0.16 | 62.9±0.38 | 57.5±1.77 | 86.1±0.95 | 72.5±2.38 | 76.2±1.51 | 71.5±3.17 | 70.79 |
| 3D InfoMax | 68.3±1.12 | 76.1±0.18 | 64.8±0.25 | 60.6±0.78 | 79.9±3.49 | 74.4±2.45 | 75.9±0.59 | 79.7±1.54 | 72.47 |
| GraphMVP | 69.4±0.21 | 76.2±0.38 | 64.5±0.20 | 60.5±0.25 | 86.5±1.70 | 76.2±2.28 | 76.2±0.81 | 79.8±0.74 | 73.66 |
| MoleculeSDE (VE) | 73.2±0.48 | 76.5±0.33 | 65.2±0.31 | 59.6±0.82 | 86.6±3.73 | 79.9±0.19 | 78.5±0.28 | 80.4±0.92 | 74.98 |
| MoleculeSDE (VP) | 71.8±0.76 | 76.8±0.34 | 65.0±0.26 | 60.8±0.39 | 87.0±0.53 | 80.9±0.37 | 78.8±0.92 | 79.5±2.17 | 75.07 |
Table 3. Results on 12 quantum mechanics prediction tasks from QM9. We take 110K molecules for training, 10K for validation, and 11K for testing. The evaluation metric is the mean absolute error, and the best and second-best results are marked in bold and underlined, respectively.
| Pretraining | α ↓ | ∇E ↓ | E_HOMO ↓ | E_LUMO ↓ | μ ↓ | C_v ↓ | G ↓ | H ↓ | R² ↓ | U ↓ | U_0 ↓ | ZPVE ↓ |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| – (random init) | 0.060 | 44.13 | 27.64 | 22.55 | 0.028 | 0.031 | 14.19 | 14.05 | 0.133 | 13.93 | 13.27 | 1.749 |
| Type Prediction | 0.073 | 45.38 | 28.76 | 24.83 | 0.036 | 0.032 | 16.66 | 16.28 | 0.275 | 15.56 | 14.66 | 2.094 |
| Distance Prediction | 0.065 | 45.87 | 27.61 | 23.34 | 0.031 | 0.033 | 14.83 | 15.81 | 0.248 | 15.07 | 15.01 | 1.837 |
| Angle Prediction | 0.066 | 48.45 | 29.02 | 24.40 | 0.034 | 0.031 | 14.13 | 13.77 | 0.214 | 13.50 | 13.47 | 1.861 |
| 3D InfoGraph | 0.062 | 45.96 | 29.29 | 24.60 | 0.028 | 0.030 | 13.93 | 13.97 | 0.133 | 13.55 | 13.47 | 1.644 |
| RR | 0.060 | 43.71 | 27.71 | 22.84 | 0.028 | 0.031 | 14.54 | 13.70 | 0.122 | 13.81 | 13.75 | 1.694 |
| InfoNCE | 0.061 | 44.38 | 27.67 | 22.85 | 0.027 | 0.030 | 13.38 | 13.36 | 0.116 | 13.05 | 13.00 | 1.643 |
| EBM-NCE | 0.057 | 43.75 | 27.05 | 22.75 | 0.028 | 0.030 | 12.87 | 12.65 | 0.123 | 13.44 | 12.64 | 1.652 |
| 3D InfoMax | 0.057 | 42.09 | 25.90 | 21.60 | 0.028 | 0.030 | 13.73 | 13.62 | 0.141 | 13.81 | 13.30 | 1.670 |
| GraphMVP | 0.056 | 41.99 | 25.75 | 21.58 | 0.027 | 0.029 | 13.43 | 13.31 | 0.136 | 13.03 | 13.07 | 1.609 |
| GeoSSL-1L | 0.058 | 42.64 | 26.32 | 21.87 | 0.028 | 0.030 | 12.61 | 12.81 | 0.173 | 12.45 | 12.12 | 1.696 |
| GeoSSL | 0.056 | 42.29 | 25.61 | 21.88 | 0.027 | 0.029 | 11.54 | 11.14 | 0.168 | 11.06 | 10.96 | 1.660 |
| MoleculeSDE (VE) | 0.056 | 41.84 | 25.79 | 21.63 | 0.027 | 0.029 | 11.47 | 10.71 | 0.233 | 11.04 | 10.95 | 1.474 |
| MoleculeSDE (VP) | 0.054 | 41.77 | 25.74 | 21.41 | 0.026 | 0.028 | 13.07 | 12.05 | 0.151 | 12.54 | 12.04 | 1.587 |
imize the MI between the conformation and the augmented conformation using different objective functions. GeoSSL optimizes the same objective using denoising score matching. Another work (Zaidi et al., 2022) is a special case of GeoSSL with one layer of denoising, which we name GeoSSL-1L.

Baselines for 2D-3D multi-modality pretraining. There are two baselines for 2D-3D multi-modal pretraining: vanilla GraphMVP (Liu et al., 2021a) utilizes both contrastive and generative SSL, and 3D InfoMax (Stärk et al., 2022) only uses the contrastive learning part of GraphMVP.

Backbone models and MoleculeSDE. For all the baselines and MoleculeSDE, we use the same backbone models to better verify the effectiveness of the pretraining algorithms. We take the GIN model (Xu et al., 2018) and the SchNet model (Schütt et al., 2018) for modeling the 2D topology and the 3D conformation, respectively. For MoleculeSDE training, we consider both the Variance Exploding (VE) and Variance Preserving (VP) SDEs (details in Appendix E).

# 5.2. Downstream with 2D Topology

We consider eight binary classification tasks from MoleculeNet (Wu et al., 2018). The results are in Table 2. We observe that MoleculeSDE works best on 6 out of 8 tasks, and both the VE and VP versions of MoleculeSDE pretraining reach the best average performance.

# 5.3. Downstream with 3D Conformation

We consider 12 tasks from QM9 (Ramakrishnan et al., 2014) and 8 tasks from MD17 (Chmiela et al., 2017). QM9 is a dataset of 134K molecules with up to 9 heavy atoms, and the 12 tasks concern quantum properties, such as energies under various settings. MD17 is a dataset of molecular dynamics simulations, and its 8 tasks correspond to 8 organic molecules; the goal is to predict the forces at different 3D positions. The results are in Tables 3 and 4: MoleculeSDE reaches the best performance on 9 tasks in QM9 and 7 tasks in MD17.

# 5.4.
Downstream with Topology to Conformation

We note that pretraining in MoleculeSDE does two things: representation learning and topology-to-conformation generation. The conformation generation pretraining enables a broader set of downstream tasks. Here we consider 4 MoleculeNet tasks where only the 2D molecular graphs are available. We then apply the SE(3)-equivariant conformation generation in MoleculeSDE, after which a 3D GNN is trained for property prediction (Figure 2).

Table 4. Results on eight force prediction tasks from MD17. We take 1K molecules for training, 1K for validation, and 48K to 991K for testing, depending on the task. The evaluation metric is the mean absolute error, and the best and second-best results are marked in bold and underlined, respectively.
| Pretraining | Aspirin ↓ | Benzene ↓ | Ethanol ↓ | Malonaldehyde ↓ | Naphthalene ↓ | Salicylic ↓ | Toluene ↓ | Uracil ↓ |
|---|---|---|---|---|---|---|---|---|
| – (random init) | 1.203 | 0.380 | 0.386 | 0.794 | 0.587 | 0.826 | 0.568 | 0.773 |
| Type Prediction | 1.383 | 0.402 | 0.450 | 0.879 | 0.622 | 1.028 | 0.662 | 0.840 |
| Distance Prediction | 1.427 | 0.396 | 0.434 | 0.818 | 0.793 | 0.952 | 0.509 | 1.567 |
| Angle Prediction | 1.542 | 0.447 | 0.669 | 1.022 | 0.680 | 1.032 | 0.623 | 0.768 |
| 3D InfoGraph | 1.610 | 0.415 | 0.560 | 0.900 | 0.788 | 1.278 | 0.768 | 1.110 |
| RR | 1.215 | 0.393 | 0.514 | 1.092 | 0.596 | 0.847 | 0.570 | 0.711 |
| InfoNCE | 1.132 | 0.395 | 0.466 | 0.888 | 0.542 | 0.831 | 0.554 | 0.664 |
| EBM-NCE | 1.251 | 0.373 | 0.457 | 0.829 | 0.512 | 0.990 | 0.560 | 0.742 |
| 3D InfoMax | 1.142 | 0.388 | 0.469 | 0.731 | 0.785 | 0.798 | 0.516 | 0.640 |
| GraphMVP | 1.126 | 0.377 | 0.430 | 0.726 | 0.498 | 0.740 | 0.508 | 0.620 |
| GeoSSL-1L | 1.364 | 0.391 | 0.432 | 0.830 | 0.599 | 0.817 | 0.628 | 0.607 |
| GeoSSL | 1.107 | 0.360 | 0.357 | 0.737 | 0.568 | 0.902 | 0.484 | 0.502 |
| MoleculeSDE (VE) | 1.112 | 0.304 | 0.282 | 0.520 | 0.455 | 0.725 | 0.515 | 0.447 |
| MoleculeSDE (VP) | 1.244 | 0.315 | 0.338 | 0.488 | 0.432 | 0.712 | 0.478 | 0.468 |
![](images/b1739ae2d5c500166f5eff1592234cf895b05ff300cd56c18d6a54f8f04bb91d.jpg)
(1) Downstream w/ Topology

![](images/71b92134f520f0f4b7daafba97b81019d3bcc65cc397955fd056a7cca47bfbed.jpg)
(2) Downstream w/ Conformation

![](images/896eae4f7d026e317a14734ac2a28a9b773063e729584f8ee426b1ec4f595209.jpg)
(3) Downstream w/ Topology to Conformation

Figure 2. Illustration of the three downstream tasks. The first two cover single-modal information only, and we fine-tune the pretrained 2D and 3D GNNs from MoleculeSDE, respectively. The last downstream task contains topology only; we use the pretrained 2D GNN and the SE(3)-equivariant SDE model to generate conformations, followed by a 3D GNN for property prediction.

Table 5. Results for molecular property prediction with SchNet as the backbone. The geometric structures (conformers) are generated using different conformation generation (CG) methods. We report the mean (and standard deviation) ROC-AUC of 3 seeds with scaffold splitting for each downstream task.
| Model | CG Method | BBBP ↑ | Sider ↑ | ClinTox ↑ | Bace ↑ |
|---|---|---|---|---|---|
| GIN | – | 64.1±1.79 | 58.4±0.50 | 63.1±7.21 | 76.5±2.96 |
| SchNet | MMFF | 61.4±0.29 | 59.4±0.27 | 64.6±0.50 | 74.3±0.66 |
| SchNet | ConfGF | 62.7±1.97 | 60.1±0.87 | 64.1±2.83 | 73.2±3.53 |
| SchNet | ClofNet | 61.7±1.19 | 56.0±0.10 | 58.2±0.44 | 62.5±0.17 |
| SchNet | MoleculeSDE | 65.2±0.43 | 60.5±0.39 | 72.9±1.02 | 78.6±0.40 |
We consider three classic and state-of-the-art topology-to-conformation generation baselines. The Merck molecular force field (MMFF) (Halgren, 1996) is a heuristic method using physical simulation. ConfGF (Shi et al., 2021) is an SE(3)-invariant conformation generation method using score matching, and ClofNet (Du et al., 2022b) is a state-of-the-art SE(3)-equivariant conformation generation method using a GNN. With the generated conformations, we apply a SchNet model (without pretraining) for property prediction. We also add a 2D GNN baseline with the atom and bond type information. As shown in Table 5, the CG baselines perform worse than the 2D GNN for property prediction, yet MoleculeSDE beats both the 2D and CG baselines. More discussions are in Appendix H.

# 5.5. Discussion on MoleculeSDE

For pretraining, data reconstruction is stronger than latent representation reconstruction. Starting from BYOL (Grill et al., 2020) and SimSiam (Chen & He, 2021), non-contrastive SSL methods have been widely explored. GraphMVP (Liu et al., 2021a) summarizes that these methods essentially reconstruct the latent representation space. Our proposed MoleculeSDE further shows that directly applying data reconstruction is superior on graph data. This observation also aligns well with findings in the vision domain (He et al., 2022). Complete results can be found in Appendix H.

# 6. Conclusion and Outlook

We proposed MoleculeSDE, a group-symmetric pretraining method on the 2D topology and 3D geometry modalities of molecules. MoleculeSDE introduces the first SE(3)-equivariant and reflection-antisymmetric SDE for topology-to-conformation generation, and also the first SE(3)-invariant SDE for conformation-to-topology generation, for molecule representation learning. We provide theoretical insights that MoleculeSDE obtains a tighter MI estimation than previous works.
We also empirically verified that MoleculeSDE retains essential knowledge from both modalities, resulting in state-of-the-art performance on 26 out of 32 tasks compared to 17 competitive baselines.

We note that multi-modal pretraining has been widely explored in drug discovery, not only between topologies and conformations (i.e., the chemical structures), but also between natural language and chemical structures. This research track is not exclusive to our work, and we believe it can be a promising direction in the future exploration of foundation models for molecule discovery.

# Acknowledgement

This project is supported by the Natural Sciences and Engineering Research Council (NSERC) Discovery Grant, the Canada CIFAR AI Chair Program, collaboration grants between Microsoft Research and Mila, Samsung Electronics Co., Ltd., Amazon Faculty Research Award, Tencent AI Lab Rhino-Bird Gift Fund and two NRC Collaborative R&D Projects (AI4D-CORE-06, AI4D-CORE-08). This project was also partially funded by IVADO Fundamental Research Project grant PRF-2019-3583139727.

# References

Axelrod, S. and Gomez-Bombarelli, R. Geom, energy-annotated molecular conformations for property prediction and molecular generation. Scientific Data, 9(1):1-14, 2022.
Chen, X. and He, K. Exploring simple siamese representation learning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 15750-15758, 2021.
Chmiela, S., Tkatchenko, A., Sauceda, H. E., Poltavsky, I., Schütt, K. T., and Müller, K.-R. Machine learning of accurate energy-conserving molecular force fields. Science Advances, 3(5):e1603015, 2017.
Clayden, J., Greeves, N., and Warren, S. Organic chemistry. Oxford University Press, 2012.
Corso, G., Cavalleri, L., Beaini, D., Liò, P., and Veličković, P. Principal neighbourhood aggregation for graph nets. Advances in Neural Information Processing Systems, 33: 13260-13271, 2020.
Cotton, F. A.
Chemical applications of group theory. John Wiley & Sons, 1991. +Demirel, M. F., Liu, S., Garg, S., Shi, Z., and Liang, Y. Attentive walk-aggregating graph neural networks. Transactions on Machine Learning Research, 2021. +Dhariwal, P. and Nichol, A. Diffusion models beat gans on image synthesis. Advances in Neural Information Processing Systems, 34:8780-8794, 2021. +Du, W., Yang, T., Zhang, H., and Du, Y. A flexible diffusion model. arXiv preprint arXiv:2206.10365, 2022a. +Du, W., Zhang, H., Du, Y., Meng, Q., Chen, W., Zheng, N., Shao, B., and Liu, T.-Y. Se (3) equivariant graph neural networks with complete local frames. In International Conference on Machine Learning, pp. 5583-5608. PMLR, 2022b. + +Du, W., Du, Y., Wang, L., Feng, D., Wang, G., Ji, S., Gomes, C., and Ma, Z.-M. A new perspective on building efficient and expressive 3d equivariant graph neural networks. arXiv preprint arXiv:2304.04757, 2023. +Fang, X., Liu, L., Lei, J., He, D., Zhang, S., Zhou, J., Wang, F., Wu, H., and Wang, H. Chemrl-gem: Geometry enhanced molecular representation learning for property prediction. arXiv preprint arXiv:2106.06130, 2021. +Gilmer, J., Schoenholz, S. S., Riley, P. F., Vinyals, O., and Dahl, G. E. Neural message passing for quantum chemistry. In International conference on machine learning, pp. 1263-1272. PMLR, 2017. +Gómez-Bombarelli, R., Wei, J. N., Duvenaud, D., Hernández-Lobato, J. M., Sánchez-Lengeling, B., Sheberla, D., Aguilera-Iparraguirre, J., Hirzel, T. D., Adams, R. P., and Aspuru-Guzik, A. Automatic chemical design using a data-driven continuous representation of molecules. ACS central science, 4(2):268-276, 2018. +Grill, J.-B., Strub, F., Altché, F., Tallec, C., Richemond, P., Buchatskaya, E., Doersch, C., Avila Pires, B., Guo, Z., Gheshlaghi Azar, M., et al. Bootstrap your own latent-a new approach to self-supervised learning. Advances in neural information processing systems, 33:21271-21284, 2020. +Gutmann, M. and Hyvarinen, A. 
Noise-contrastive estimation: A new estimation principle for unnormalized statistical models. In Proceedings of the thirteenth international conference on artificial intelligence and statistics, pp. 297-304. JMLR Workshop and Conference Proceedings, 2010. +Halgren, T. A. Merck molecular force field. i. basis, form, scope, parameterization, and performance of mmff94. Journal of computational chemistry, 17(5-6):490-519, 1996. +He, K., Chen, X., Xie, S., Li, Y., Dollar, P., and Girshick, R. Masked autoencoders are scalable vision learners. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 16000-16009, 2022. +Hinton, G. E. Training products of experts by minimizing contrastive divergence. Neural computation, 14(8):1771-1800, 2002. +Hjelm, R. D., Fedorov, A., Lavoie-Marchildon, S., Grewal, K., Bachman, P., Trischler, A., and Bengio, Y. Learning deep representations by mutual information estimation and maximization. arXiv preprint arXiv:1808.06670, 2018. +Ho, J., Jain, A., and Abbeel, P. Denoising diffusion probabilistic models. Advances in Neural Information Processing Systems, 33:6840-6851, 2020. + +Hsu, C., Verkuil, R., Liu, J., Lin, Z., Hie, B., Sercu, T., Lerer, A., and Rives, A. Learning inverse folding from millions of predicted structures. *bioRxiv*, 2022. +Hu, W., Liu, B., Gomes, J., Zitnik, M., Liang, P., Pande, V., and Leskovec, J. Strategies for pre-training graph neural networks. arXiv preprint arXiv:1905.12265, 2019. +Hu, W., Fey, M., Zitnik, M., Dong, Y., Ren, H., Liu, B., Catasta, M., and Leskovec, J. Open graph benchmark: Datasets for machine learning on graphs. arXiv preprint arXiv:2005.00687, 2020a. +Hu, W., Fey, M., Zitnik, M., Dong, Y., Ren, H., Liu, B., Catasta, M., and Leskovec, J. Open graph benchmark: Datasets for machine learning on graphs. Advances in neural information processing systems, 33:22118-22133, 2020b. +Huang, C.-W., Lim, J. H., and Courville, A. C. 
A variational perspective on diffusion-based generative models and score matching. In Ranzato, M., Beygelzimer, A., Dauphin, Y., Liang, P., and Vaughan, J. W. (eds.), Advances in Neural Information Processing Systems, volume 34, pp. 22863-22876. Curran Associates, Inc., 2021. URL https://proceedings.neurips.cc/paper/2021/file/c11abfd29e4d9b4d4b566b01114d8486-Paper.pdf. +Hyvärinen, A. and Dayan, P. Estimation of non-normalized statistical models by score matching. Journal of Machine Learning Research, 6(4), 2005. +Irwin, J. J., Sterling, T., Mysinger, M. M., Bolstad, E. S., and Coleman, R. G. Zinc: a free tool to discover chemistry for biology. Journal of chemical information and modeling, 52(7):1757-1768, 2012. +Jiao, R., Han, J., Huang, W., Rong, Y., and Liu, Y. 3d equivariant molecular graph pretraining. arXiv preprint arXiv:2207.08824, 2022. +Jin, W., Barzilay, R., and Jaakkola, T. Hierarchical generation of molecular graphs using structural motifs. In International conference on machine learning, pp. 4839-4848. PMLR, 2020. +Jo, J., Lee, S., and Hwang, S. J. Score-based generative modeling of graphs via the system of stochastic differential equations. arXiv preprint arXiv:2202.02514, 2022. +Jumper, J., Evans, R., Pritzel, A., Green, T., Figurnov, M., Ronneberger, O., Tunyasuvunakool, K., Bates, R., Žídek, A., Potapenko, A., et al. Highly accurate protein structure prediction with alphafold. Nature, 596(7873):583-589, 2021. + +Karras, T., Aittala, M., Aila, T., and Laine, S. Elucidating the design space of diffusion-based generative models. arXiv preprint arXiv:2206.00364, 2022. +Liu, S., Demirel, M. F., and Liang, Y. N-gram graph: Simple unsupervised representation for graphs, with applications to molecules. Advances in neural information processing systems, 32, 2019. +Liu, S., Wang, H., Liu, W., Lasenby, J., Guo, H., and Tang, J. Pre-training molecular graph representation with 3d geometry. arXiv preprint arXiv:2110.07728, 2021a. 
+Liu, S., Nie, W., Wang, C., Lu, J., Qiao, Z., Liu, L., Tang, J., Xiao, C., and Anandkumar, A. Multi-modal molecule structure-text model for text-based retrieval and editing. arXiv preprint arXiv:2212.10789, 2022. +Liu, S., Guo, H., and Tang, J. Molecular geometry pretraining with SE(3)-invariant denoising distance matching. In The Eleventh International Conference on Learning Representations, 2023a. URL https://openreview.net/forum?id=CjTHVoldvR. +Liu, S., Zhu, Y., Lu, J., Xu, Z., Nie, W., Gitter, A., Xiao, C., Tang, J., Guo, H., and Anandkumar, A. A text-guided protein design framework. arXiv preprint arXiv:2302.04611, 2023b. +Liu, X., Zhang, F., Hou, Z., Mian, L., Wang, Z., Zhang, J., and Tang, J. Self-supervised learning: Generative or contrastive. IEEE Transactions on Knowledge and Data Engineering, 2021b. +Liu, Y., Pan, S., Jin, M., Zhou, C., Xia, F., and Yu, P. S. Graph self-supervised learning: A survey. arXiv preprint arXiv:2103.00111, 2021c. +Nakata, M. and Shimazaki, T. Pubchemqc project: a largescale first-principles electronic structure database for data-driven chemistry. Journal of chemical information and modeling, 57(6):1300-1308, 2017. +Oord, A. v. d., Li, Y., and Vinyals, O. Representation learning with contrastive predictive coding. arXiv preprint arXiv:1807.03748, 2018. +Ramakrishnan, R., Dral, P. O., Rupp, M., and Von Lilienfeld, O. A. Quantum chemistry structures and properties of 134 kilo molecules. Scientific data, 1(1):1-7, 2014. +Rampášek, L., Galkin, M., Dwivedi, V. P., Luu, A. T., Wolf, G., and Beaini, D. Recipe for a general, powerful, scalable graph transformer. arXiv preprint arXiv:2205.12454, 2022. +Schütt, K. T., Sauceda, H. E., Kindermans, P.-J., Tkatchenko, A., and Müller, K.-R. Schnet-a deep learning architecture for molecules and materials. The Journal of Chemical Physics, 148(24):241722, 2018. + +Shi, C., Luo, S., Xu, M., and Tang, J. Learning gradient fields for molecular conformation generation. 
In International Conference on Machine Learning, pp. 9558-9568. PMLR, 2021. +Shi, Y., Huang, Z., Wang, W., Zhong, H., Feng, S., and Sun, Y. Masked label prediction: Unified message passing model for semi-supervised classification. arXiv preprint arXiv:2009.03509, 2020. +Sohl-Dickstein, J., Weiss, E., Maheswaranathan, N., and Ganguli, S. Deep unsupervised learning using nonequilibrium thermodynamics. In International Conference on Machine Learning, pp. 2256-2265. PMLR, 2015. +Song, Y. and Ermon, S. Generative modeling by estimating gradients of the data distribution. Advances in Neural Information Processing Systems, 32, 2019. +Song, Y. and Kingma, D. P. How to train your energy-based models. arXiv preprint arXiv:2101.03288, 2021. +Song, Y., Sohl-Dickstein, J., Kingma, D. P., Kumar, A., Ermon, S., and Poole, B. Score-based generative modeling through stochastic differential equations. arXiv preprint arXiv:2011.13456, 2020. +Stärk, H., Beaini, D., Corso, G., Tossou, P., Dallago, C., Gunnemann, S., and Lio, P. 3d infomax improves gnns for molecular property prediction. In International Conference on Machine Learning, pp. 20479-20502. PMLR, 2022. +Sun, F.-Y., Hoffmann, J., Verma, V., and Tang, J. Infograph: Unsupervised and semi-supervised graph-level representation learning via mutual information maximization. In International Conference on Learning Representations, ICLR, 2020. +Sun, R., Dai, H., and Yu, A. W. Rethinking of graph pretraining on molecular representation. 2022. +Thomas, N., Smidt, T., Kearnes, S., Yang, L., Li, L., Kohlhoff, K., and Riley, P. Tensor field networks: Rotation-and translation-equivariant neural networks for 3d point clouds. arXiv preprint arXiv:1802.08219, 2018. +Veličković, P., Fedus, W., Hamilton, W. L., Lio, P., Bengio, Y., and Hjelm, R. D. Deep graph infomax. arXiv preprint arXiv:1809.10341, 2018. +Vincent, P. A connection between score matching and denoising autoencoders. Neural computation, 23(7):1661-1674, 2011. 
+Wang, H., Kaddour, J., Liu, S., Tang, J., Kusner, M., Lasenby, J., and Liu, Q. Evaluating self-supervised learning for molecular graph embeddings. arXiv preprint arXiv:2206.08005, 2022a. + +Wang, L., Zhou, Y., Wang, Y., Zheng, X., Huang, X., and Zhou, H. Regularized molecular conformation fields. In Oh, A. H., Agarwal, A., Belgrave, D., and Cho, K. (eds.), Advances in Neural Information Processing Systems, 2022b. URL https://openreview.net/forum?id=7XCFxnG8nGS. +Wang, Y., Wang, J., Cao, Z., and Farimani, A. B. Molecular contrastive learning of representations via graph neural networks. arXiv preprint arXiv:2102.10056, 2021. +Wang, Y., Wang, J., Cao, Z., and Barati Farimani, A. Molecular contrastive learning of representations via graph neural networks. Nature Machine Intelligence, 4(3):279-287, 2022c. +Wu, L., Lin, H., Gao, Z., Tan, C., Li, S., et al. Self-supervised on graphs: Contrastive, generative, or predictive. arXiv preprint arXiv:2105.07342, 2021. +Wu, Z., Ramsundar, B., Feinberg, E. N., Gomes, J., Geniesse, C., Pappu, A. S., Leswing, K., and Pande, V. Moleculenet: a benchmark for molecular machine learning. Chemical science, 9(2):513-530, 2018. +Xie, Y., Xu, Z., Zhang, J., Wang, Z., and Ji, S. Self-supervised learning of graph neural networks: A unified review. arXiv preprint arXiv:2102.10757, 2021. +Xu, K., Hu, W., Leskovec, J., and Jegelka, S. How powerful are graph neural networks? arXiv preprint arXiv:1810.00826, 2018. +Xu, Z., Luo, Y., Zhang, X., Xu, X., Xie, Y., Liu, M., Dickerson, K., Deng, C., Nakata, M., and Ji, S. Molecule3d: A benchmark for predicting 3d geometries from molecular graphs. arXiv preprint arXiv:2110.01717, 2021a. +Xu, Z., Luo, Y., Zhang, X., Xu, X., Xie, Y., Liu, M., Dickerson, K. A., Deng, C., Nakata, M., and Ji, S. Molecule3d: A benchmark for predicting 3d geometries from molecular graphs, 2021b. URL https://openreview.net/forum?id=m5rEiGxOGiL. +Yao, H., Yang, X., Pan, X., Liu, S., Koh, P. W., and Finn, C. 
Leveraging domain relations for domain generalization. arXiv preprint arXiv:2302.02609, 2023. +You, Y., Chen, T., Sui, Y., Chen, T., Wang, Z., and Shen, Y. Graph contrastive learning with augmentations. In Advances in Neural Information Processing Systems, NeurIPS, 2020a. +You, Y., Chen, T., Sui, Y., Chen, T., Wang, Z., and Shen, Y. Graph contrastive learning with augmentations. Advances in Neural Information Processing Systems, 33: 5812-5823, 2020b. + +Zaidi, S., Schaarschmidt, M., Martens, J., Kim, H., Teh, Y. W., Sanchez-Gonzalez, A., Battaglia, P., Pascanu, R., and Godwin, J. Pre-training via denoising for molecular property prediction. arXiv preprint arXiv:2206.00133, 2022. +Zhu, J., Xia, Y., Liu, C., Wu, L., Xie, S., Wang, T., Wang, Y., Zhou, W., Qin, T., Li, H., et al. Direct molecular conformation generation. arXiv preprint arXiv:2202.01356, 2022. + +# A. Comparison to Related Works + +In Table 6, we provide a comprehensive overview of existing works on single-modal and multi-modal pretraining methods. We categorize the pretraining methods into generative and contrastive learning methods + +Table 6. Comparison between MoleculeSDE and existing graph SSL methods. + +
| Pre-training | 2D Topology (Gen.) | 2D Topology (Con.) | 3D Conformation (Gen.) | 3D Conformation (Con.) | 2D & 3D (Gen.) | 2D & 3D (Con.) |
|---|---|---|---|---|---|---|
| AttrMask (Hu et al., 2019; Liu et al., 2019) | ✓ | - | - | - | - | - |
| InfoGraph (Veličković et al., 2018; Sun et al., 2020) | - | ✓ | - | - | - | - |
| ContextPred (Hu et al., 2019) | - | ✓ | - | - | - | - |
| GraphCL (You et al., 2020a) | - | ✓ | - | - | - | - |
| Atom Type Prediction (Liu et al., 2023a) | - | - | ✓ | - | - | - |
| Distance Prediction (Fang et al., 2021; Liu et al., 2023a) | - | - | ✓ | - | - | - |
| Angle Prediction (Fang et al., 2021; Liu et al., 2023a) | - | - | ✓ | - | - | - |
| 3D InfoGraph (Liu et al., 2023a) | - | - | - | ✓ | - | - |
| MI-RR (Liu et al., 2023a) | - | - | ✓ | - | - | - |
| MI-InfoNCE (Liu et al., 2023a) | - | - | - | ✓ | - | - |
| MI-EBM-NCE (Liu et al., 2023a) | - | - | - | ✓ | - | - |
| GeoSSL-1L (Zaidi et al., 2022) | - | - | ✓ | - | - | - |
| GeoSSL (Liu et al., 2023a) | - | - | ✓ | - | - | - |
| 3D InfoMax (Stärk et al., 2022) | - | - | - | - | - | ✓ |
| GraphMVP (Liu et al., 2021a) | - | - | - | - | ✓ | ✓ |
| GraphMVP-C (Liu et al., 2021a) | - | ✓ | - | - | ✓ | ✓ |
| GraphMVP-G (Liu et al., 2021a) | ✓ | - | - | - | ✓ | ✓ |
| MoleculeSDE (ours) | - | - | - | - | ✓ | ✓ |
# B. Group Symmetry and Local Frame

# B.1. SE(3)/E(3) Group Action and Representations

In this article, a 3D molecular graph is represented by a 3D point cloud. The corresponding symmetry group is $SE(3)$, which consists of translations and rotations. Recall that in the main text we define equivariant functions on $\mathbf{R}^3$ through group actions. Formally, the group $SE(3)$ is said to act on $\mathbf{R}^3$ if there is a mapping $\phi: SE(3) \times \mathbf{R}^3 \to \mathbf{R}^3$ satisfying the following two conditions:

1. if $e \in SE(3)$ is the identity element, then

$$
\phi(e, \boldsymbol{r}) = \boldsymbol{r} \quad \text{for all } \boldsymbol{r} \in \mathbf{R}^3;
$$

2. if $g_1, g_2 \in SE(3)$, then

$$
\phi(g_1, \phi(g_2, \boldsymbol{r})) = \phi(g_1 g_2, \boldsymbol{r}) \quad \text{for all } \boldsymbol{r} \in \mathbf{R}^3.
$$

Then there is a natural $SE(3)$ action on vectors $\boldsymbol{r}$ in $\mathbf{R}^3$, given by repeatedly translating and rotating $\boldsymbol{r}$. For $g \in SE(3)$ and $\boldsymbol{r} \in \mathbf{R}^3$, we denote this action by $g\boldsymbol{r}$. Once the notion of group action is defined, we say a function $f: \mathbf{R}^3 \to \mathbf{R}^3$ that transforms $\boldsymbol{r} \in \mathbf{R}^3$ is equivariant if:

$$
f(g\boldsymbol{r}) = g f(\boldsymbol{r}) \quad \text{for all } \boldsymbol{r} \in \mathbf{R}^3.
$$

On the other hand, $f: \mathbf{R}^3 \to \mathbf{R}^1$ is invariant if $f$ is independent of the group actions:

$$
f(g\boldsymbol{r}) = f(\boldsymbol{r}) \quad \text{for all } \boldsymbol{r} \in \mathbf{R}^3.
$$

In some scenarios, our problem is chirality sensitive. That is, after mirror-reflecting a 3D molecule, the properties of the molecule may change dramatically. In these cases, it is crucial to take reflection transformations into consideration.
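As a sanity check (not from the paper), the two group-action axioms above can be verified numerically with a minimal numpy sketch, representing an element of $SE(3)$ as a pair $(R, t)$ acting by $\boldsymbol{r} \mapsto R\boldsymbol{r} + t$:

```python
import numpy as np

rng = np.random.default_rng(0)

def random_rotation():
    # QR decomposition of a random matrix gives an orthogonal matrix;
    # flip one column if needed so det = +1 (a proper rotation).
    q, _ = np.linalg.qr(rng.normal(size=(3, 3)))
    if np.linalg.det(q) < 0:
        q[:, 0] = -q[:, 0]
    return q

def act(g, r):
    """Action phi(g, r) of g = (R, t) in SE(3) on a point r in R^3."""
    R, t = g
    return R @ r + t

r = rng.normal(size=3)
g1 = (random_rotation(), rng.normal(size=3))
g2 = (random_rotation(), rng.normal(size=3))

# Axiom 1 (identity): phi(e, r) = r.
e = (np.eye(3), np.zeros(3))
assert np.allclose(act(e, r), r)

# Axiom 2 (compatibility): phi(g1, phi(g2, r)) = phi(g1 g2, r),
# where the SE(3) product is (R1, t1)(R2, t2) = (R1 R2, R1 t2 + t1).
R1, t1 = g1
R2, t2 = g2
g12 = (R1 @ R2, R1 @ t2 + t1)
assert np.allclose(act(g1, act(g2, r)), act(g12, r))
```

The group product used here is the standard semidirect-product composition of rotations and translations.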
More precisely, we say an $SE(3)$-equivariant function $f$ is reflection-antisymmetric if:

$$
f(\rho\boldsymbol{r}) \neq f(\boldsymbol{r}) \tag{17}
$$

for some reflections $\rho \in E(3)$.

# B.2. Equivariant Frames

Frame is a popular terminology in the sciences. In physics, a frame is equivalent to a coordinate system. For example, we may assign a frame to every observer; although different observers collect different data under different frames, the underlying physical law should be the same. In other words, if we denote the physical law by $f$, then $f$ should be an equivariant function.

Since there are three orthogonal directions in $\mathbf{R}^3$, a frame in $\mathbf{R}^3$ consists of three orthogonal vectors:

$$
F = (\boldsymbol{e}_1, \boldsymbol{e}_2, \boldsymbol{e}_3).
$$

Once equipped with a frame (coordinate system), we can project all geometric quantities onto this frame. For example, an abstract vector $\boldsymbol{r} \in \mathbf{R}^3$ can be written as $\boldsymbol{r} = (r_1, r_2, r_3)$ under frame $F$ if $\boldsymbol{r} = r_1\boldsymbol{e}_1 + r_2\boldsymbol{e}_2 + r_3\boldsymbol{e}_3$. An equivariant frame further requires the three orthonormal vectors in $(\boldsymbol{e}_1, \boldsymbol{e}_2, \boldsymbol{e}_3)$ to be equivariant. Intuitively, an equivariant frame transforms according to the global rotation or translation of the whole system. Once equipped with an equivariant frame, we can project equivariant vectors onto this frame:

$$
\boldsymbol{r} = \tilde{r}_1\boldsymbol{e}_1 + \tilde{r}_2\boldsymbol{e}_2 + \tilde{r}_3\boldsymbol{e}_3. \tag{18}
$$

We call the map $\boldsymbol{r} \rightarrow \tilde{\boldsymbol{r}} \coloneqq (\tilde{r}_1, \tilde{r}_2, \tilde{r}_3)$ the projection operation. Since each $\tilde{r}_i = \boldsymbol{e}_i \cdot \boldsymbol{r}$ is an inner product between equivariant vectors, the components of $\tilde{\boldsymbol{r}}$ are scalars.

In this article, we assign an equivariant frame to each edge.
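To make the projection operation of Equation (18) concrete, here is a minimal numpy sketch (an illustration, not the paper's code). It builds one simple equivariant frame from two position vectors via cross products and checks that the projected coordinates are rotation-invariant scalars and that the original vector is recovered:

```python
import numpy as np

rng = np.random.default_rng(1)

def frame_from(xi, xj):
    # One simple equivariant construction from two position vectors:
    # normalized difference, normalized cross product, and their cross product.
    e1 = (xi - xj) / np.linalg.norm(xi - xj)
    c = np.cross(xi, xj)
    e2 = c / np.linalg.norm(c)
    e3 = np.cross(e1, e2)
    return np.stack([e1, e2, e3])  # rows are e1, e2, e3 (orthonormal)

def project(F, r):
    # Projection operation of Eq. (18): r -> (e1.r, e2.r, e3.r), three scalars.
    return F @ r

xi, xj, r = rng.normal(size=3), rng.normal(size=3), rng.normal(size=3)

# A random proper rotation Q (det = +1) via QR decomposition.
Q, _ = np.linalg.qr(rng.normal(size=(3, 3)))
if np.linalg.det(Q) < 0:
    Q[:, 0] = -Q[:, 0]

# The projected coordinates are invariant under a global rotation of everything.
s = project(frame_from(xi, xj), r)
s_rot = project(frame_from(Q @ xi, Q @ xj), Q @ r)
assert np.allclose(s, s_rot)

# Eq. (18) reconstruction: r = s1*e1 + s2*e2 + s3*e3.
F = frame_from(xi, xj)
assert np.allclose(F.T @ s, r)
```

The rotation check works because `cross(Q @ a, Q @ b) = Q @ cross(a, b)` for proper rotations, so every frame vector rotates with the system and the inner products are unchanged.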
Therefore, we call them local frames; they reflect the local geometry around each chemical bond, following (Du et al., 2022b; 2023). Consider node $i$ and one of its neighbors $j$, with positions $\boldsymbol{x}_i$ and $\boldsymbol{x}_j$, respectively. The orthonormal equivariant frame $\mathcal{F}_{ij} \coloneqq (\boldsymbol{e}_1^{ij}, \boldsymbol{e}_2^{ij}, \boldsymbol{e}_3^{ij})$ is defined with respect to $\boldsymbol{x}_i$ and $\boldsymbol{x}_j$ as follows:

$$
\left(\frac{\boldsymbol{x}_i - \boldsymbol{x}_j}{\|\boldsymbol{x}_i - \boldsymbol{x}_j\|}, \frac{\boldsymbol{x}_i \times \boldsymbol{x}_j}{\|\boldsymbol{x}_i \times \boldsymbol{x}_j\|}, \frac{\boldsymbol{x}_i - \boldsymbol{x}_j}{\|\boldsymbol{x}_i - \boldsymbol{x}_j\|} \times \frac{\boldsymbol{x}_i \times \boldsymbol{x}_j}{\|\boldsymbol{x}_i \times \boldsymbol{x}_j\|}\right). \tag{19}
$$

Note that this frame is translation invariant only when the system's center of mass is placed at the origin.

Reflection-AntiSymmetric Since we use the cross product $\times$ to build the local frames, the third vector in the frame is a pseudo-vector. The projection operation is therefore not invariant under reflections (the inner product between a vector and a pseudo-vector changes sign under reflection). Consequently, our model is able to discriminate two 3D geometries with different chirality.

Our local frames also enable us to output equivariant vectors by multiplying scalars $(v_1, v_2, v_3)$ with the frame: $\boldsymbol{v} = v_1 \cdot \boldsymbol{e}_1 + v_2 \cdot \boldsymbol{e}_2 + v_3 \cdot \boldsymbol{e}_3$. It is easy to check that $\boldsymbol{v}$ is an $SE(3)$-equivariant (reflection-antisymmetric) vector.

# C. Denoising Score Matching

# C.1. Energy-Based Model (EBM)

The energy-based model (EBM) is a powerful tool for modeling data distributions.
The formulation is:

$$
p_\theta(\boldsymbol{x}) = \frac{\exp(-E(\boldsymbol{x}))}{A} = \frac{\exp(f(\boldsymbol{x}))}{A}, \tag{20}
$$

where $f(\boldsymbol{x}) = -E(\boldsymbol{x})$ and the bottleneck is the intractable partition function $A = \int_{\boldsymbol{x}} \exp(-E(\boldsymbol{x})) d\boldsymbol{x}$. Recently, there has been significant progress (Song & Kingma, 2021) in handling this intractable term, including contrastive divergence (Hinton, 2002), score matching (Hyvärinen & Dayan, 2005; Song et al., 2020), and noise contrastive estimation (Gutmann & Hyvärinen, 2010).

# C.2. Score Matching

There exists a family of solutions called score matching (SM) for solving Equation (20). The core idea of SM is that, for the generative task, we do not need to estimate the density directly; we only need the score, i.e., the gradient of the log data distribution, $\nabla_{\boldsymbol{x}}\log p(\boldsymbol{x})$. A Markov chain Monte Carlo (MCMC) strategy can then be adopted for data generation.

Score The score is defined as the gradient of the log-likelihood w.r.t. the data $\boldsymbol{x}$:

$$
s_\theta(\boldsymbol{x}) = \nabla_{\boldsymbol{x}}\log p_\theta(\boldsymbol{x}) = -\nabla_{\boldsymbol{x}}E(\boldsymbol{x}) - \nabla_{\boldsymbol{x}}\log A = -\nabla_{\boldsymbol{x}}E(\boldsymbol{x}) = \nabla_{\boldsymbol{x}}f(\boldsymbol{x}). \tag{21}
$$

By taking the gradient w.r.t. the data, the partition-function term disappears, since it is a constant. SM thus transforms the density estimation problem into a score (gradient) matching problem: if the first-order gradient $s_\theta(\boldsymbol{x})$ matches the data score, then the learned EBM distribution captures the data distribution precisely.
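The key point of Equation (21), that the score does not depend on the partition function, can be illustrated with a small numpy sketch (an illustration, not from the paper). For a 1D Gaussian we differentiate only the unnormalized log-density $f(x) = -E(x)$ and recover the exact score:

```python
import numpy as np

# Unnormalized log-density f(x) = -E(x) for a 1D Gaussian N(mu, sigma^2);
# the constant -log A would vanish under differentiation anyway.
mu, sigma = 2.0, 1.5
f = lambda x: -0.5 * ((x - mu) / sigma) ** 2

def score_numeric(x, h=1e-5):
    # Central finite difference of f; exact here up to rounding,
    # since f is quadratic (third derivative is zero).
    return (f(x + h) - f(x - h)) / (2 * h)

# Analytic score of the normalized density: d/dx log p(x) = -(x - mu) / sigma^2.
score_analytic = lambda x: -(x - mu) / sigma ** 2

for x in np.linspace(-1.0, 5.0, 7):
    assert abs(score_numeric(x) - score_analytic(x)) < 1e-6
```

The same cancellation is what makes score matching attractive for EBMs: training never needs $A$.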
Explicit Score Matching (ESM) For training the model, the original SM (Hyvärinen & Dayan, 2005) applies the Fisher divergence to measure the discrepancy between the data distribution and the model distribution, termed explicit score matching (ESM):

$$
\begin{aligned}
D_F\left(p_{\text{data}}(\boldsymbol{x}) \| p_\theta(\boldsymbol{x})\right) &= \frac{1}{2}\mathbb{E}_{p_{\text{data}}}\|\nabla_{\boldsymbol{x}}\log p_\theta(\boldsymbol{x}) - \nabla_{\boldsymbol{x}}\log p_{\text{data}}(\boldsymbol{x})\|_2^2 \\
&= \frac{1}{2}\mathbb{E}_{p_{\text{data}}}\|s_\theta(\boldsymbol{x}) - \nabla_{\boldsymbol{x}}\log p_{\text{data}}(\boldsymbol{x})\|_2^2.
\end{aligned} \tag{22}
$$

The expectation w.r.t. $p_{\text{data}}(\boldsymbol{x})$ can be approximated using Monte Carlo sampling, yet the second term of Equation (22) is intractable to compute since it requires knowing $\nabla_{\boldsymbol{x}}\log p_{\text{data}}(\boldsymbol{x})$. There are multiple solutions to this: implicit score matching (ISM) (Hyvärinen & Dayan, 2005) rewrites Equation (22) using integration by parts; the other appealing solution is denoising score matching (DSM). Both are introduced below.

Implicit Score Matching (ISM) It is impractical to calculate Equation (22) directly due to the $\nabla_{\boldsymbol{x}}\log p_{\text{data}}(\boldsymbol{x})$ term.
Under certain conditions (Hyvärinen & Dayan, 2005), the Fisher divergence can be rewritten using integration by parts, turning ESM into implicit score matching (ISM):

$$
\begin{aligned}
\frac{1}{2}\mathbb{E}_{p_{\text{data}}}\left[\|\nabla_{\boldsymbol{x}}\log p_{\text{data}}(\boldsymbol{x}) - s_\theta(\boldsymbol{x})\|^2\right] &= \mathbb{E}_{p_{\text{data}}}\left[\frac{1}{2}\|\nabla_{\boldsymbol{x}}E_\theta(\boldsymbol{x})\|^2 - \operatorname{tr}\left(\nabla_{\boldsymbol{x}}^2 E_\theta(\boldsymbol{x})\right)\right] + C \\
&= \mathbb{E}_{p_{\text{data}}}\left[\frac{1}{2}\|s_\theta(\boldsymbol{x})\|^2 + \operatorname{tr}\left(\nabla_{\boldsymbol{x}} s_\theta(\boldsymbol{x})\right)\right] + C,
\end{aligned} \tag{23}
$$

where $C$ is a constant independent of $\theta$. The drawback of Equation (23) is that it requires computing the trace of the Hessian, which is computationally expensive: computing the full second derivatives is quadratic in the dimensionality of the data.

**Denoising Score Matching (DSM)** Along this line, denoising score matching (DSM) (Vincent, 2011) proposes an elegant solution by connecting SM with the denoising autoencoder. It first perturbs the data with a noise distribution $q_\sigma(\tilde{\boldsymbol{x}}|\boldsymbol{x})$, and the goal is to use SM to approximate $q_\sigma(\tilde{\boldsymbol{x}}) = \mathbb{E}_{p_{\text{data}}(\boldsymbol{x})}[q_\sigma(\tilde{\boldsymbol{x}}|\boldsymbol{x})]$ with a model distribution $p_\theta(\tilde{\boldsymbol{x}})$.
DSM (Vincent, 2011) then calculates the Fisher divergence between the perturbed data distribution and the perturbed model distribution, which leads to the following equation:

$$
\begin{aligned}
D_F\left(q_\sigma(\tilde{\boldsymbol{x}}) \| p_\theta(\tilde{\boldsymbol{x}})\right) &= \frac{1}{2}\mathbb{E}_{q_\sigma(\tilde{\boldsymbol{x}})}\left[\|\nabla_{\tilde{\boldsymbol{x}}}\log q_\sigma(\tilde{\boldsymbol{x}}) - \nabla_{\tilde{\boldsymbol{x}}}\log p_\theta(\tilde{\boldsymbol{x}})\|^2\right] \\
&= \frac{1}{2}\mathbb{E}_{q_\sigma(\boldsymbol{x},\tilde{\boldsymbol{x}})}\left[\|\nabla_{\tilde{\boldsymbol{x}}}\log q_\sigma(\tilde{\boldsymbol{x}}|\boldsymbol{x}) - s_\theta(\tilde{\boldsymbol{x}})\|^2\right] + C.
\end{aligned} \tag{24}
$$

The detailed derivation of Equation (24) can be found in Appendix C.3. This is an elegant solution because, under the Gaussian kernel $q_\sigma(\tilde{\boldsymbol{x}}|\boldsymbol{x}) = \mathcal{N}(\tilde{\boldsymbol{x}}|\boldsymbol{x}, \sigma^2 I)$, we have the analytical form $\nabla_{\tilde{\boldsymbol{x}}}\log q_\sigma(\tilde{\boldsymbol{x}}|\boldsymbol{x}) = \frac{1}{\sigma^2}(\boldsymbol{x} - \tilde{\boldsymbol{x}})$. This is essentially a direction moving from $\tilde{\boldsymbol{x}}$ back to $\boldsymbol{x}$, and DSM trains the score to match it.
Finally, the objective becomes:

$$
\begin{aligned}
\mathcal{L}_{\mathrm{DSM}} &= \frac{1}{2}\mathbb{E}_{q_\sigma(\boldsymbol{x},\tilde{\boldsymbol{x}})}\left[\left\|\frac{\boldsymbol{x} - \tilde{\boldsymbol{x}}}{\sigma^2} - s_\theta(\tilde{\boldsymbol{x}})\right\|^2\right] \\
&= \frac{1}{2}\mathbb{E}_{p_{\text{data}}(\boldsymbol{x})}\mathbb{E}_{q_\sigma(\tilde{\boldsymbol{x}}|\boldsymbol{x})}\left[\left\|\frac{\tilde{\boldsymbol{x}} - \boldsymbol{x}}{\sigma^2} + s_\theta(\tilde{\boldsymbol{x}})\right\|^2\right] \\
&\approx \frac{1}{2N}\sum_{i=1}^N\left[\left\|\frac{\tilde{\boldsymbol{x}}_i - \boldsymbol{x}_i}{\sigma^2} + s_\theta(\tilde{\boldsymbol{x}}_i)\right\|^2\right].
\end{aligned} \tag{25}
$$

Additionally, (Vincent, 2011) also proves that DSM is equivalent to ESM. Though it has certain drawbacks (Song & Kingma, 2021), DSM serves as a promising tool for making the SM family a practical solution to EBM training.

Noise Conditional Score Network (NCSN) Recently, (Song & Ermon, 2019) found that perturbing the data with random Gaussian noise makes score estimation better behaved, especially in low-density regions of the data distribution. It thus proposes the noise conditional score network (NCSN), which perturbs the data with various levels of noise and estimates the scores at all levels simultaneously. More concretely, NCSN chooses the Gaussian kernel as the noise distribution, i.e., $q_\sigma(\tilde{\boldsymbol{x}}|\boldsymbol{x}) = \mathcal{N}(\tilde{\boldsymbol{x}}|\boldsymbol{x}, \sigma^2 I)$.
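Before moving on, the single-noise-level DSM objective of Equation (25) can be sanity-checked on a toy 1D Gaussian problem (an illustration, not from the paper). With data $x \sim \mathcal{N}(0,1)$ and a linear score model $s_\theta(\tilde{x}) = \theta\tilde{x}$, minimizing the DSM loss in closed form should recover the true score of the perturbed marginal $\mathcal{N}(0, 1+\sigma^2)$, namely $\theta = -1/(1+\sigma^2)$:

```python
import numpy as np

rng = np.random.default_rng(2)
sigma = 0.5
n = 200_000

# Data x ~ N(0, 1); perturbed samples x_tilde = x + sigma * eps.
x = rng.normal(size=n)
x_tilde = x + sigma * rng.normal(size=n)

# DSM regression target (Eq. 25): (x - x_tilde) / sigma^2. Fit the linear
# score model s_theta(x_tilde) = theta * x_tilde by least squares.
target = (x - x_tilde) / sigma ** 2
theta = np.sum(x_tilde * target) / np.sum(x_tilde ** 2)

# The perturbed marginal is N(0, 1 + sigma^2), whose score is
# -x_tilde / (1 + sigma^2); DSM recovers exactly that coefficient.
assert abs(theta - (-1.0 / (1.0 + sigma ** 2))) < 0.03
```

This is the content of the DSM-equals-ESM equivalence: matching the conditional score $\nabla_{\tilde{x}}\log q_\sigma(\tilde{x}|x)$ on average matches the marginal score of $q_\sigma(\tilde{x})$.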
With $L$ levels of noise $\{\sigma_i\}_{i=1}^L$, it extends Equation (25) to the following objective:

$$
\ell(\theta; \sigma_i) = \frac{1}{2}\mathbb{E}_{p_{\text{data}}(\boldsymbol{x})}\mathbb{E}_{q_{\sigma_i}(\tilde{\boldsymbol{x}}|\boldsymbol{x})}\left[\left\|\frac{\tilde{\boldsymbol{x}} - \boldsymbol{x}}{\sigma_i^2} + s_\theta(\tilde{\boldsymbol{x}})\right\|^2\right], \quad \mathcal{L}_{\mathrm{NCSN}} = \frac{1}{L}\sum_{i=1}^L \lambda(\sigma_i)\,\ell(\theta; \sigma_i), \tag{26}
$$

where $\lambda(\sigma_i) > 0$ is a coefficient function of $\sigma_i$.

Sampling for SM For the SM family (including ESM, ISM, DSM and NCSN), once we have the score, we can sample data with an MCMC method called Langevin dynamics:

$$
\tilde{\boldsymbol{x}}_{t+1} = \tilde{\boldsymbol{x}}_t + \frac{\epsilon^2}{2}\nabla_{\boldsymbol{x}}\log p_\theta(\tilde{\boldsymbol{x}}_t) + \epsilon z_t = \tilde{\boldsymbol{x}}_t + \frac{\epsilon^2}{2}s_\theta(\tilde{\boldsymbol{x}}_t) + \epsilon z_t, \quad z_t \sim \mathcal{N}(0, I). \tag{27}
$$

Discussion Up to now, the SM family provides a distinctive solution to the generative task. It may seem likelihood-free, but recent work on diffusion models has shown that the two research lines lead to the same formulation (Song et al., 2020); the only difference is that the diffusion model starts from a variational approximation perspective.

# C.3. Proof of DSM

Proof of Equation (24).
First, expanding the ESM form, we have:

$$
\begin{aligned}
D_F(q(\tilde{\boldsymbol{x}}) \| p_\theta(\tilde{\boldsymbol{x}})) &= \frac{1}{2}\mathbb{E}_{q(\tilde{\boldsymbol{x}})}\|\nabla_{\tilde{\boldsymbol{x}}}\log p_\theta(\tilde{\boldsymbol{x}}) - \nabla_{\tilde{\boldsymbol{x}}}\log q(\tilde{\boldsymbol{x}})\|_2^2 \\
&= \frac{1}{2}\mathbb{E}_{q(\tilde{\boldsymbol{x}})}\left[\|\nabla_{\tilde{\boldsymbol{x}}}\log p_\theta(\tilde{\boldsymbol{x}})\|^2 - 2\left\langle\nabla_{\tilde{\boldsymbol{x}}}\log p_\theta(\tilde{\boldsymbol{x}}), \nabla_{\tilde{\boldsymbol{x}}}\log q(\tilde{\boldsymbol{x}})\right\rangle + \|\nabla_{\tilde{\boldsymbol{x}}}\log q(\tilde{\boldsymbol{x}})\|^2\right] \\
&= \frac{1}{2}\mathbb{E}_{q(\tilde{\boldsymbol{x}})}\left[\|s_\theta(\tilde{\boldsymbol{x}})\|^2 - 2\left\langle s_\theta(\tilde{\boldsymbol{x}}), \nabla_{\tilde{\boldsymbol{x}}}\log q(\tilde{\boldsymbol{x}})\right\rangle\right] + C_1,
\end{aligned} \tag{28}
$$

where $C_1 = \frac{1}{2}\mathbb{E}_{q(\tilde{\boldsymbol{x}})}\|\nabla_{\tilde{\boldsymbol{x}}}\log q(\tilde{\boldsymbol{x}})\|^2$ is a constant that does not depend on the model parameter $\theta$.

Next, we rewrite the cross term:

$$
\begin{aligned}
\mathbb{E}_{q(\tilde{\boldsymbol{x}})}\left[\left\langle\nabla_{\tilde{\boldsymbol{x}}}\log q(\tilde{\boldsymbol{x}}), s_\theta(\tilde{\boldsymbol{x}})\right\rangle\right] &= \int_{\tilde{\boldsymbol{x}}} q(\tilde{\boldsymbol{x}})\left\langle\nabla_{\tilde{\boldsymbol{x}}}\log q(\tilde{\boldsymbol{x}}), s_\theta(\tilde{\boldsymbol{x}})\right\rangle d\tilde{\boldsymbol{x}} \\
&= \int_{\tilde{\boldsymbol{x}}}\left\langle\nabla_{\tilde{\boldsymbol{x}}} q(\tilde{\boldsymbol{x}}), s_\theta(\tilde{\boldsymbol{x}})\right\rangle d\tilde{\boldsymbol{x}} \\
&= \int_{\tilde{\boldsymbol{x}}}\left\langle\nabla_{\tilde{\boldsymbol{x}}}\int_{\boldsymbol{x}} q(\boldsymbol{x})\,q(\tilde{\boldsymbol{x}}|\boldsymbol{x})\,d\boldsymbol{x}, s_\theta(\tilde{\boldsymbol{x}})\right\rangle d\tilde{\boldsymbol{x}} \\
&= \int_{\tilde{\boldsymbol{x}}}\int_{\boldsymbol{x}} q(\boldsymbol{x})\left\langle\nabla_{\tilde{\boldsymbol{x}}} q(\tilde{\boldsymbol{x}}|\boldsymbol{x}), s_\theta(\tilde{\boldsymbol{x}})\right\rangle d\boldsymbol{x}\,d\tilde{\boldsymbol{x}} \\
&= \int_{\tilde{\boldsymbol{x}}}\int_{\boldsymbol{x}} q(\boldsymbol{x})\,q(\tilde{\boldsymbol{x}}|\boldsymbol{x})\left\langle\nabla_{\tilde{\boldsymbol{x}}}\log q(\tilde{\boldsymbol{x}}|\boldsymbol{x}), s_\theta(\tilde{\boldsymbol{x}})\right\rangle d\boldsymbol{x}\,d\tilde{\boldsymbol{x}} \\
&= \mathbb{E}_{q(\boldsymbol{x},\tilde{\boldsymbol{x}})}\left[\left\langle\nabla_{\tilde{\boldsymbol{x}}}\log q(\tilde{\boldsymbol{x}}|\boldsymbol{x}), s_\theta(\tilde{\boldsymbol{x}})\right\rangle\right].
\end{aligned} \tag{29}
$$

Plugging this back into Equation (28), the ESM objective becomes:

$$
\begin{aligned}
D_F(q(\tilde{\boldsymbol{x}}) \| p_\theta(\tilde{\boldsymbol{x}})) &= \frac{1}{2}\mathbb{E}_{q(\tilde{\boldsymbol{x}})}\left[\|s_\theta(\tilde{\boldsymbol{x}})\|^2 - 2\left\langle s_\theta(\tilde{\boldsymbol{x}}), \nabla_{\tilde{\boldsymbol{x}}}\log q(\tilde{\boldsymbol{x}})\right\rangle\right] + C_1 \\
&= \frac{1}{2}\mathbb{E}_{q(\boldsymbol{x},\tilde{\boldsymbol{x}})}\left[\|s_\theta(\tilde{\boldsymbol{x}})\|^2\right] - \mathbb{E}_{q(\boldsymbol{x},\tilde{\boldsymbol{x}})}\left[\left\langle\nabla_{\tilde{\boldsymbol{x}}}\log q(\tilde{\boldsymbol{x}}|\boldsymbol{x}), s_\theta(\tilde{\boldsymbol{x}})\right\rangle\right] + C_1.
\end{aligned} \tag{30}
$$

Completing the square then gives the equivalent objective:

$$
\begin{aligned}
D_F(q(\tilde{\boldsymbol{x}}) \| p_\theta(\tilde{\boldsymbol{x}})) &= \frac{1}{2}\mathbb{E}_{q(\boldsymbol{x},\tilde{\boldsymbol{x}})}\left[\|s_\theta(\tilde{\boldsymbol{x}})\|^2\right] - \mathbb{E}_{q(\boldsymbol{x},\tilde{\boldsymbol{x}})}\left[\left\langle\nabla_{\tilde{\boldsymbol{x}}}\log q(\tilde{\boldsymbol{x}}|\boldsymbol{x}), s_\theta(\tilde{\boldsymbol{x}})\right\rangle\right] + C_2 + \Delta \\
&= \frac{1}{2}\mathbb{E}_{q(\boldsymbol{x},\tilde{\boldsymbol{x}})}\left[\|s_\theta(\tilde{\boldsymbol{x}}) - \nabla_{\tilde{\boldsymbol{x}}}\log q(\tilde{\boldsymbol{x}}|\boldsymbol{x})\|^2\right] + \Delta,
\end{aligned} \tag{31}
$$

where $C_2 = \frac{1}{2}\mathbb{E}_{q(\boldsymbol{x},\tilde{\boldsymbol{x}})}\|\nabla_{\tilde{\boldsymbol{x}}}\log q(\tilde{\boldsymbol{x}}|\boldsymbol{x})\|^2$ is the completed-square constant and $\Delta = C_1 - C_2$.

End of proof.

# D. Diffusion Model

Another generative modeling track is the denoising diffusion probabilistic model (DDPM) (Sohl-Dickstein et al., 2015; Ho et al., 2020). The diffusion model is composed of two processes: a forward process that adds noise to the data and a backward process that performs denoising to generate the true data. Below we give a brief summary of the Gaussian diffusion model introduced in (Ho et al., 2020).

# D.1. Pipeline of Denoising Diffusion Probabilistic Model

Forward process Given a data point from the real distribution $\boldsymbol{x}_0 \sim q(\boldsymbol{x})$, the forward diffusion process adds a small amount of Gaussian noise to the sample in $T$ steps, producing a sequence of noisy samples $\boldsymbol{x}_1, \dots, \boldsymbol{x}_T$.
The step sizes are controlled by a variance schedule $\{\beta_t \in (0, 1)\}_{t=1}^T$:

$$
q\left(\boldsymbol{x}_t \mid \boldsymbol{x}_{t-1}\right) = \mathcal{N}\left(\boldsymbol{x}_t; \sqrt{1 - \beta_t}\,\boldsymbol{x}_{t-1}, \beta_t I\right), \tag{32}
$$

$$
q\left(\boldsymbol{x}_{1:T} \mid \boldsymbol{x}_0\right) = \prod_{t=1}^T q\left(\boldsymbol{x}_t \mid \boldsymbol{x}_{t-1}\right). \tag{33}
$$

A nice property of the forward process is that we can sample $\boldsymbol{x}_t$ at any arbitrary timestep $t$ in closed form using the reparameterization trick. Letting $\alpha_t = 1 - \beta_t$ and $\bar{\alpha}_t = \prod_{i=1}^t \alpha_i$, we have:

$$
q\left(\boldsymbol{x}_t \mid \boldsymbol{x}_0\right) = \mathcal{N}\left(\boldsymbol{x}_t; \sqrt{\bar{\alpha}_t}\,\boldsymbol{x}_0, (1 - \bar{\alpha}_t) I\right). \tag{34}
$$

Then, using Bayes' theorem, $q(\boldsymbol{x}_{t-1}|\boldsymbol{x}_t, \boldsymbol{x}_0)$ can be written as a Gaussian:

$$
\begin{aligned}
q\left(\boldsymbol{x}_{t-1} \mid \boldsymbol{x}_t, \boldsymbol{x}_0\right) &= \mathcal{N}\left(\boldsymbol{x}_{t-1}; \tilde{\mu}_t\left(\boldsymbol{x}_t, \boldsymbol{x}_0\right), \tilde{\beta}_t I\right) \\
&= \mathcal{N}\left(\boldsymbol{x}_{t-1}; \frac{1}{\sqrt{\alpha_t}}\left(\boldsymbol{x}_t - \frac{\beta_t}{\sqrt{1 - \bar{\alpha}_t}} z_t\right), \frac{1 - \bar{\alpha}_{t-1}}{1 - \bar{\alpha}_t}\beta_t I\right).
\end{aligned} \tag{35}
$$

Reverse process Under a reasonable setting for $\beta_t$ and $T$ (Dhariwal & Nichol, 2021), the distribution $q(\boldsymbol{x}_T)$ is nearly an isotropic Gaussian, and sampling $\boldsymbol{x}_T$ is trivial. Then, for the reverse process, we need $q(\boldsymbol{x}_{t-1}|\boldsymbol{x}_t)$.
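The closed-form marginal of Equation (34) can be verified numerically (an illustration, not from the paper; the linear `betas` schedule is an arbitrary toy choice): running the step-by-step forward chain of Equation (32) and sampling directly from Equation (34) should give the same distribution.

```python
import numpy as np

rng = np.random.default_rng(3)
T = 100
betas = np.linspace(1e-4, 0.2, T)  # toy variance schedule (assumption)
alphas = 1.0 - betas
alpha_bar = np.cumprod(alphas)

n = 100_000
x0 = rng.normal(loc=3.0, scale=0.5, size=n)  # toy data distribution

# Sequential forward process (Eq. 32):
# x_t = sqrt(1 - beta_t) * x_{t-1} + sqrt(beta_t) * eps.
x = x0.copy()
for t in range(T):
    x = np.sqrt(1.0 - betas[t]) * x + np.sqrt(betas[t]) * rng.normal(size=n)

# Closed form (Eq. 34): x_T ~ N(sqrt(abar_T) * x0, (1 - abar_T) I).
x_closed = np.sqrt(alpha_bar[-1]) * x0 \
    + np.sqrt(1.0 - alpha_bar[-1]) * rng.normal(size=n)

# The two constructions agree in distribution (compare first two moments).
assert abs(x.mean() - x_closed.mean()) < 0.05
assert abs(x.std() - x_closed.std()) < 0.05
```

With this schedule $\bar{\alpha}_T$ is tiny, so both samples are close to a standard Gaussian, which is exactly the "nearly isotropic Gaussian" property used by the reverse process.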
(Sohl-Dickstein et al., 2015) shows that as $T \to \infty$ and $\beta_t \to 0$, $q(\boldsymbol{x}_{t-1}|\boldsymbol{x}_t)$ approaches a diagonal Gaussian distribution. To this end, it is sufficient to train a neural network to predict a mean $\mu_\theta$ and a diagonal covariance matrix $\Sigma_\theta$:

$$
p_\theta\left(\boldsymbol{x}_{t-1} \mid \boldsymbol{x}_t\right) = \mathcal{N}\left(\boldsymbol{x}_{t-1}; \mu_\theta\left(\boldsymbol{x}_t, t\right), \Sigma_\theta\left(\boldsymbol{x}_t, t\right)\right). \tag{36}
$$

Parameterization and variational lower bound The hidden variables are $\boldsymbol{x}_{1:T}$, and inference means inferring these latent variables, i.e., $p(\boldsymbol{x}_{1:T}|\boldsymbol{x}_0)$. Variational inference uses $p(\boldsymbol{x}_{1:T}|\boldsymbol{x}_0)$ to estimate the true posterior $q(\boldsymbol{x}_{1:T}|\boldsymbol{x}_0)$. If we treat $\boldsymbol{x}_{1:T}$ as $z$ and $\boldsymbol{x}_0$ as $\boldsymbol{x}$, then this resembles the VAE. Recall that

$$
\begin{array}{l} KL(q(z|x) \| p(z|x)) = \mathbb{E}_{q(z|x)}\left[\log\dfrac{q(z|x)}{p(z|x)}\right] \\ = \mathbb{E}_{q(z|x)}\left[\log\dfrac{q(z|x)\,p(x)}{p(x,z)}\right] \tag{37} \\ = \log p(x) + \mathbb{E}_{q(z|x)}\left[\log\dfrac{q(z|x)}{p(x,z)}\right].
\\ \end{array} +$$ + +To adapt this to the diffusion model setting, we can have: + +$$ +K L \left(q \left(x _ {1: T} \mid \boldsymbol {x} _ {0}\right) | | p \left(x _ {1: T} \mid \boldsymbol {x} _ {0}\right)\right) = \log p \left(\boldsymbol {x} _ {0}\right) + \mathbb {E} _ {q \left(x _ {1: T} \mid \boldsymbol {x} _ {0}\right)} \left[ \log \frac {q \left(x _ {1 : T} \mid \boldsymbol {x} _ {0}\right)}{p \left(x _ {0 : T}\right)} \right], \tag {38} +$$ + +and our goal becomes to maximize the variational lower bound (VLB): + +$$ +\begin{array}{l} \mathcal {L} _ {V L B} = \mathbb {E} _ {q} \left[ \log \frac {q \left(x _ {1 : T} \mid \boldsymbol {x} _ {0}\right)}{p _ {\theta} \left(x _ {0 : T}\right)} \right] \\ = \mathbb {E} _ {q} \left[ \log \frac {\prod_ {t = 1} ^ {T} q \left(\boldsymbol {x} _ {T} \mid \boldsymbol {x} _ {t - 1}\right)}{p _ {\theta} \left(\boldsymbol {x} _ {T}\right) \prod_ {t = 1} ^ {T} p _ {\theta} \left(\boldsymbol {x} _ {t - 1} \mid \boldsymbol {x} _ {T}\right)} \right] \\ = \mathbb {E} _ {q} \Big [ - \log p _ {\theta} (\boldsymbol {x} _ {T}) + \sum_ {t = 2} ^ {T} \log \frac {q (\boldsymbol {x} _ {T} | \boldsymbol {x} _ {t - 1})}{p _ {\theta} (\boldsymbol {x} _ {t - 1} | \boldsymbol {x} _ {T})} + \log \frac {q (\boldsymbol {x} _ {1} | \boldsymbol {x} _ {0})}{p _ {\theta} (\boldsymbol {x} _ {0} | \boldsymbol {x} _ {1})} \Big ] \\ = \mathbb {E} _ {q} \left[ - \log p _ {\theta} (\boldsymbol {x} _ {T}) + \sum_ {t = 2} ^ {T} \log \frac {q (\boldsymbol {x} _ {T} | \boldsymbol {x} _ {t - 1} , \boldsymbol {x} _ {0})}{p _ {\theta} (\boldsymbol {x} _ {t - 1} | \boldsymbol {x} _ {T})} + \log \frac {q (\boldsymbol {x} _ {1} | \boldsymbol {x} _ {0})}{p _ {\theta} (\boldsymbol {x} _ {0} | \boldsymbol {x} _ {1})} \right] \\ = \mathbb {E} _ {q} \left[ - \log p _ {\theta} (\boldsymbol {x} _ {T}) + \sum_ {t = 2} ^ {T} \log \frac {q \left(\boldsymbol {x} _ {t - 1} \mid \boldsymbol {x} _ {T} , \boldsymbol {x} _ {0}\right) \cdot q \left(\boldsymbol {x} _ {T} \mid 
\boldsymbol {x} _ {0}\right)}{p _ {\theta} \left(\boldsymbol {x} _ {t - 1} \mid \boldsymbol {x} _ {T}\right) \cdot q \left(\boldsymbol {x} _ {t - 1} \mid \boldsymbol {x} _ {0}\right)} + \log \frac {q \left(\boldsymbol {x} _ {1} \mid \boldsymbol {x} _ {0}\right)}{p _ {\theta} \left(\boldsymbol {x} _ {0} \mid \boldsymbol {x} _ {1}\right)} \right] \quad \text {B a y e ’ s r u l e} \\ = \mathbb {E} _ {q} \left[ - \log p _ {\theta} (\boldsymbol {x} _ {T}) + \sum_ {t = 2} ^ {T} \log \frac {q \left(\boldsymbol {x} _ {t - 1} \mid \boldsymbol {x} _ {T} , \boldsymbol {x} _ {0}\right)}{p _ {\theta} \left(\boldsymbol {x} _ {t - 1} \mid \boldsymbol {x} _ {T}\right)} + \sum_ {t = 2} ^ {T} \log \frac {q \left(\boldsymbol {x} _ {T} \mid \boldsymbol {x} _ {0}\right)}{q \left(\boldsymbol {x} _ {t - 1} \mid \boldsymbol {x} _ {0}\right)} + \log \frac {q \left(\boldsymbol {x} _ {1} \mid \boldsymbol {x} _ {0}\right)}{p _ {\theta} \left(\boldsymbol {x} _ {0} \mid \boldsymbol {x} _ {1}\right)} \right] \tag {39} \\ = \mathbb {E} _ {q} \left[ - \log p _ {\theta} (\boldsymbol {x} _ {T}) + \sum_ {t = 2} ^ {T} \log \frac {q (\boldsymbol {x} _ {t - 1} | \boldsymbol {x} _ {T} , \boldsymbol {x} _ {0})}{p _ {\theta} (\boldsymbol {x} _ {t - 1} | \boldsymbol {x} _ {0})} + \log \frac {q (\boldsymbol {x} _ {T} | \boldsymbol {x} _ {0})}{p _ {\theta} (\boldsymbol {x} _ {0} | \boldsymbol {x} _ {1})} \right] \\ = \mathbb {E} _ {q} \left[ \log \frac {q \left(\boldsymbol {x} _ {T} \mid \boldsymbol {x} _ {0}\right)}{p _ {\theta} (\boldsymbol {x} _ {T})} + \sum_ {t = 2} ^ {T} \log \frac {q \left(\boldsymbol {x} _ {t - 1} \mid \boldsymbol {x} _ {T} , \boldsymbol {x} _ {0}\right)}{q \left(\boldsymbol {x} _ {t - 1} \mid \boldsymbol {x} _ {0}\right)} - \log p _ {\theta} \left(\boldsymbol {x} _ {0} \mid \boldsymbol {x} _ {1}\right) \right] \\ = K L [ q (\boldsymbol {x} _ {T} | \boldsymbol {x} _ {0}) | | p _ {\theta} (\boldsymbol {x} _ {T}) ] + \sum_ {t = 2} ^ {T} K L [ q (\boldsymbol {x} _ {t - 1} | \boldsymbol 
{x} _ {T}, \boldsymbol {x} _ {0}) | | p _ {\theta} (\boldsymbol {x} _ {t - 1} | \boldsymbol {x} _ {T}) ] - \mathbb {E} _ {q} [ \log p _ {\theta} (\boldsymbol {x} _ {0} | \boldsymbol {x} _ {1}) ] \\ = \underbrace {K L [ q (\boldsymbol {x} _ {T} | \boldsymbol {x} _ {0}) | | p _ {\theta} (\boldsymbol {x} _ {T}) ]} _ {\mathcal {L} _ {T}} + \sum_ {t = 2} ^ {T} \underbrace {K L [ q (\boldsymbol {x} _ {t - 1} | \boldsymbol {x} _ {T} , \boldsymbol {x} _ {0}) | | p _ {\theta} (\boldsymbol {x} _ {t - 1} | \boldsymbol {x} _ {T}) ]} _ {\mathcal {L} _ {t - 1}} - \underbrace {\mathbb {E} _ {q} [ \log p _ {\theta} (\boldsymbol {x} _ {0} | \boldsymbol {x} _ {1}) ]} _ {\mathcal {L} _ {0}} \\ \end{array} +$$ + +Thus, we want to model $q(\pmb{x}_{t-1} | \pmb{x}_T, \pmb{x}_0)$ with parameterization $p_{\theta}(\pmb{x}_{t-1} | \pmb{x}_T)$ . According to Equation (35), we can have: + +$$ +\begin{array}{l} \mathcal {L} _ {t} = \mathbb {E} _ {q} \left[ \log \frac {q \left(\boldsymbol {x} _ {t - 1} \mid \boldsymbol {x} _ {T} , \boldsymbol {x} _ {0}\right)}{p _ {\theta} \left(\boldsymbol {x} _ {t - 1} \mid \boldsymbol {x} _ {T}\right)} \right] \\ = \mathbb {E} _ {\boldsymbol {x} _ {0}, z} \left[ - \frac {1}{2 \| \Sigma_ {\theta} \| ^ {2}} \left\| \tilde {\mu} _ {t} \left(\boldsymbol {x} _ {T}, \boldsymbol {x} _ {0}\right) - \mu_ {\theta} \left(\boldsymbol {x} _ {T}, t\right) \right\| ^ {2} \right] \\ = \mathbb {E} _ {\boldsymbol {x} _ {0}, z} \left[ - \frac {1}{2 \| \Sigma_ {\theta} \| ^ {2}} \cdot \frac {1}{\alpha_ {t}} \cdot \left\| \boldsymbol {x} _ {T} - \frac {\beta_ {t}}{\sqrt {1 - \bar {\alpha} _ {t}}} z _ {t} - \boldsymbol {x} _ {T} + \frac {\beta_ {t}}{\sqrt {1 - \bar {\alpha} _ {t}}} z _ {\theta} (\boldsymbol {x} _ {T}, t) \right\| ^ {2} \right] \tag {40} \\ = \mathbb {E} _ {\boldsymbol {x} _ {0}, z} \left[ - \frac {\beta_ {t} ^ {2}}{2 \alpha_ {t} (1 - \bar {\alpha} _ {t}) \| \Sigma_ {\theta} \| ^ {2}} \cdot \| z _ {t} - z _ {\theta} (\boldsymbol {x} _ {T}, t) \| ^ {2} \right] \\ 
= \mathbb {E} _ {\boldsymbol {x} _ {0}, z} \Big [ - \frac {\beta_ {t} ^ {2}}{2 \alpha_ {t} (1 - \bar {\alpha} _ {t}) \| \Sigma_ {\theta} \| ^ {2}} \cdot \| z _ {t} - z _ {\theta} (\sqrt {\bar {\alpha} _ {t}} \boldsymbol {x} _ {0} + \sqrt {1 - \bar {\alpha} _ {t}} z _ {t}, t) \| ^ {2} \Big ], \\ \end{array} +$$ + +Simplification There is a nice strategy proposed in (Ho et al., 2020): the objective function in Equation (40) can be simplified by ignoring the weighting term: + +$$ +\mathcal {L} _ {t} ^ {\text {s i m p l e}} = \mathbb {E} _ {\boldsymbol {x} _ {0}, z} \left[ \| z _ {t} - z _ {\theta} \left(\sqrt {\bar {\alpha} _ {t}} \boldsymbol {x} _ {0} + \sqrt {1 - \bar {\alpha} _ {t}} z _ {t}, t\right) \| ^ {2} \right] \tag {41} +$$ + +# D.2. Important Tricks + +The DDPM (Ho et al., 2020) also adopts the following tricks in the training and inference. + +- $\Sigma_{\theta}(\pmb{x}_T,t) = \sigma_t^2 I$ . Then DDPM empirically tests $\sigma_t^2 = \beta_t$ and $\sigma_t^2 = \tilde{\beta}_t = \frac{1 - \hat{\alpha}_{t - 1}}{1 - \hat{\alpha}_t}\beta_t$ . Both have similar results. And these two are the two extreme choices corresponding to the upper and lower bounds on reverse process entropy with coordinatewise unit variance (Sohl-Dickstein et al., 2015). +- The second trick is that we model $p_{\theta}(\pmb{x}_{t - 1}|\pmb{x}_T) = \mathcal{N}(\pmb{x}_{t - 1};\mu_\theta (\pmb{x}_T,t);\sigma_t^2 I)$ . Then the loss term becomes: + +$$ +L _ {t - 1} = \mathbb {E} _ {q} \left[ \frac {1}{2 \sigma_ {t} ^ {2}} \| \tilde {\mu} \left(\boldsymbol {x} _ {T}, \boldsymbol {x} _ {0}\right) - \mu_ {\theta} \left(\boldsymbol {x} _ {T}, t\right) \| ^ {2} \right] \tag {42} +$$ + +So one straightforward way is to directly model the mean, i.e., to match $\mu_{\theta}$ and $\tilde{\mu}$ . + +- Meanwhile, during the diffusion process, we can have $\mathbf{x}_0$ and $\mathbf{x}_T$ . 
Thus, we can write $\tilde{\mu}(\mathbf{x}_t, \mathbf{x}_0)$ as a function of $(\mathbf{x}_t, \epsilon)$ or $(\mathbf{x}_0, \epsilon)$ . Specifically, we can write it as:

$$
L _ {t - 1} = \mathbb {E} _ {q} \left[ \frac {1}{2 \sigma_ {t} ^ {2}} \| \frac {1}{\sqrt {\alpha_ {t}}} \left(\boldsymbol {x} _ {t} - \frac {\beta_ {t}}{\sqrt {1 - \bar {\alpha} _ {t}}} \epsilon\right) - \mu_ {\theta} \left(\boldsymbol {x} _ {t}, t\right) \| ^ {2} \right] \tag {43}
$$

Since $\pmb{x}_t$ can be obtained from $\pmb{x}_0$ , we may as well model $\mu_{\theta}(\pmb{x}_t,t) = \tilde{\mu}_t(\pmb{x}_t,\pmb{x}_0(\pmb{x}_t,\epsilon_t)) = \frac{1}{\sqrt{\alpha_t}} (\pmb{x}_t - \frac{\beta_t}{\sqrt{1 - \bar{\alpha}_t}}\epsilon_\theta (\pmb{x}_t,t))$ , and the objective function becomes:

$$
\begin{array}{l} L _ {t - 1} = \mathbb {E} _ {q} \left[ \frac {1}{2 \sigma_ {t} ^ {2}} \| \frac {1}{\sqrt {\alpha_ {t}}} \left(\boldsymbol {x} _ {t} - \frac {\beta_ {t}}{\sqrt {1 - \bar {\alpha} _ {t}}} \epsilon\right) - \frac {1}{\sqrt {\alpha_ {t}}} \left(\boldsymbol {x} _ {t} - \frac {\beta_ {t}}{\sqrt {1 - \bar {\alpha} _ {t}}} \epsilon_ {\theta} (\boldsymbol {x} _ {t}, t)\right) \| ^ {2} \right] \tag {44} \\ = \mathbb {E} _ {q} [ \frac {\beta_ {t} ^ {2}}{2 \sigma_ {t} ^ {2} \alpha_ {t} (1 - \bar {\alpha} _ {t})} \| \epsilon - \epsilon_ {\theta} (\pmb {x} _ {t}, t) \| ^ {2} ] \\ \end{array}
$$

- Thus, during sampling, the mean is

$$
\mu_ {\theta} \left(\boldsymbol {x} _ {t}, t\right) = \frac {1}{\sqrt {\alpha_ {t}}} \left(\boldsymbol {x} _ {t} - \frac {\beta_ {t}}{\sqrt {1 - \bar {\alpha} _ {t}}} \epsilon_ {\theta} \left(\boldsymbol {x} _ {t}, t\right)\right), \tag {45}
$$

thus the sampling is obtained by:

$$
\boldsymbol {x} _ {t - 1} = \frac {1}{\sqrt {\alpha_ {t}}} \left(\boldsymbol {x} _ {t} - \frac {\beta_ {t}}{\sqrt {1 - \bar {\alpha} _ {t}}} \epsilon_ {\theta} \left(\boldsymbol {x} _ {t}, t\right)\right) + \sqrt {\beta_ {t}} \epsilon \quad / / \text {DDPM's paper} \tag {46}
$$

- Further, if we want to model the score, i.e., $\nabla_{\tilde{x}}\log q(\tilde{x} |x)$ , then the score network defined here needs to have a shift:

$$
\begin{array}{l} \nabla_ {\tilde {x}} \log q (\tilde {x} | x) = \nabla_ {\boldsymbol {x} _ {t}} \log q (\boldsymbol {x} _ {t} | \boldsymbol {x} _ {0}) \\ = \nabla_ {\boldsymbol {x} _ {t}} \log \mathcal {N} (\boldsymbol {x} _ {t}; \sqrt {\bar {\alpha} _ {t}} \boldsymbol {x} _ {0}, (1 - \bar {\alpha} _ {t}) I) \\ = - \frac {\boldsymbol {x} _ {t} - \sqrt {\bar {\alpha} _ {t}} \boldsymbol {x} _ {0}}{1 - \bar {\alpha} _ {t}} = - \frac {\epsilon}{\sqrt {1 - \bar {\alpha} _ {t}}}, \tag {47} \\ \end{array}
$$

so that $s_\theta(\pmb{x}_t, t) = -\epsilon_\theta(\pmb{x}_t, t) / \sqrt{1 - \bar{\alpha}_t}$ , and the sampling becomes $\pmb{x}_{t-1} = \frac{1}{\sqrt{\alpha_t}} (\pmb{x}_t + \beta_t s_\theta(\pmb{x}_t, t)) + \sqrt{\beta_t} \epsilon$ .

# E. Stochastic Differential Equation

A more recent work (Song et al., 2020) unifies score matching and DDPM into one framework, the stochastic differential equation (SDE). First let's do a quick recap of NCSN and DDPM.

# E.1. Review of NCSN and DDPM

NCSN The objective function is:

$$
\mathcal {L} = \sum_ {t = 1} ^ {T} \sigma_ {t} ^ {2} \cdot \mathbb {E} _ {p _ {\text {data}} (x)} \mathbb {E} _ {q _ {\sigma_ {t}} (\tilde {x} | x)} \left[ \left\| s _ {\theta} \left(\tilde {x}, \sigma_ {t}\right) - \nabla_ {\tilde {x}} \log q _ {\sigma_ {t}} (\tilde {x} | x) \right\| ^ {2} \right]. \tag {48}
$$

There is no notion of forward and backward, and the sampling is achieved by Langevin dynamics:

$$
\boldsymbol {x} _ {t} ^ {m} = \boldsymbol {x} _ {t} ^ {m - 1} + \delta s _ {\theta} \left(\boldsymbol {x} _ {t} ^ {m - 1}, \sigma_ {t}\right) + \sqrt {2 \delta} \epsilon , \quad m = 1, \dots , M, \tag {49}
$$

where $\delta$ is the step size and $\epsilon \sim \mathcal{N}(0, I)$ . The above is repeated with:

From $t = T$ to $t = 1$
- $\pmb{x}_t^0 \sim \mathcal{N}(0, \sigma_T^2 I)$ for $t = T$ .
- $\pmb{x}_t^0 = \pmb{x}_{t + 1}^M$ when $t < T$ .

DDPM The forward (non-modeling) process is:

$$
p \left(\boldsymbol {x} _ {t} \mid \boldsymbol {x} _ {t - 1}\right) = \mathcal {N} \left(\boldsymbol {x} _ {t}; \sqrt {\alpha_ {t}} \boldsymbol {x} _ {t - 1}, (1 - \alpha_ {t}) I\right). \tag {50}
$$

The backward (modeling part) is:

$$
p _ {\theta} \left(\boldsymbol {x} _ {t - 1} \mid \boldsymbol {x} _ {t}\right) = \mathcal {N} \left(\boldsymbol {x} _ {t - 1}; \frac {1}{\sqrt {\alpha_ {t}}} \left(\boldsymbol {x} _ {t} + \beta_ {t} s _ {\theta} \left(\boldsymbol {x} _ {t}, t\right)\right), \beta_ {t} I\right) \tag {51}
$$

The objective function is:

$$
\mathcal {L} = \sum_ {t = 1} ^ {T} \mathbb {E} _ {p _ {\text {data}} (\boldsymbol {x} _ {0})} \mathbb {E} _ {p _ {\alpha_ {t}} (\boldsymbol {x} _ {t} | \boldsymbol {x} _ {0})} \left[ \left\| s _ {\theta} (\boldsymbol {x} _ {t}, t) - \nabla_ {\boldsymbol {x} _ {t}} \log q (\boldsymbol {x} _ {t} | \boldsymbol {x} _ {0}) \right\| ^ {2} \right] \tag {52}
$$

The sampling is as follows:

$$
\boldsymbol {x} _ {t - 1} = \frac {1}{\sqrt {\alpha_ {t}}} \left(\boldsymbol {x} _ {t} + \beta_ {t} s _ {\theta} \left(\boldsymbol {x} _ {t}, t\right)\right) + \sqrt {\beta_ {t}} \epsilon_ {t}, \quad t = T, \dots , 1, \tag {53}
$$

where $\pmb{x}_T \sim \mathcal{N}(0, I)$ and $\epsilon_t \sim \mathcal{N}(0, I)$ . This is called ancestral sampling since it amounts to performing ancestral sampling from the graphical model (Song et al., 2020).

Comparison of NCSN and DDPM Note that DDPM models $KL(q(\pmb{x}_{t-1}|\pmb{x}_t,\pmb{x}_0)||p_\theta(\pmb{x}_{t-1}|\pmb{x}_t))$ , while NCSN directly matches $s_\theta(\pmb{x}_t,t)$ to $\nabla_{\pmb{x}_t}\log q(\pmb{x}_t|\pmb{x}_0)$ . Essentially, these two are equivalent, because:

$$
- \sqrt {1 - \bar {\alpha} _ {t}} s _ {\theta} \left(\boldsymbol {x} _ {t}, t\right) = \epsilon_ {\theta} \left(\boldsymbol {x} _ {t}, t\right). \tag {54}
$$

# E.2.
Stochastic Differential Equation

We now introduce how NCSN and DDPM arise as solutions to a stochastic differential equation (SDE). The SDE framework is also formulated with forward and backward processes.

# Forward process is

$$
d \boldsymbol {x} = f (\boldsymbol {x}, t) d t + g (t) d w, \tag {55}
$$

where $f(\pmb{x}, t)$ is the vector-valued drift coefficient, $g(t)$ is the diffusion coefficient, and $w$ is the Wiener process.

# Backward process is:

$$
d \boldsymbol {x} = \left[ f (\boldsymbol {x}, t) - g (t) ^ {2} \nabla_ {\boldsymbol {x}} \log p _ {t} (\boldsymbol {x}) \right] d t + g (t) d w \tag {56}
$$

Then the question is how to estimate the score $\nabla_{\pmb{x}}\log p_t(\pmb {x})$ .

# E.3. Stochastic Differential Equation and Score Matching

According to (Song et al., 2020), the objective of solutions to the SDE can be written in the form of score matching:

$$
\mathcal {L} = \mathbb {E} _ {t} \mathbb {E} _ {\boldsymbol {x} _ {0}} \mathbb {E} _ {\boldsymbol {x} _ {t} | \boldsymbol {x} _ {0}} \left[ \lambda (t) \left\| s _ {\theta} (\boldsymbol {x} _ {t}, t) - \nabla_ {\boldsymbol {x} _ {t}} \log p (\boldsymbol {x} _ {t} | \boldsymbol {x} _ {0}) \right\| ^ {2} \right], \tag {57}
$$

where $\lambda (t)$ is a weighting function. With sufficient data and model capacity, the optimal $s_{\theta}(\pmb{x}_t,t)$ equals $\nabla_{\pmb{x}_t}\log p_t(\pmb{x}_t)$ for almost all $\pmb{x}_t$ and $t$ . We now review how NCSN and DDPM fit into this framework.

NCSN and VE SDE The discretization of the VE SDE yields NCSN. The forward process is:

$$
\boldsymbol {x} _ {t} = \boldsymbol {x} _ {t - 1} + \sqrt {\sigma_ {t} ^ {2} - \sigma_ {t - 1} ^ {2}} \epsilon_ {t - 1}, \quad / / \text {discrete Markov chain}
$$

$$
d x = \sqrt {\frac {d [ \sigma^ {2} (t) ]}{d t}} d w, \quad / / \text {continuous SDE} \tag {58}
$$

Assumption: $\{\sigma_t\}, t = 1,2,\ldots ,T$ is a geometric sequence.
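As a concrete illustration of this assumption, a geometric sequence of noise levels interpolating between two endpoints can be generated as follows (our own sketch; the endpoint values $\sigma_{\min} = 0.01$ and $\sigma_{\max} = 50$ are illustrative, not taken from the paper):

```python
def geometric_sigmas(sigma_min, sigma_max, T):
    """Noise levels sigma_1, ..., sigma_T forming a geometric sequence:
    the ratio sigma_{i+1} / sigma_i is the same constant for every i."""
    r = (sigma_max / sigma_min) ** (1.0 / (T - 1))
    return [sigma_min * r ** i for i in range(T)]

sigmas = geometric_sigmas(0.01, 50.0, 10)
# Consecutive ratios are constant, which is what "geometric sequence" means here.
ratios = [sigmas[i + 1] / sigmas[i] for i in range(len(sigmas) - 1)]
```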
Then the transition kernel becomes:

$$
d x = \sigma_ {\min } \left(\frac {\sigma_ {\max }}{\sigma_ {\min }}\right) ^ {t} \sqrt {2 \log \frac {\sigma_ {\max }}{\sigma_ {\min }}} d w \tag {59}
$$

and the perturbation kernel can be obtained by

$$
p \left(\boldsymbol {x} _ {t} \mid \boldsymbol {x} _ {0}\right) = \mathcal {N} \left(\boldsymbol {x} _ {t}; \boldsymbol {x} _ {0}, \sigma_ {\min} ^ {2} \left(\frac {\sigma_ {\max}}{\sigma_ {\min}}\right) ^ {2 t} I\right) \tag {60}
$$

This always gives a process with exploding variance, so it is called the VE (variance exploding) SDE.

DDPM and VP SDE The forward process is:

$$
\boldsymbol {x} _ {t} = \sqrt {\alpha_ {t}} \boldsymbol {x} _ {t - 1} + \sqrt {1 - \alpha_ {t}} \epsilon_ {t - 1}, \quad / / \text {discrete Markov chain}
$$

$$
d x = - \frac {1}{2} \beta (t) x d t + \sqrt {\beta (t)} d w, \quad / / \text {continuous SDE} \tag {61}
$$

If we use an arithmetic sequence for $\{\beta_t\}_{t=1}^{T}$ , the transition kernel for the VP SDE is:

$$
d \boldsymbol {x} = - \frac {1}{2} \left(\beta_ {\min } + t \left(\beta_ {\max } - \beta_ {\min }\right)\right) \boldsymbol {x} d t + \sqrt {\beta_ {\min } + t \left(\beta_ {\max } - \beta_ {\min }\right)} d w \tag {62}
$$

and the perturbation kernel is:

$$
p \left(\boldsymbol {x} _ {t} \mid \boldsymbol {x} _ {0}\right) = \mathcal {N} \left(\boldsymbol {x} _ {t}; e ^ {- \frac {1}{4} t ^ {2} \left(\beta_ {\max } - \beta_ {\min }\right) - \frac {1}{2} t \beta_ {\min }} \boldsymbol {x} _ {0}, I - I e ^ {- \frac {1}{2} t ^ {2} \left(\beta_ {\max } - \beta_ {\min }\right) - t \beta_ {\min }}\right) \tag {63}
$$

The drift coefficient is $-\frac{1}{2} (\beta_{\mathrm{min}} + t(\beta_{\mathrm{max}} - \beta_{\mathrm{min}}))\boldsymbol{x}$ , and the diffusion coefficient is $\sqrt{\beta_{\mathrm{min}} + t(\beta_{\mathrm{max}} - \beta_{\mathrm{min}})}$ .

# F.
Mutual Information and Equivalent Conditional Likelihoods

Maximizing the mutual information between two variables $X, Y$ is equivalent to optimizing the following objective:

$$
\mathcal {L} = \frac {1}{2} \mathbb {E} _ {p (\boldsymbol {x}, \boldsymbol {y})} \left[ \log p (\boldsymbol {y} | \boldsymbol {x}) + \log p (\boldsymbol {x} | \boldsymbol {y}) \right]. \tag {64}
$$

Proof. First we derive a lower bound on the MI. Assuming that there exist (possibly negative) constants $a$ and $b$ such that $a \leq H(X)$ and $b \leq H(Y)$ , i.e., lower bounds on the (differential) entropies, we have:

$$
\begin{array}{l} I (X; Y) = \frac {1}{2} \bigl (H (X) + H (Y) - H (Y | X) - H (X | Y) \bigr) \\ \geq \frac {1}{2} (a + b - H (Y | X) - H (X | Y)) \tag {65} \\ = \frac {1}{2} (a + b) + \mathcal {L}, \\ \end{array}
$$

where the loss $\mathcal{L}$ is defined as:

$$
\begin{array}{l} \mathcal {L} = \frac {1}{2} (- H (Y | X) - H (X | Y)) \tag {66} \\ = \frac {1}{2} \mathbb {E} _ {p (\boldsymbol {x}, \boldsymbol {y})} \left[ \log p (\boldsymbol {x} | \boldsymbol {y}) \right] + \frac {1}{2} \mathbb {E} _ {p (\boldsymbol {x}, \boldsymbol {y})} \left[ \log p (\boldsymbol {y} | \boldsymbol {x}) \right]. \\ \end{array}
$$

End of proof.

Empirically, we use energy-based models to model the distributions. The condition on the existence of $a$ and $b$ can be understood as the requirement that the two distributions $(p_x, p_y)$ are not collapsed.

# F.1. Variational Representation Reconstruction

Variational representation reconstruction (VRR) was first introduced in GraphMVP (Liu et al., 2021a). There are two mirroring terms in Equation (64), and here we take one term for illustration. The goal of VRR is to maximize, via a variational lower bound:

$$
\mathbb {E} _ {p (\boldsymbol {x}, \boldsymbol {y})} [ \log p (\boldsymbol {y} | \boldsymbol {x}) ].
\tag {67}
$$

The objective in Equation (67) has a variational lower bound:

$$
\log p (\boldsymbol {y} | \boldsymbol {x}) \geq \mathbb {E} _ {q \left(\boldsymbol {z} _ {\boldsymbol {x}} | \boldsymbol {x}\right)} \left[ \log p (\boldsymbol {y} | \boldsymbol {z} _ {\boldsymbol {x}}) \right] - K L \left(q \left(\boldsymbol {z} _ {\boldsymbol {x}} | \boldsymbol {x}\right) \| p \left(\boldsymbol {z} _ {\boldsymbol {x}}\right)\right). \tag {68}
$$

VRR then proposes a proxy solution by doing the reconstruction in the representation space instead of the data space:

$$
\mathcal {L} _ {\mathrm {G}} = \mathcal {L} _ {\mathrm {V R R}} = \mathbb {E} _ {q (\boldsymbol {z} _ {\boldsymbol {x}} | \boldsymbol {x})} \left[ \| q _ {\boldsymbol {x}} (\boldsymbol {z} _ {\boldsymbol {x}}) - \operatorname {S G} (h _ {\boldsymbol {y}}) \| ^ {2} \right] + \beta K L (q (\boldsymbol {z} _ {\boldsymbol {x}} | \boldsymbol {x}) \| p (\boldsymbol {z} _ {\boldsymbol {x}})). \tag {69}
$$

# G. Implementation Details of MoleculeSDE

In this section, we describe the details of our proposed MoleculeSDE, including the featurization, backbone models, hyperparameters, and architectures of the score networks. The solution in Appendix F.1 to Equation (64) is indeed a conditional generative method for solving the self-supervised learning (SSL) task. Meanwhile, it is only a proxy solution, since it conducts the reconstruction in the representation space. Thus, we want to explore a more accurate and explicit estimation of the generative reconstruction in the data space (i.e., the 2D topology and 3D geometry of molecules).

# G.1. Backbone Models

For the backbone models, we stick with those of the existing pretraining works (Hu et al., 2019; Wang et al., 2022c; Liu et al., 2021a; 2023a), which better illustrates the effectiveness of our proposed methods.
We use the Graph Isomorphism Network (GIN) (Xu et al., 2018) for modeling the 2D topology and SchNet (Schütt et al., 2018) for the 3D conformation, respectively.

# G.2. Molecule Featurization

The molecule featurization is an essential factor that should be taken into consideration. A recent work (Sun et al., 2022) has empirically verified the benefit of utilizing rich atom features. We follow this strategy and employ the featurization from MoleculeNet (Wu et al., 2018) and OGB (Hu et al., 2020a). Specifically, the atom and bond featurization is listed in Table 7.

Table 7. Featurization for atoms and bonds.

| | Feature | Value |
| --- | --- | --- |
| Atom Featurization | Atom Type | [0, 118] |
| | Atom Chirality | {unspecified, unrecognized type, tetrahedral with clockwise rotation, tetrahedral: counter-clockwise rotation} |
| | Atom Degree | [0, 10] |
| | Formal Charge | [-5, 5] |
| | Number of Hydrogen | [0, 8] |
| | Number of Unpaired Electrons | [0, 4] |
| | Hybridization | {SP, SP2, SP3, SP3D, SP3D2} |
| | Is Aromatic | {False, True} |
| | Is In Ring | {False, True} |
| Bond Featurization | Bond Type | {single, double, triple, aromatic} |
| | Bond Stereotype | {none, Z variant, E variant, Cis, Trans, any} |
| | Is Conjugated | {False, True} |

We also want to highlight that such an atom featurization is only available for the topological graph, while for the 3D conformation, only the atom type information is available. The other atom information requires either the topology information (e.g., degree, number of Hydrogens) or chemical rules (e.g., chirality) to obtain, and it has not been utilized for molecular geometric modeling (Schütt et al., 2018).

# G.3. Pretraining Hyperparameters

The pretraining pipeline is shown in Figure 1 and the objective function is given in Section 4.4. Below in Table 8, we list the key hyperparameters used in MoleculeSDE.

Table 8. Hyperparameter specifications for MoleculeSDE.

| Hyperparameter | Value |
| --- | --- |
| epochs | {50, 100} |
| learning rate of 2D GNN | {1e-5, 1e-6} |
| learning rate of 3D GNN | {1e-5, 1e-6} |
| SDE option | {VE, VP} |
| masking ratio M | {0, 0.3} |
| β | [0.1, 10] |
| number of steps | {1000} |
| α1 | {0, 1} |
| α2 | {0} |
| α3 | {0} |

# G.4. SE(3)-Equivariant SDE Model: From Topology to Conformation

We list the detailed structure of the SE(3)-equivariant SDE model in Figure 3.

![](images/027ff569e96f0a59cc6af5fd9cca3262e8da979793e9d91ace5dda0a136e9054.jpg)
(a) 01.

![](images/08dd11bd0e398833901e679807595419c41fc60aab0edf2afee45f714d157e51.jpg)
(b) 02.

![](images/d19c88f935d7412e082fd288e099728e6c4053da8f6f24e03890cc4969ed735f.jpg)
(c) 03.
Figure 3. Pipeline for the SE(3)-equivariant SDE model from topology to conformation.

# G.5. SE(3)-Invariant SDE Model: From Conformation to Topology

We list the detailed structure of the SE(3)-invariant SDE model in Figure 4.

Similarly to the $2D \to 3D$ procedure, we first merge the 3D representation $\pmb{y}$ with the diffused atom feature $\pmb{X}_t$ :

$$
H _ {0} = \mathbf {MLP} (\boldsymbol {X} _ {t}) + \boldsymbol {y}.
$$

Since the noised $\pmb{E}_{t}$ becomes a dense adjacency matrix, we implement a dense GCN to model $\nabla_{\pmb{X}_t}\log p_t(\pmb{x}_t|\pmb{y})$ :

$$
S _ {\theta} ^ {\boldsymbol {X} _ {t}} (\boldsymbol {x} _ {t}) = \mathbf {MLP} (\text {Concat} \{H _ {0} | | \dots | | H _ {L} \}),
$$

where $H_{i + 1} = \mathbf{GCN}(H_i,\pmb {E}_t)$ . On the other hand, $\nabla_{\pmb{E}_t}\log p_t(\pmb {x}_t|\pmb {y})\in \mathbf{R}^{n\times n}$ is modeled by an unnormalized dot-product attention (without softmax):

$$
S _ {\theta} ^ {\boldsymbol {E} _ {t}} (\boldsymbol {x} _ {t}) = \mathbf {MLP} \left(\left\{\text {Attention} (H _ {i}) \right\} _ {0 \leq i \leq L}\right).
$$

![](images/ad8e3dd443467161aed07cb8b2a737bf152b1480275f6284c906ef4a9eb4d2dc.jpg)
(a) 01.

![](images/54613dece69805d56866cde82b5450fa86f04be3dbe5bc78c350c662161282e1.jpg)
(b) 02.

![](images/2789b53298e745f569fdd7ed9d5bde01280484d15fe5b2122af801de18fdeed6.jpg)
Figure 4. Pipeline for the SE(3)-invariant SDE model from conformation to topology.

# H.
Ablation Studies

This section provides more ablation studies to verify key concepts in molecule pretraining.

# H.1. Ablation Study on Generative SSL Pretraining

We first provide a comprehensive comparison of the effect of the generative SSL part.

Pretraining. In GraphMVP (Liu et al., 2021a), the generative SSL is variational representation reconstruction (VRR). In MoleculeSDE, the generative SSL is composed of two SDE models.

Downstream. We consider both the 2D and 3D downstream tasks. We can tell that the generative SSL (SDE) in MoleculeSDE is better than the generative SSL (VRR) in GraphMVP by a large margin on 27 out of 28 tasks.

Table 9. Ablation studies on generative SSL comparison. Results for molecular property prediction tasks (with 2D topology only). The best results are marked in bold.

| Pre-training | BBBP ↑ | Tox21 ↑ | ToxCast ↑ | Sider ↑ | ClinTox ↑ | MUV ↑ | HIV ↑ | Bace ↑ | Avg ↑ |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| VRR (GraphMVP) | 62.4±1.71 | 73.6±1.09 | 61.4±0.56 | 57.2±1.11 | 86.5±3.02 | 75.5±1.58 | 75.4±0.96 | 72.7±2.16 | 70.61 |
| SDE-VE (MoleculeSDE) | 68.8±3.53 | 76.5±0.28 | 64.9±0.14 | 59.2±0.44 | 86.1±2.15 | 77.7±2.15 | 77.0±0.66 | 79.6±0.66 | 73.73 |
| SDE-VP (MoleculeSDE) | 65.5±3.25 | 75.6±0.36 | 63.4±0.22 | 59.8±0.23 | 81.1±1.83 | 80.1±1.10 | 78.6±0.31 | 79.0±0.79 | 72.89 |

Table 10. Ablation studies on generative SSL comparison. Results on 12 quantum mechanics prediction tasks from QM9. We take 110K for training, 10K for validation, and 11K for testing. The evaluation is mean absolute error. The best results are marked in bold.

PretrainingAlpha ↓Gap ↓HOMO↓LUMO ↓Mu ↓Cv ↓G298 ↓H298 ↓R2 ↓U298 ↓U0 ↓Zpve ↓
VRR (GraphMVP)0.05844.6427.3222.500.0300.03014.9614.690.12714.3513.961.680
SDE-VE (MoleculeSDE)0.05641.8425.7921.630.0270.02911.4710.710.23311.0410.951.474
SDE-VP (MoleculeSDE)0.05642.7525.8421.520.0270.02911.9011.850.20012.0311.691.453

Table 11. Ablation studies on generative SSL comparison. Results on eight force prediction tasks from MD17. We take 1K for training, 1K for validation, and 48K to 991K molecules for testing, depending on the task. The evaluation is the mean absolute error. The best results are marked in bold.

PretrainingAspirin ↓Benzene ↓Ethanol ↓Malonaldehyde ↓Naphthalene ↓Salicylic ↓Toluene ↓Uracil ↓
VRR (GraphMVP)1.1770.3890.5330.8280.5620.8060.5280.717
SDE-VE (MoleculeSDE)1.2470.3640.4480.7350.4830.7850.4800.575
SDE-VP (MoleculeSDE)1.0870.3580.3000.8800.5170.7880.5400.675

# H.2. Ablation Study on Atom Features and Comparison with Conformation Generation Methods

As recently discussed in (Sun et al., 2022), the atom feature plays an important role in molecule modeling, especially for 2D topological modeling. We carefully consider this in our work.

Note that for the GIN in Table 2, we are using comprehensive atom and bond features, as shown in Appendix G.2. For the ablation study in Table 5, to make it a fair comparison between GIN and SchNet, we further employed merely the atom type (the same as 3D conformation modeling) and the bond type for 2D topology modeling. We name the GIN in Table 2 as "GIN with rich features" and this simplified one as "GIN with simple features", respectively. The results and comparison with conformation generation methods are shown in Table 12.

Table 12. Ablation study on the effect of rich features for GIN and comparison with SchNet on conformation generation (CG) methods.

| Model | CG Method | BBBP | Sider | ClinTox | Bace |
| --- | --- | --- | --- | --- | --- |
| GIN with rich features | - | 68.1±0.59 | 57.0±1.33 | 83.7±2.93 | 76.7±2.51 |
| GIN with simple features | - | 64.1±1.79 | 58.4±0.50 | 63.1±7.21 | 76.5±2.96 |
| SchNet | MMFF | 61.4±0.29 | 59.4±0.27 | 64.6±0.50 | 74.3±0.66 |
| SchNet | ConfGF | 62.7±1.97 | 60.1±0.87 | 64.1±2.83 | 73.2±3.53 |
| SchNet | ClofNet | 61.7±1.19 | 56.0±0.10 | 58.2±0.44 | 62.5±0.17 |
| SchNet | MoleculeSDE | 65.2±0.43 | 60.5±0.39 | 72.9±1.02 | 78.6±0.40 |

Observation 1. We can tell that using rich or simple features plays an important role in the GIN model. This can be observed when comparing the GIN in Table 2 and the SchNet in Table 5, and we summarize them in the first two rows of Table 12.

Observation 2. Additionally, we can tell that SchNet on MoleculeSDE can outperform GIN with simple features, showing that MoleculeSDE can extract more useful geometric information. Meanwhile, GIN with rich features performs better on two tasks, especially by a large margin in ClinTox. This reveals that the heuristic 2D topological information can also convey some information that is missing in MoleculeSDE.

Thus, the main message we want to deliver to the audience is that MoleculeSDE is better in terms of conformation generation, and can be combined with 2D topology modeling in future works.

# H.3. Ablation Study on The Effect of Contrastive Learning in MoleculeSDE

Table 13. Ablation studies on ${\alpha }_{1}$ in MoleculeSDE. Results for molecular property prediction tasks (with 2D topology only). The best results are marked in bold for each pair of ${\alpha }_{1} \in \{ 0,1\}$ .

α1BBBP↑Tox21 ↑ToxCast ↑Sider ↑ClinTox ↑MUV ↑HIV ↑Bace ↑Avg ↑
VE068.8±3.5376.5±0.2864.9±0.1459.2±0.4486.1±2.1577.7±2.1577.0±0.6679.6±0.6673.73
173.2±0.4876.5±0.3365.2±0.3159.6±0.8286.6±3.7379.9±0.1978.5±0.2880.4±0.9274.98
VP065.5±3.2575.6±0.3663.4±0.2259.8±0.2381.1±1.8380.1±1.1078.6±0.3179.0±0.7972.89
171.8±0.7676.8±0.3465.0±0.2660.8±0.3987.0±0.5380.9±0.3778.8±0.9279.5±2.1775.07
+ +Table 14. Ablation studies on $\alpha_{1}$ in MoleculeSDE. Results on 12 quantum mechanics prediction tasks from QM9. We take $110\mathrm{K}$ for training, $10\mathrm{K}$ for validation, and $11\mathrm{K}$ for testing. The evaluation is the mean absolute error. The best results are marked in bold for each pair of $\alpha_{1} \in \{0,1\}$ . + +
α1Alpha ↓Gap ↓HOMO↓LUMO ↓Mu ↓Cv ↓G298 ↓H298 ↓R2 ↓U298 ↓U0 ↓Zpve ↓
VE00.05641.8425.7921.630.0270.02911.4710.710.23311.0410.951.474
10.05541.8825.6221.510.0260.02912.9112.370.14212.6812.561.608
VP00.05642.7525.8421.520.0270.02911.9011.850.20012.0311.691.453
10.05441.7725.7421.410.0260.02813.0712.050.15112.5412.041.587
+ +Table 15. Ablation studies on $\alpha_{1}$ in MoleculeSDE. Results on eight force prediction tasks from MD17. We take 1K for training, 1K for validation, and 48K to 991K molecules for the test concerning different tasks. The evaluation is the mean absolute error. The best results are marked in bold for each pair of $\alpha_{1} \in \{0,1\}$ . + +
α1Aspirin ↓Benzene ↓Ethanol ↓Malonaldehyde ↓Naphthalene ↓Salicylic ↓Toluene ↓Uracil ↓
VE01.2470.3640.4480.7350.4830.7850.4800.575
11.1120.3040.2820.5200.4550.7250.5150.447
VP01.0870.3580.3000.8800.5170.7880.5400.675
11.2440.3150.3380.4880.4320.7120.4780.468

# H.4. PaiNN as Backbone

Table 16. Results on 12 quantum mechanics prediction tasks from QM9, and the backbone model is PaiNN. We take 110K for training, 10K for validation, and 11K for testing. The evaluation is mean absolute error, and the best and the second-best results are highlighted.

Pretrainingα↓∇E↓EHOMO↓ELUMO↓μ↓Cv↓G↓H↓R²↓U↓U0↓ZPVE↓
-0.04942.7324.4620.160.0160.0258.437.880.1698.187.631.419
Distance Prediction0.04937.2322.7518.260.0140.0309.319.350.1439.859.071.566
3D InfoGraph0.04744.2524.0618.540.0150.0528.817.970.1438.688.081.416
GeoSSL-RR0.04641.2023.9319.360.0160.0258.328.170.1747.998.201.438
GeoSSL-InfoNCE0.04539.2923.2318.400.0150.0248.348.370.1277.458.341.356
GeoSSL-EBM-NCE0.04538.8722.7117.890.0140.0828.287.350.1307.857.681.338
3D InfoMax0.04636.9721.3117.690.0140.0248.387.360.1358.607.991.453
GraphMVP0.04436.0320.7117.020.0140.0248.317.360.1327.577.341.337
GeoSSL-DDM-1L0.04536.1320.5917.260.0140.0249.458.430.1288.888.161.380
GeoSSL-DDM0.04335.5520.5716.950.0140.0248.257.420.1277.367.341.334
Uni-Mol0.27740.5621.2523.990.0140.0399.169.140.3409.318.591.433
MoleculeSDE (VE)0.04434.6720.1417.050.0130.0237.647.050.1396.886.791.273
MoleculeSDE (VP)0.04235.0920.1416.780.0130.0238.177.010.1337.307.051.315

Table 17. Results on eight force prediction tasks from MD17, and the backbone model is PaiNN. We take 1K for training, 1K for validation, and 48K to 991K molecules for testing, depending on the task. The evaluation is mean absolute error, and the best results are marked in bold.

PretrainingAspirin ↓Benzene ↓Ethanol ↓Malonaldehyde ↓Naphthalene ↓Salicylic ↓Toluene ↓Uracil ↓
-0.5720.0530.2300.3380.1320.2880.1410.201
Distance Prediction0.4800.0530.2000.2960.1310.2650.1710.168
3D InfoGraph0.5540.0670.2490.3530.1770.3310.1790.213
GeoSSL-RR0.5590.0510.2620.3680.1460.3030.1540.202
GeoSSL-InfoNCE0.4280.0510.1970.3370.1270.2470.1360.169
GeoSSL-EBM-NCE0.4350.0480.1980.2950.1430.2450.1320.172
3D InfoMax0.4790.0520.2200.3440.1380.2670.1550.174
GraphMVP0.4650.0500.2050.3160.1190.2420.1360.168
GeoSSL-DDM-1L0.4360.0480.2090.3200.1190.2490.1320.177
GeoSSL-DDM0.4270.0470.1880.3130.1200.2400.1290.167
Uni-Mol0.4870.0480.2170.3290.1510.2990.1410.182
MoleculeSDE (VE)0.4210.0430.1950.2840.1050.2360.1230.158
MoleculeSDE (VP)0.4430.0450.1910.3010.1310.2610.1400.159

# H.5. Experiments on Conformation Generation

Notice that MoleculeSDE also supports the 2D to 3D conformation generation task, and we compare with the following state-of-the-art models in Table 18.

Table 18. Results on conformation generation without FF optimization. The datasets are GEOM QM9 and GEOM Drugs.

MethodsGEOM QM9GEOM Drugs
COV (%) ↑MAT (Å) ↓COV (%) ↑MAT (Å) ↓
MeanMedianMeanMedianMeanMedianMeanMedian
RDKit83.2690.780.34470.293560.9165.701.20261.1252
ConfGF (Shi et al., 2021)88.4994.130.26730.268562.1570.931.16291.1596
DMGC (Zhu et al., 2022)96.2399.260.20830.201496.52100.000.72200.7161
RMCF-R (Wang et al., 2022b)----82.2590.770.8390.789
RMCF-C (Wang et al., 2022b)----87.1296.260.7490.709
MoleculeSDE (ours)92.3797.210.24230.235685.4299.490.94850.9041

# I. Computational Cost on Pretraining

Table 19. Computational time of 17 pretraining algorithms. All jobs run on a single V100 GPU card. The four models in the first block are 2D SSL, the nine models in the second block are 3D SSL, and the four models in the last block are 2D-3D SSL.

Pretraining Algorithmmin / epoch
AttrMask5.5 min/epoch
ContextPred14 min/epoch
InfoGraph6 min/epoch
MolCLR10 min/epoch
Type Prediction7.75 min/epoch
Distance Prediction6.7 min/epoch
Angle Prediction8 min/epoch
3D InfoGraph7.5 min/epoch
RR9.7 min/epoch
InfoNCE10 min/epoch
EBM-NCE10.8 min/epoch
GeoSSL-1L11.2 min/epoch
GeoSSL18 min/epoch
3D InfoMax8.6 min/epoch
GraphMVP11 min/epoch
MoleculeSDE (VE)30 min/epoch
MoleculeSDE (VP)30 min/epoch
\ No newline at end of file diff --git a/agroupsymmetricstochasticdifferentialequationmodelformoleculemultimodalpretraining/images.zip b/agroupsymmetricstochasticdifferentialequationmodelformoleculemultimodalpretraining/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..bb535c92cf8c16d40deafe79e268f5c890913a8a --- /dev/null +++ b/agroupsymmetricstochasticdifferentialequationmodelformoleculemultimodalpretraining/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:865ca930b1bad2c004a2a908dd829574412e49d098bfcfae61fc120e7fc3a07c +size 2056738 diff --git a/agroupsymmetricstochasticdifferentialequationmodelformoleculemultimodalpretraining/layout.json b/agroupsymmetricstochasticdifferentialequationmodelformoleculemultimodalpretraining/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..ba3c8c3df7895874e9a90974130da47bad59e6ab --- /dev/null +++ b/agroupsymmetricstochasticdifferentialequationmodelformoleculemultimodalpretraining/layout.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:45e69878c20f48490d5e7725e2563db4755574c4988c8c560147de423003178c +size 1039377 diff --git a/ahybridquantumclassicalapproachbasedonthehadamardtransformfortheconvolutionallayer/169451f3-17f3-425a-9688-e3603a81f289_content_list.json b/ahybridquantumclassicalapproachbasedonthehadamardtransformfortheconvolutionallayer/169451f3-17f3-425a-9688-e3603a81f289_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..71464e7655f232c848e33edb47a91c40c037ec36 --- /dev/null +++ b/ahybridquantumclassicalapproachbasedonthehadamardtransformfortheconvolutionallayer/169451f3-17f3-425a-9688-e3603a81f289_content_list.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:0c501f042496a38cd1675d9b7e95c410f71b4e91666bd2972185efef6094d80c +size 96339 diff --git 
a/ahybridquantumclassicalapproachbasedonthehadamardtransformfortheconvolutionallayer/169451f3-17f3-425a-9688-e3603a81f289_model.json b/ahybridquantumclassicalapproachbasedonthehadamardtransformfortheconvolutionallayer/169451f3-17f3-425a-9688-e3603a81f289_model.json new file mode 100644 index 0000000000000000000000000000000000000000..9489ead47371289edd115de1b1fef190ee3e574c --- /dev/null +++ b/ahybridquantumclassicalapproachbasedonthehadamardtransformfortheconvolutionallayer/169451f3-17f3-425a-9688-e3603a81f289_model.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:aebb8af8f61ad53a67e7677d6d14d03c9e3e40ab1f2b52061f97a1733209917c +size 116799 diff --git a/ahybridquantumclassicalapproachbasedonthehadamardtransformfortheconvolutionallayer/169451f3-17f3-425a-9688-e3603a81f289_origin.pdf b/ahybridquantumclassicalapproachbasedonthehadamardtransformfortheconvolutionallayer/169451f3-17f3-425a-9688-e3603a81f289_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..20c14c54a991237208e604e1ca4a740835b99b6e --- /dev/null +++ b/ahybridquantumclassicalapproachbasedonthehadamardtransformfortheconvolutionallayer/169451f3-17f3-425a-9688-e3603a81f289_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:9a2c8e4d8ca40c92e6368e79a75202e8f981a4c90aed61505e64f65672ac1379 +size 1006687 diff --git a/ahybridquantumclassicalapproachbasedonthehadamardtransformfortheconvolutionallayer/full.md b/ahybridquantumclassicalapproachbasedonthehadamardtransformfortheconvolutionallayer/full.md new file mode 100644 index 0000000000000000000000000000000000000000..936c875ae579c3c30c68905577d24160ae50a0eb --- /dev/null +++ b/ahybridquantumclassicalapproachbasedonthehadamardtransformfortheconvolutionallayer/full.md @@ -0,0 +1,426 @@ +# A Hybrid Quantum-Classical Approach based on the Hadamard Transform for the Convolutional Layer + +Hongyi Pan $^{1}$ Xin Zhu $^{1}$ Salih Atici $^{1}$ Ahmet Enis Cetin $^{1}$ + +# Abstract + +In 
this paper, we propose a novel Hadamard Transform (HT)-based neural network layer for hybrid quantum-classical computing. It implements the regular convolutional layers in the Hadamard transform domain. The idea is based on the HT convolution theorem, which states that the dyadic convolution between two vectors is equivalent to the element-wise multiplication of their HT representations. Computing the HT is simply the application of a Hadamard gate to each qubit individually, so the HT computations of our proposed layer can be implemented on a quantum computer. Compared to the regular Conv2D layer, the proposed HT-perceptron layer is computationally more efficient. Compared to a CNN with the same number of trainable parameters and $99.26\%$ test accuracy, our HT network reaches $99.31\%$ test accuracy with $57.1\%$ fewer MACs on the MNIST dataset; and in our ImageNet-1K experiments, our HT-based ResNet-50 exceeds the accuracy of the baseline ResNet-50 by $0.59\%$ center-crop top-1 accuracy using $11.5\%$ fewer parameters with $12.6\%$ fewer MACs.

$^{1}$ Department of Electrical and Computer Engineering, University of Illinois Chicago, Chicago, IL, USA. Correspondence to: Hongyi Pan , Ahmet Enis Cetin .

Proceedings of the $40^{th}$ International Conference on Machine Learning, Honolulu, Hawaii, USA. PMLR 202, 2023. Copyright 2023 by the author(s).

# 1. Introduction

Recently, with the rapid progress in quantum computing hardware, implementing deep neural networks on a quantum computer such as the IBM Quantum System (IBM-Q) has become feasible for researchers (Matsuo et al., 2019; Koppenhöfer et al., 2020). On the microscopic scale, a quantum is indivisible. All well-known microscopic particles such as electrons and photons are manifestations of the quantum. Classical computers use 0 and 1 to store and process data, while quantum computers' basic computing unit, which is called the qubit, can be both 0 and 1 at the same time.
This allows the co-existence of "superposition states" and thus provides more powerful parallel capabilities. Therefore, a quantum computer could perform certain calculations exponentially faster than any modern classical computer.

However, quantum neural networks (QNNs) use as many qubits as the size of the input, which makes them unlikely to be implemented on current quantum computers to solve real-world problems. Therefore, researchers investigate hybrid quantum-classical neural networks instead of pure quantum-domain computing.

The Fourier convolution theorem states that the convolution of two vectors in the one-dimensional (1D) time domain or the two-dimensional (2D) spatial domain is equivalent to the element-wise multiplication in the Fourier domain. However, the Discrete Fourier Transform (DFT) is a complex-valued transform, and it is difficult to implement using quantum computing. On the other hand, the Hadamard Transform (HT) is a binary transform and obeys a similar convolution theorem. The Hadamard convolution theorem, which will be reviewed in Section 2.2, inspires us to design a convolutional layer for deep neural networks that can be implemented in a hybrid quantum-classical manner. The HT of a vector can be computed in $O(1)$ time using the quantum Hadamard gates. In this paper, we use a hybrid quantum-classical method that takes $O(N)$ time to compute the HT of an $N$-length vector, while the classical fast HT algorithm takes $O(N\log_2N)$ time.

In this work, we propose a hybrid quantum-classical neural network layer based on the HT, and the proposed layer can be used to replace the classical Conv2D layer. It reduces the computational cost significantly while producing comparable or better accuracy results than the baseline classical convolutional model. Our proposed layer is trained like classical neural network layers using the back-propagation algorithm, as it can also be implemented in a purely classical manner.
Related works include the following:

**WHT-Based Neural Networks** The Walsh-Hadamard Transform (WHT) is the permuted version of the HT. In (Zhao et al., 2021), the authors use the WHT to assist their "ZerO Initialization" method. They apply the WHT in the skip connections, but they do not use the Hadamard convolution theorem, and they do not replace the convolutional layers with multiplicative layers in the Hadamard domain. The method described in (Zhao et al., 2021) does not reduce the number of parameters or the computational cost; their goal is to improve the accuracy of their networks. Other WHT-based neural networks, including (Deveci et al., 2018; Pan et al., 2021; Park & Lee, 2022; Pan et al., 2022a), are designed to reduce the computational cost. In (Deveci et al., 2018), a binary neural network with a two-stream structure is proposed, where one input is the regular image and the other is the WHT of the image; convolutions are not implemented in the transform domain, as the WHT is only applied at the beginning. In (Park & Lee, 2022), the authors do not take advantage of the Hadamard convolution theorem: they compute the WHT or related binary transforms and apply the convolution in the transform domain. The WHT-based layers in (Pan et al., 2021; 2022a) also take the element-wise multiplication in the WHT domain as in this work, but they do not extract any channel-wise features. They only extract features width-wise and height-wise using a scaling layer similar to the one in this work. In contrast, we have channel-wise processing layers in the Hadamard domain to extract the channel-wise features, and we introduce the quantum-computer-based implementation of the network.

**Trainable Soft-Thresholding** The soft-thresholding function is commonly used in wavelet transform domain denoising (Donoho, 1995) and as a proximal operator for $\ell_1$-norm-based optimization problems (Karakus et al., 2020).
With trainable threshold parameters, soft-thresholding and its variants can be employed to remove the noise in transform-domain-based networks (Badawi et al., 2021; Pan et al., 2021; 2022a;b; 2023).

**QNN, QCNN, and QCCNN** Quantum neural networks (QNNs) based on the principles of quantum mechanics were proposed in (Kak, 1995), and quantum convolutional neural networks (QCNNs) were proposed in (Cong et al., 2019). The training method for QNNs was proposed in (Ricks & Ventura, 2003). A QCNN uses as many qubits as the size of the input, which makes it unlikely to be implemented on current quantum computers to solve real-world problems. To combine classical neural networks with the advantages of quantum information and develop more efficient algorithms, hybrid quantum-classical convolutional neural networks (QCCNNs) were proposed in (Liu et al., 2021). To create a QCCNN, the hidden layers can be implemented using parameterized quantum circuits, where the rotation angles for each quantum gate are specified by the components of a classical input vector. The outputs from the previous layer are collected and used as the inputs for the parameterized circuit, and the outputs of the quantum circuits can be obtained from the measurement statistics.

**Hybrid Quantum-Classical WHT** Hybrid quantum-classical algorithms for the Walsh-Hadamard Transform (WHT) are proposed in (Shukla & Vedula, 2023a;b), where the hybrid WHT is used to obtain the WHT of digital images in the classical domain while using the quantum Hadamard gate for computational efficiency. The other procedures are still operated using classical methods. In (Shukla & Vedula, 2023a), the authors propose an image denoising method that simply sets the high-frequency components in the transform domain to 0. This change is implemented in a classical manner.
Then, they apply the hybrid inverse WHT to obtain denoised images, which are essentially low-pass filtered versions of the original images. In (Shukla & Vedula, 2023b), the authors use the hybrid WHT to solve differential equations. The above two papers are not related to neural networks.

**Other Transform-Based Neural Networks** The Discrete Fourier Transform (DFT) and the Discrete Cosine Transform (DCT) are two of the most important signal and image processing tools. One-dimensional (1D) and two-dimensional (2D) convolutions can be implemented using element-wise multiplications in the 1D and 2D transform domains, respectively. DFT-based neural networks include (Chi et al., 2020; Mohammad & Almekkawy, 2021), and DCT-based neural networks include (Gueguen et al., 2018; dos Santos et al., 2020; dos Santos & Almeida, 2021; Xu & Nakayama, 2021; Ulicny et al., 2022). However, the DFT is a complex transform, and the Quantum Fourier Transform (QFT) is itself implemented using Hadamard gates (Weinstein et al., 2001). On the other hand, the HT is implemented using only additions and subtractions, and it can be implemented very efficiently on quantum computers. Therefore, the HT is more efficient than the other transforms.

# 2. Methodology

# 2.1. Background: Hybrid quantum-classical approach for Hadamard Transform

The HT is a member of the family of generalized Fourier transforms. It can be considered as being constructed from size-2 DFTs. As a result, the transform matrix only contains $\pm 1$ entries instead of the complex exponential weights of the DFT. Let $\mathbf{x} = [x_0\ x_1\ \ldots\ x_{N-1}]^T$ be a vector with $N = 2^M$ components for $M \in \mathbb{N}$; its HT vector $\mathbf{X} = [X_0\ X_1\ \ldots\ X_{N-1}]^T$ is computed as

$$
X _ {k} = \sqrt {\frac {1}{N}} \sum_ {m = 0} ^ {N - 1} x _ {m} (- 1) ^ {\sum_ {i = 0} ^ {M - 1} k _ {i} m _ {i}}, \tag {1}
$$

where $k_{i}$ and $m_{i}$ are the $i$-th bits in the binary representations of $k$ and $m$, respectively.
The HT can be computed as the matrix product between the Hadamard matrix $\mathbf{H}_N$ and $\mathbf{x}$:

$$
\mathbf {X} = \mathcal {H} (\mathbf {x}) = \sqrt {\frac {1}{N}} \mathbf {H} _ {N} \mathbf {x}. \tag {2}
$$

The Hadamard matrix $\mathbf{H}_N$ is constructed as

$$
\mathbf {H} _ {N} = \left\{ \begin{array}{l l} 1, & N = 1, \\ \left[ \begin{array}{c c} 1 & 1 \\ 1 & - 1 \end{array} \right], & N = 2, \\ \left[ \begin{array}{c c} \mathbf {H} _ {\frac {N}{2}} & \mathbf {H} _ {\frac {N}{2}} \\ \mathbf {H} _ {\frac {N}{2}} & - \mathbf {H} _ {\frac {N}{2}} \end{array} \right], & N \geq 4. \end{array} \right. \tag {3}
$$

Alternatively, for $N \geq 4$, $N = 2^M$, $\mathbf{H}_N$ can also be computed using the Kronecker product $\otimes$:

$$
\mathbf {H} _ {N} = \mathbf {H} _ {2} \otimes \mathbf {H} _ {\frac {N}{2}} = \mathbf {H} _ {2} ^ {\otimes M}. \tag {4}
$$

The Walsh matrix $\mathbf{W}_N$ of the Walsh-Hadamard Transform (WHT) is the sequency-ordered version of the Hadamard matrix $\mathbf{H}_N$ (Walsh, 1923; Yuen, 1972). The sequency ordering can be derived from the natural (Hadamard) ordering by first applying the bit-reversal permutation and then the Gray-code permutation. With the sequency ordering, the rows are arranged in increasing order of the number of sign changes; the more sign changes a row has, the higher the frequency component it extracts. Therefore, the Walsh matrix is more commonly used. In this work, however, we apply element-wise multiplication in the Hadamard domain, so the permutation is unnecessary and we use the HT instead of the WHT to simplify the implementation.
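The recursive construction in Eq. (3), equivalently the Kronecker form in Eq. (4), and the normalized transform of Eq. (2) can be sketched in a few lines of NumPy (a minimal illustration; the function name `hadamard` is ours):

```python
import numpy as np

def hadamard(N: int) -> np.ndarray:
    """Hadamard matrix H_N via the Kronecker rule H_N = H_2 (x) H_{N/2}."""
    assert N >= 1 and (N & (N - 1)) == 0, "N must be a power of 2"
    H = np.array([[1.0]])
    H2 = np.array([[1.0, 1.0], [1.0, -1.0]])
    while H.shape[0] < N:
        H = np.kron(H2, H)
    return H

H8 = hadamard(8)
assert np.allclose(H8, H8.T)                  # H_N is symmetric
assert np.allclose(H8 @ H8.T, 8 * np.eye(8))  # and orthogonal up to a factor of N

# Normalized HT of Eq. (2): X = sqrt(1/N) H_N x
x = np.arange(8, dtype=float)
X = H8 @ x / np.sqrt(8)
```

Because $\mathbf{H}_N$ is symmetric and $\mathbf{H}_N\mathbf{H}_N^T = N\mathbf{I}_N$, applying the normalized transform twice recovers the input, which is the involution used throughout this section.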
The Hadamard matrix $\mathbf{H}_N$ is orthogonal and symmetric, as

$$
\mathbf {H} _ {N} \mathbf {H} _ {N} ^ {T} = N \mathbf {I} _ {N}, \quad \mathbf {H} _ {N} = \mathbf {H} _ {N} ^ {T}, \tag {5}
$$

so the Inverse Hadamard Transform (IHT) can be implemented in the same manner as the forward HT:

$$
\mathbf {x} = \mathcal {H} ^ {- 1} (\mathbf {X}) = \sqrt {\frac {1}{N}} \mathbf {H} _ {N} \mathbf {X} = \mathcal {H} (\mathbf {X}). \tag {6}
$$

In practice, we can combine the two $\sqrt{\frac{1}{N}}$ normalization terms from the HT and the IHT into one $\frac{1}{N}$ factor to avoid the square-root operation.

Similar to the Fast Fourier Transform (FFT), the HT can be implemented in a fast manner using butterfly structures, as in Eq. (1) of (Fino & Algazi, 1976). In this method, the Fast Hadamard Transform (FHT) has a time complexity of $O(N \log_2 N)$ in the classical domain.

On the other hand, $\mathbf{H} = \sqrt{\frac{1}{2}}\mathbf{H}_2 = \sqrt{\frac{1}{2}}\left[ \begin{array}{cc}1 & 1\\ 1 & -1 \end{array} \right]$ is the transformation matrix of the quantum Hadamard gate in the computational basis. Therefore, the Hadamard transform can be computed in $O(1)$ time in the quantum domain, as it is a quantum logic gate that can be parallelized. Let $\bar{\mathbf{x}} = [\bar{x}_0\ \bar{x}_1\ \dots\ \bar{x}_{N - 1}]^T$ be the normalized version of $\mathbf{x}$, i.e., $\bar{\mathbf{x}} = \frac{1}{\|\mathbf{x}\|}\mathbf{x}$; the quantum implementation of the HT of $\bar{\mathbf{x}}$ involves preparing the initial state $\sum_{k = 0}^{N - 1}\bar{x}_k|k\rangle$ and then applying the quantum Hadamard gates $\mathbf{H}^{\otimes M}$ on it.
It can be verified that

$$
\begin{array}{l} \mathbf {H} ^ {\otimes M} \left[ \sum_ {k = 0} ^ {N - 1} \bar {x} _ {k} | k \rangle \right] = \sqrt {\frac {1}{N}} \sum_ {k = 0} ^ {N - 1} \sum_ {m = 0} ^ {N - 1} \bar {x} _ {m} (- 1) ^ {\sum_ {i = 0} ^ {M - 1} k _ {i} m _ {i}} | k \rangle \\ = \sum_ {k = 0} ^ {N - 1} \bar {X} _ {k} | k \rangle , \tag {7} \\ \end{array}
$$

where $\bar{\mathbf{X}}$ is the HT of $\bar{\mathbf{x}}$ and $N = 2^{M}$. Although the quantum approach for the HT of an $N$-length vector has a computational cost of $O(1)$, the difficulty lies in the measurement: one can only obtain the squares of the amplitudes of the HT values by carrying out the measurement. The HT of $\bar{\mathbf{x}}$ has both positive and negative values, but the sign information is lost during the amplitude measurement. To obtain the correct sign information for the HT, we adopt Algorithm 1 of (Shukla & Vedula, 2023b), which is based on Lemma 2.1 and increases the computational cost to $O(N)$.

Lemma 2.1. Let $\mathbf{x} = [x_0\ x_1\ \ldots\ x_{N-1}]^T$ and $\mathbf{X} = \mathcal{H}(\mathbf{x}) = [X_0\ X_1\ \ldots\ X_{N-1}]^T$. If $x_0 > \sum_{k=1}^{N-1} |x_k|$, then $X_k > 0$ for $k = 0, 1, \ldots, N-1$.

The proof of Lemma 2.1 is presented in Appendix A. We can change $x_0$ to a large number to satisfy the condition in Lemma 2.1. The HT of the revised vector then has all positive entries, which can be computed in the quantum domain efficiently. After that, we revise the HT results based on the change in $x_0$ to get the correct $\mathbf{X} = \mathcal{H}(\mathbf{x})$.

We use Algorithm 1 of (Shukla & Vedula, 2023b) to implement the HT in our deep neural network; (Shukla & Vedula, 2023b) uses the WHT in Algorithm 1, whereas we use the HT. In summary, we first change the first element $x_0$ of the vector to a large number $b$ to construct the vector $\tilde{\mathbf{x}}$, then we normalize it by $c = \|\tilde{\mathbf{x}}\|$ as $\bar{\mathbf{x}}$.
We apply such a normalization because the norm of the input vector of a quantum state must be 1. After that, we prepare the state $|\Psi\rangle = \sum_{k=0}^{N-1} \bar{x}_k |k\rangle$ and apply the quantum Hadamard gates $\mathbf{H}^{\otimes M}$ on $|\Psi\rangle$. Then we measure all $M$ qubits to compute the probability $p_k$ of obtaining the state $|k\rangle$ for $k = 0$ to $N-1$. These $p_k$ are the scaled squared amplitudes of the HT of $\tilde{\mathbf{x}}$, so we scale $\sqrt{p_k}$ by the normalization factor $c$. Finally, we subtract $\delta = \sqrt{\frac{1}{N}} (b - x_0)$ from the results to obtain the HT of $\mathbf{x}$.

Algorithm 1 The hybrid quantum-classical HT algorithm.

Input: The input vector $\mathbf{x} = [x_0\ x_1\ \ldots\ x_{N-1}]^T \in \mathbb{R}^N$, where $N = 2^M$, $M \in \mathbb{N}$.

Output: The output vector $\mathbf{X} = [X_0\ X_1\ \dots\ X_{N-1}]^T$, which is the HT of $\mathbf{x}$.

$b = \epsilon + \sum_{k = 0}^{N - 1}|x_k|$, where $\epsilon$ is any positive number;

$\tilde{\mathbf{x}} = [b\ x_1\ x_2\ \dots\ x_{N-1}]^T$;

$c = \|\tilde{\mathbf{x}}\| = \sqrt{b^2 + \sum_{k=1}^{N-1} x_k^2}$;

$\bar{\mathbf{x}} = [\bar{x}_0\ \bar{x}_1\ \dots\ \bar{x}_{N-1}]^T = \tilde{\mathbf{x}} / c$;

Prepare the state $|\Psi \rangle = \sum_{k=0}^{N-1} \bar{x}_k |k\rangle$ using $M$ qubits;

Apply $\mathbf{H}^{\otimes M}$ on $|\Psi \rangle$;

Measure all $M$ qubits to compute the probability $p_k$ of obtaining the state $|k\rangle$ for $k = 0$ to $N - 1$;

$\delta = \sqrt{\frac{1}{N}} (b - x_0)$;

$\mathbf{X} = [c \sqrt{p_0} - \delta,\ c \sqrt{p_1} - \delta,\ \dots,\ c \sqrt{p_{N-1}} - \delta]^T$.

In this work, we design the proposed neural network layer based on the Two-Dimensional Hadamard transform (HT2D).
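A purely classical NumPy simulation of Algorithm 1 — with the measurement statistics $p_k$ replaced by the exact squared amplitudes they estimate — can be sketched as follows (helper names are ours):

```python
import numpy as np

def hadamard(N):
    """Hadamard matrix H_N built by the Kronecker rule."""
    H = np.array([[1.0]])
    H2 = np.array([[1.0, 1.0], [1.0, -1.0]])
    while H.shape[0] < N:
        H = np.kron(H2, H)
    return H

def ht(x):
    """Normalized HT of Eq. (2)."""
    N = len(x)
    return hadamard(N) @ x / np.sqrt(N)

def hybrid_ht_sim(x, eps=1.0):
    """Classical simulation of Algorithm 1."""
    x = np.asarray(x, dtype=float)
    N = len(x)
    b = eps + np.sum(np.abs(x))          # enforce the Lemma 2.1 condition
    x_tilde = x.copy(); x_tilde[0] = b   # replace x_0 by b
    c = np.linalg.norm(x_tilde)          # quantum-state amplitudes must have norm 1
    x_bar = x_tilde / c
    amps = ht(x_bar)                     # H^{(x)M}|Psi>; all entries positive by Lemma 2.1
    p = amps ** 2                        # would come from the qubit measurements
    delta = (b - x[0]) / np.sqrt(N)      # undo the change on x_0
    return c * np.sqrt(p) - delta

x = np.array([3.0, -1.0, 2.0, 0.5])
assert np.allclose(hybrid_ht_sim(x), ht(x))
```

On a real device, `p` would be estimated from repeated measurements of the $M$ qubits, which is where the $O(N)$ cost of the hybrid approach arises.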
Let $\mathbf{x} = \begin{bmatrix} x_{0,0} & \dots & x_{0,N - 1}\\ \vdots & \ddots & \vdots \\ x_{N - 1,0} & \dots & x_{N - 1,N - 1} \end{bmatrix}$ + +$N = 2^{M}$ $M\in \mathbb{N}$ The HT expression $\mathbf{X} =$ + +$$ +\left[ \begin{array}{c c c} X _ {0, 0} & \dots & X _ {0, N - 1} \\ \vdots & \ddots & \vdots \\ X _ {N - 1, 0} & \dots & X _ {N - 1, N - 1} \end{array} \right] \text {i s} +$$ + +$$ +\begin{array}{l} X _ {k, l} = \frac {1}{N} \sum_ {m = 0} ^ {N - 1} \sum_ {n = 0} ^ {N - 1} x _ {m, n} (- 1) ^ {\sum_ {i = 0} ^ {M - 1} k _ {i} m _ {i} + l _ {i} n _ {i}} \\ = \frac {1}{N} \sum_ {m = 0} ^ {N - 1} \left(\sum_ {n = 0} ^ {N - 1} x _ {m, n} (- 1) ^ {\sum_ {i = 0} ^ {M - 1} l _ {i} n _ {i}}\right) (- 1) ^ {\sum_ {i = 0} ^ {M - 1} k _ {i} m _ {i}}, \tag {8} \\ \end{array} +$$ + +where, $k_{i}, l_{i}, m_{i}$ and $n_{i}$ are the $i$ -bits in the binary representations of $k, l, m$ and $n$ . Therefore, the HT2D can be obtained from the one-dimensional Hadamard transform (HT1D) in a separable manner for computational efficiency (Vetterli, 1985), as Algorithm 2 presents. The complexity of HT2D on an $N \times N$ image is $O(N^{2})$ (where $N$ is an integer power of 2) using Algorithm 2. On the other hand, the complexity of the classical two-dimension Fast Hadamard transform (FHT2D) is $O(N^{2}\log_{2}N)$ . + +# 2.2. Hadamard Convolution Theorem + +Fourier convolution theorem states that an input feature map $\mathbf{x} \in \mathbb{R}^N$ and a kernel $\mathbf{a} \in \mathbb{R}^K$ can be convolved in the Fourier domain as follows: + +$$ +\mathbf {a} * _ {c} \mathbf {x} = \mathcal {F} ^ {- 1} (\mathbf {A} \circ \mathbf {X})) , \tag {9} +$$ + +where, $\mathcal{F}(\cdot)$ stands for DFT and $\mathcal{F}^{-1}(\cdot)$ stands for IDFT. $\mathbf{A}) = \mathcal{F}(\mathbf{a}),\mathbf{X} = \mathcal{F}(\mathbf{x}).$ $\ast_{c}$ is the circular convolution + +Algorithm 2 The two-dimensional HT algorithm. 
$\mathbf{X}[i]$ and $\mathbf{X}^T [i]$ represent the $i$-th column and the $i$-th row of $\mathbf{X}$, respectively.

Input: The input matrix $\mathbf{x} \in \mathbb{R}^{N_1 \times N_2}$.

Output: The output matrix $\mathbf{X} \in \mathbb{R}^{N_1 \times N_2}$, which is the HT of $\mathbf{x}$.

for $i = 0$ to $N_{2} - 1$ do

$\mathbf{X}[i] = \mathcal{H}(\mathbf{x}[i])$ using Algorithm 1;

end for

for $i = 0$ to $N_{1} - 1$ do

$\mathbf{X}^T [i] = \mathcal{H}(\mathbf{X}^T [i])$ using Algorithm 1;

end for

operator, and $\circ$ represents the element-wise multiplication. Similar to the Fourier convolution theorem, the Hadamard convolution theorem holds as Theorem 2.2, which states that the dyadic convolution in the time domain is equivalent to the element-wise multiplication in the Hadamard domain. Theorem 2.2 inspires us to design the HT-based layer, discussed in the following section, to replace the convolutional layers in convolutional neural networks (CNNs).

Theorem 2.2 (Hadamard convolution theorem). Let $M \in \mathbb{N}$, $N = 2^M$, and $\mathbf{a}, \mathbf{x} \in \mathbb{R}^N$. The convolution $\mathbf{y} = \mathbf{a} * _d \mathbf{x} \Longleftrightarrow$ the element-wise multiplication $Y_k = A_k X_k$ for $k = 0, 1, \ldots, N-1$, where $\mathbf{Y} = \mathcal{H}(\mathbf{y}) = [Y_0\ Y_1\ \ldots\ Y_{N-1}]^T$, $\mathbf{A} = \mathcal{H}(\mathbf{a}) = [A_0\ A_1\ \ldots\ A_{N-1}]^T$, and $\mathbf{X} = \mathcal{H}(\mathbf{x}) = [X_0\ X_1\ \ldots\ X_{N-1}]^T$.

The proof of Theorem 2.2 is presented in Appendix B (Gulamhusein, 1973; Usakova et al., 2002a;b; Gajic & Stankovic, 2011). Although the dyadic convolution $*_{d}$ is not the same as the circular convolution, we will use the HT for convolutional filtering in neural networks. This is because the HT is also related to the block Haar wavelet packet transform (Cetin et al., 1993), and each Hadamard coefficient approximately represents a frequency band.
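Theorem 2.2 can be checked numerically. Under the $\sqrt{1/N}$ normalization of Eq. (2), multiplying in the Hadamard domain corresponds to a dyadic (XOR-indexed) convolution that itself carries a $1/\sqrt{N}$ factor (a small NumPy check; helper names and the normalization convention are ours):

```python
import numpy as np

def hadamard(N):
    H = np.array([[1.0]])
    H2 = np.array([[1.0, 1.0], [1.0, -1.0]])
    while H.shape[0] < N:
        H = np.kron(H2, H)
    return H

def ht(x):
    """Normalized HT; it is its own inverse (Eq. (6))."""
    return hadamard(len(x)) @ x / np.sqrt(len(x))

def dyadic_conv(a, x):
    """(a *_d x)_k = 1/sqrt(N) * sum_m a_m x_{k XOR m}."""
    N = len(a)
    y = np.zeros(N)
    for k in range(N):
        for m in range(N):
            y[k] += a[m] * x[k ^ m]
    return y / np.sqrt(N)

rng = np.random.default_rng(0)
a, x = rng.standard_normal(8), rng.standard_normal(8)
# Hadamard convolution theorem: Y_k = A_k X_k
y = ht(ht(a) * ht(x))  # the IHT is the same operation as the HT
assert np.allclose(y, dyadic_conv(a, x))
```

The XOR index pattern replaces the modular shifts of the circular convolution, which is exactly the distinction between $*_d$ and $*_c$ noted above.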
As a result, applying weights onto the frequency bands and computing the inverse HT is an approximate way of frequency-domain filtering, similar to Fourier-transform-based convolutional filtering.

# 2.3. HT-Perceptron Layer

The proposed HT-perceptron layer is presented in Figure 1 and Algorithm 3. The HT-perceptron layers are combined to construct deep "convolutional" neural networks, as shown in Tables 8 and 9 in Appendix C. They replace the Conv2D layers in a typical CNN; therefore, we apply an HT2D along the width and height of the tensor. Similar to a Conv2D layer, which contains multiple kernels, our structure has multiple parallel paths. In each path, we first apply element-wise multiplications on the tensor with a $W$ by $H$ trainable matrix $\mathbf{A}$. This operation is equivalent to convolutional filtering, and we call it scaling because the HT coefficients are scaled

![](images/797890d5f1c66b05eac0036323c2e24b4d068712b0981a4677e01b8da26484ca.jpg)
Figure 1. Structure of the proposed HT-perceptron layer for a tensor in $\mathbb{R}^{C\times W\times H}$. The HT2D and IHT2D are implemented using the quantum computer (or the classical fast approach), and the multiplications and soft-thresholding operations of the network are implemented using the classical approach. We have multiple parallel paths to increase the number of trainable parameters. Each path corresponds to a convolutional kernel. If we want to change the number of output channels, we can change the number of kernels in each channel-wise processing step.

Algorithm 3 $P$-Path HT-Perceptron Layer. Dimensions are defined as batch size, channel, height, and width.

Input: The input tensor $\mathbf{x} \in \mathbb{R}^{B \times C_i \times H \times W}$.

Output: The output tensor $\mathbf{y} \in \mathbb{R}^{B \times C_o \times H \times W}$.
Define: $\mathbf{A}_i, \mathbf{T}_i \in \mathbb{R}^{H \times W}$, $\mathbf{V}_i = \mathrm{Conv2D}$ (in channels $= C_i$, out channels $= C_o$, kernel size $= 1$), for $i = 0$ to $P - 1$.

$\mathbf{X} = \mathrm{HT2D}(\mathbf{x})$

for $i = 0$ to $P - 1$ do

$\mathbf{X}_i = \mathbf{X}\circ \mathbf{A}_i$

$\mathbf{Z}_i = \mathbf{V}_i(\mathbf{X}_i)$

$\mathbf{Y}_i = \mathrm{sign}(\mathbf{Z}_i)\circ \mathrm{ReLU}(|\mathbf{Z}_i| - \mathbf{T}_i);$

end for

$\mathbf{Y} = \mathrm{sum}(\mathrm{stack}(\mathbf{Y}_i));$

$\mathbf{y} = \mathrm{IHT2D}(\mathbf{Y})$

as in Fourier domain filtering. In this work, we initialize $\mathbf{A}$ with random numbers from the uniform distribution on $[0,1)$. Then, we perform channel-wise processing. This step is implemented similarly to the so-called $1\times 1$ convolution of the Conv2D layer. If we want to change the number of output channels, we can change the number of kernels at this step. After the scaling and the $1\times 1$ convolution, we apply a soft-thresholding function as the nonlinearity instead of the ReLU, because both positive and negative amplitudes are important in the transform domain. Soft-thresholding is widely used in wavelet denoising to remove noise from the data (Chang et al., 2000; Zhao & Cui, 2015). The parameters of the soft-thresholding nonlinearity are trainable, i.e., they can be learned using the back-propagation algorithm. Finally, we apply an IHT2D on the summation of all paths to obtain the resulting tensor output of the HT-perceptron layer.

![](images/b592cedc9d4c61263052f732d6f97a6fdac1619f6d0be8587ad1397cd167ce16.jpg)
Figure 2. Procedure of each path in the HT-Perceptron layer. Each entry along the width and the height is processed individually in the Hadamard domain, so we do not need to apply the permutation used in the Walsh-Hadamard transform.

Trainable soft-thresholding is applied to remove small entries in the Hadamard domain.
This operation is similar to image coding and transform-domain denoising (Wallace, 1991; Le Gall, 1991). It is defined as follows:

$$
\mathbf {Y} = \mathcal {S} _ {\mathbf {T}} (\mathbf {X}) = \operatorname {s i g n} (\mathbf {X}) \circ (| \mathbf {X} | - \mathbf {T}) _ {+}, \tag {10}
$$

where $\circ$ stands for the element-wise multiplication, $(\cdot)_{+}$ stands for the ReLU function, and $\mathbf{T}$ stands for the non-negative trainable threshold parameters, which are determined using the back-propagation algorithm; $\mathbf{T} \in \mathbb{R}^{W \times H}$ if $\mathbf{X} \in \mathbb{R}^{C \times W \times H}$. In this work, we initialize $\mathbf{T}$ with random numbers from the uniform distribution on $[0, 0.1)$. We initialize $\mathbf{T}$ with small positive values because it contains the threshold parameters that remove small-valued feature-map entries, which correspond to noise. The ReLU function is not suitable because both significantly large positive and negative values are important in the Hadamard domain; for example, a completely positive vector can have both significant positive and negative values in the Hadamard domain. Furthermore, the multiplication between $\mathrm{sign}(\mathbf{X})$ and $(|\mathbf{X}| - \mathbf{T})_{+}$ can be implemented using sign-bit operations, so no multiplication operation is required for the soft-thresholding.
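The per-path processing of Algorithm 3 — HT2D, scaling, $1 \times 1$ channel-wise mixing, and soft-thresholding — can be sketched for a single image as follows (a classical NumPy sketch with illustrative shapes; the batch dimension is omitted and all names are ours):

```python
import numpy as np

def hadamard(N):
    H = np.array([[1.0]])
    H2 = np.array([[1.0, 1.0], [1.0, -1.0]])
    while H.shape[0] < N:
        H = np.kron(H2, H)
    return H

def ht2d(x):
    """Separable HT2D over the last two axes of a (C, H, W) tensor."""
    C, Hd, Wd = x.shape
    HH, HW = hadamard(Hd), hadamard(Wd)
    return np.einsum('hm,cmn,nw->chw', HH, x, HW) / np.sqrt(Hd * Wd)

def soft_threshold(z, T):
    """Eq. (10): sign(z) * ReLU(|z| - T)."""
    return np.sign(z) * np.maximum(np.abs(z) - T, 0.0)

def ht_perceptron(x, A, V, T):
    """P-path HT-perceptron layer for a single (C_i, H, W) input.
    A, T: (P, H, W) scaling / threshold matrices; V: (P, C_o, C_i)
    1x1-convolution weights."""
    X = ht2d(x)
    Y = 0.0
    for Ai, Vi, Ti in zip(A, V, T):
        Xi = X * Ai                            # scaling in the HT domain
        Zi = np.einsum('oc,chw->ohw', Vi, Xi)  # channel-wise 1x1 conv
        Y = Y + soft_threshold(Zi, Ti)         # trainable soft-thresholding
    return ht2d(Y)                             # the IHT2D equals the HT2D

rng = np.random.default_rng(0)
P, Ci, Co, N = 3, 4, 4, 8
x = rng.standard_normal((Ci, N, N))
A = rng.uniform(0.0, 1.0, (P, N, N))    # scaling init on [0, 1)
T = rng.uniform(0.0, 0.1, (P, N, N))    # threshold init on [0, 0.1)
V = rng.standard_normal((P, Co, Ci))
y = ht_perceptron(x, A, V, T)
assert y.shape == (Co, N, N)
```

In the trained network these operations would run on PyTorch tensors with autograd, and the two $\sqrt{1/N}$ normalizations could be folded into the scaling matrices as described in Section 2.1.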
In summary, an HT-perceptron layer with $P$ paths mapping from $\mathbf{x}$ to $\mathbf{y}$ is defined as:

$$
\mathbf {y} = \mathcal {H} ^ {- 1} \left(\sum_ {i = 0} ^ {P - 1} \mathcal {S} _ {\mathbf {T} _ {i}} \left(\left(\mathcal {H} (\mathbf {x}) \circ \mathbf {A} _ {i}\right) \circledast \mathbf {V} _ {i}\right)\right), \tag {11}
$$

where $\mathcal{H}(\cdot)$ and $\mathcal{H}^{-1}(\cdot)$ stand for the HT2D and the IHT2D, $\mathbf{A}_i$ is the scaling matrix in the $i$-th path, $\mathbf{V}_i$ represents the $1 \times 1$ Conv2D kernels used in the channel-wise processing in the $i$-th path, $\mathbf{T}_i$ is the threshold parameter matrix in the soft-thresholding layer in the $i$-th path, $\circ$ stands for the element-wise multiplication, and $\circledast$ represents the channel-wise processing, which can be implemented using the $1 \times$

Table 1. Parameters of a Conv2D Layer Versus an HT-perceptron layer for a $C$-channel $N \times N$ image. $N$ is an integer power of 2.
| LAYER (OPERATION) | PARAMETERS |
| --- | --- |
| $K \times K$ CONV2D | $K^2C^2$ |
| $3 \times 3$ CONV2D | $9C^2$ |
| HT2D | $0$ |
| SCALING | $N^2$ |
| CHANNEL-WISE PROCESSING | $C^2$ |
| SOFT-THRESHOLDING | $N^2$ |
| IHT2D | $0$ |
| $P$-PATH HT-PERCEPTRON | $2PN^2 + PC^2$ |
| 1-PATH HT-PERCEPTRON | $2N^2 + C^2$ |
| 3-PATH HT-PERCEPTRON | $6N^2 + 3C^2$ |
| 5-PATH HT-PERCEPTRON | $10N^2 + 5C^2$ |
+ +Table 2. Multiply-Accumulates (MACs) of a Conv2D Layer Versus an HT-perceptron layer for a $C$ -channel $N \times N$ image. $N$ is an integer power of 2. MACs from the HT2D and the IHT2D are omitted. + +
| LAYER (OPERATION) | MACs |
| --- | --- |
| $K \times K$ CONV2D | $K^2N^2C^2$ |
| $3 \times 3$ CONV2D | $9N^2C^2$ |
| SCALING, SOFT-THRESHOLDING | $N^2C$ |
| CHANNEL-WISE PROCESSING | $N^2C^2$ |
| $P$-PATH HT-PERCEPTRON | $PN^2C + PN^2C^2$ |
| 1-PATH HT-PERCEPTRON | $N^2C + N^2C^2$ |
| 3-PATH HT-PERCEPTRON | $3N^2C + 3N^2C^2$ |
| 5-PATH HT-PERCEPTRON | $5N^2C + 5N^2C^2$ |
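The closed-form counts in Tables 1 and 2 can be cross-checked with a short script for the $N = C = 32$ case used later in the MNIST experiment (helper names are ours; biases are ignored):

```python
def conv2d_params(K: int, C: int) -> int:
    # K x K Conv2D with C input and C output channels
    return K * K * C * C

def conv2d_macs(K: int, N: int, C: int) -> int:
    return K * K * N * N * C * C

def ht_perceptron_params(P: int, N: int, C: int) -> int:
    # per path: N^2 (scaling) + N^2 (soft-threshold) + C^2 (1x1 conv)
    return P * (2 * N * N + C * C)

def ht_perceptron_macs(P: int, N: int, C: int) -> int:
    # per path: N^2 C (scaling + soft-thresholding) + N^2 C^2 (1x1 conv)
    return P * (N * N * C + N * N * C * C)

N = C = 32
# With C = N, a 3-path HT-perceptron layer matches a 3x3 Conv2D in parameters
assert ht_perceptron_params(3, N, C) == conv2d_params(3, C) == 9216
# MACs saved by replacing the 3x3 Conv2D: about 6.19 million
saved = conv2d_macs(3, N, C) - ht_perceptron_macs(3, N, C)
assert saved == 6_193_152
```

The saving of 6,193,152 MACs is the figure reported for the MNIST experiment in Section 3.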
1 Conv2D layer. The procedure in each path of the HT-Perceptron layer is illustrated in Figure 2.

Table 1 shows that the number of parameters in an HT-perceptron layer is $2PN^2 + PC^2$. The input and output tensors are assumed to be in $\mathbb{R}^{C \times N \times N}$. In each path, there are $N^2$ parameters in the scaling matrix, $N^2$ parameters in the soft-threshold matrix, and $C^2$ parameters in the channel-wise $1 \times 1$ convolution. Therefore, a $P$-path HT-perceptron layer has $2PN^2 + PC^2$ parameters. For comparison, a $K \times K$ Conv2D layer has $K^2C^2$ parameters. A 3-path HT-perceptron layer has the same number of parameters as a $3 \times 3$ Conv2D layer if $C = N$. Furthermore, in most mainstream CNNs such as ResNets (He et al., 2016), $C$ is usually much larger than $N$ in the hidden layers, so our proposed HT-perceptron layer can save parameters for ResNet-type CNNs, as discussed in Section 3.

If the input tensor and the output tensor are in $\mathbb{R}^{C\times N\times N}$, the computational cost of a $K\times K$ Conv2D is $O(K^2 N^2 C^2)$. On the other hand, the classical FHT algorithm takes $O(N^{2}\log_{2}N)$, and the hybrid quantum-classical HT algorithm reduces it to $O(N^2)$. Compared to the $3 \times 3$ Conv2D, whose complexity is $O(N^2C^2)$, the $O(N^2C)$ cost from the HT and the IHT can be omitted. In each path, the complexity of computing the scaling and the soft-thresholding is $O(N^2C)$, and the complexity of the channel-wise processing is $O(N^2C^2)$. Therefore, the total complexity of a $P$-path HT-perceptron layer is $O(PN^2C^2)$. To compare the total computational cost of the HT-perceptron layer with the Conv2D layer, we use Multiply-Accumulates (MACs); 1 MAC contains 1 addition and 1 multiplication. As Table 2 states, in each path, there are $N^2C$ multiplications in the scaling and $N^2C$ additions in the soft-thresholding.
There is no addition in the scaling and no multiplication in the soft-thresholding; thus, we need $N^2C$ MACs in total to compute the scaling and the soft-thresholding in each path. In each path, the channel-wise processing takes $N^2C^2$ MACs because it is implemented using the $1 \times 1$ Conv2D layer. Furthermore, the MACs from the HT2D and the IHT2D can be omitted even when we use the classical FHT approach, because the HT2D and the IHT2D can be implemented without any multiplication, as the normalization factor can be absorbed into the scaling. Therefore, we need $PN^2C + PN^2C^2$ MACs in total to compute a $P$-path HT-perceptron layer. As a comparison, we need $K^2N^2C^2$ MACs to compute a $K \times K$ Conv2D layer. Briefly speaking, the proposed HT-perceptron layer reduces some of the $N^2C^2$ MAC terms to $N^2C$. Consequently, our proposed HT-perceptron layer is computationally more efficient than the Conv2D layer.

# 3. Experimental Results

**Hybrid Hadamard Neural Network on MNIST** We start the experimental section with a toy example: the MNIST handwritten-digit classification task using a hybrid quantum-classical Hadamard neural network. The MNIST experiments are carried out on the IBM Quantum Lab cloud computer using PyTorch and Qiskit. Since the MNIST image size is $28 \times 28$, we pad 2 pixels with 0s on all borders to make the input image size $32 \times 32$. We first convert the intensities of the raw images to the 0-1 range. We then normalize the MNIST images with a mean of 0.1307 and a standard deviation of 0.3081.

As Table 3 shows, the CNN is built using two $3 \times 3$ Conv2D layers, one max-pooling layer, and two linear layers. ReLU is used as the activation function in the two convolution layers and the first linear layer. Dropout with a probability of 0.2 is applied after the second Conv2D layer and the first linear layer. Bias terms are applied in all Conv2D and linear layers.
The first Conv2D layer increases the number of channels from 1 to 32. We retain the first Conv2D layer and replace the second Conv2D layer with a 3-path HT-perceptron layer. The 3-path HT struc

Table 3. Toy CNN for the MNIST classification task. We pad all borders with 2 pixels of 0s to make the image size $32 \times 32$. We revise the layer Conv2 using the proposed HT-perceptron layer.
| LAYER | OUTPUT SHAPE | IMPLEMENTATION |
| --- | --- | --- |
| INPUT | $1 \times 32 \times 32$ | - |
| CONV1 | $32 \times 32 \times 32$ | $3 \times 3$, 32 |
| CONV2 | $32 \times 32 \times 32$ | $3 \times 3$, 32 |
| MAXPOOL | $32 \times 16 \times 16$ | $2 \times 2$ |
| FLATTEN | 8192 | - |
| DENSE | 128 | LINEAR, 128 |
| OUTPUT | 10 | LINEAR, 10 |
+ +Table 4. MNIST Experimental Results. There are 1,059,562 parameters in each neural network. + +
| METHOD | MACs (M) | ACCURACY |
| --- | --- | --- |
| CNN | 10.85 | 99.26% |
| HT-CNN (3-PATH) | 4.66 (57.1% ↓) | 99.31% |
ture is used here because the input and output shapes of the second Conv2D layer are both $32 \times 32 \times 32$, so we keep the number of parameters the same for the CNN and the HT-CNN. There are 1,059,562 parameters in each neural network ($3^{2} \times 1 \times 32 + 32 = 320$ parameters are in the first Conv2D layer, $3^{2} \times 32^{2} + 32 = 9{,}248$ parameters are in the second Conv2D layer, $8192 \times 128 + 128 = 1{,}048{,}704$ parameters are in the first linear layer, and 1,290 parameters are in the second linear layer).

To train the neural networks on the MNIST dataset, we use the Adadelta optimizer (Zeiler, 2012) with an initial learning rate of 1.0. The learning rate decays by a factor of 0.7 after each epoch. Models are trained with a mini-batch size of 64 for 14 epochs. During the training, the best models are saved based on the accuracy on the MNIST test dataset, and their accuracy results are reported in Table 4. After replacing the second Conv2D layer with the 3-path HT-perceptron layer, $9 \times 32^{4} - (3 \times 32^{4} + 3 \times 32^{3}) = 6.19$ million MACs are reduced $(57.1\%)$, while the accuracy even improves from $99.26\%$ to $99.31\%$.

**Hadamard Network on CIFAR-10 and ImageNet-1K** Experiments on CIFAR-10 and ImageNet-1K are carried out on a workstation computer with an NVIDIA RTX3090 GPU using PyTorch. We do not use the IBM-Q cloud platform in these experiments because the cloud platform is too slow for large datasets; we use the classical Fast-HT algorithm instead of Algorithm 2. In these experiments, ResNet-20 for the CIFAR-10 classification task and ResNet-50 for the ImageNet-1K classification task (He et al., 2016) are used as the backbone networks.

To revise ResNet-20 and ResNet-50, we retain the first $3 \times 3$
We keep the $3 \times 3$ Conv2D layers at odd indices so that the regular $3 \times 3$ Conv2D layer and the proposed HT-perceptron layer are used in turns; in this way the network can extract features in different manners efficiently. We call these HT-perceptron-layer-revised ResNets HT-ResNets. Tables 8 and 9 in Appendix C describe how we revise ResNet-20 and ResNet-50.

To train ResNet-20 and the HT-ResNet-20s, we use the SGD optimizer with a weight decay of 0.0001 and a momentum of 0.9. Models are trained with a mini-batch size of 128 for 200 epochs. The initial learning rate is 0.1 and is divided by 10 at epochs 82, 122, and 163. Data augmentation is implemented as follows: first, we pad the training images by 4 pixels; then, we apply random cropping to obtain 32-by-32 images; finally, we randomly flip the images horizontally. We normalize the images with means of [0.4914, 0.4822, 0.4465] and standard deviations of [0.2023, 0.1994, 0.2010]. During training, the best models are saved based on accuracy on the CIFAR-10 test dataset, and their accuracy numbers are reported in Table 5.

In the CIFAR-10 experiments, we try numbers of paths ranging from 1 to 6 in the HT-ResNet-20s. As shown in Table 5, the best accuracy among them (91.58%) is obtained with the 5-path structure. Although we do not surpass our baseline model on CIFAR-10, we reduce parameters and MACs substantially while producing comparable accuracy: the 5-path HT-ResNet-20 drops by only $0.08\%$ relative to the baseline ResNet-20. Our HT-ResNet-20s obtain higher accuracy than (Pan et al., 2022a) because we extract channel-wise features using $1 \times 1$ Conv2D in the Hadamard domain.

In the ImageNet-1K experiments, we compare the baseline ResNet-50 model with the 3-path HT-ResNet-50.
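The CIFAR-10 step schedule described above (initial rate 0.1, divided by 10 at epochs 82, 122, and 163) can be sketched as a small helper. This mirrors what `torch.optim.lr_scheduler.MultiStepLR` computes, written here as plain Python for illustration:

```python
def lr_at_epoch(epoch, base_lr=0.1, milestones=(82, 122, 163), gamma=0.1):
    """Learning rate for the CIFAR-10 schedule: base_lr multiplied by
    gamma at every milestone epoch that has already been reached."""
    lr = base_lr
    for m in milestones:
        if epoch >= m:
            lr *= gamma
    return lr

schedule = [lr_at_epoch(e) for e in (0, 81, 82, 122, 163, 199)]
print(schedule)  # 0.1, 0.1, 0.01, 0.001, 0.0001, 0.0001 (up to float rounding)
```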
However, the most commonly used input size in ImageNet-1K tasks is $224 \times 224$ , which makes the input tensor sizes not integer powers of 2. We therefore try two data pre-processing approaches. First, we keep the input size of $224 \times 224$ but apply zero-padding before each HT-perceptron layer so that the input tensor size becomes an integer power of 2; this approach enables a comparison with other state-of-the-art HT-based works. Second, we use an input size of $256 \times 256$ instead. The number of parameters is the same in both approaches, but the second requires more computation; in return, it yields higher accuracy. We

Table 5. CIFAR-10 Experimental Results.
| METHOD | PARAMETERS | MACS (M) | ACCURACY |
| --- | --- | --- | --- |
| RESNET-20 (HE ET AL., 2016) | 0.27M | - | 91.25% |
| WHT-BASED RESNET-20 (PAN ET AL., 2022A) | 133,082 (51.26%↓) | - | 90.12% |
| RESNET-20 (OUR TRIAL, BASELINE) | 272,474 | 41.32 | 91.66% |
| HT-RESNET-20 (1-PATH) | 151,514 (44.39%↓) | 22.53 (45.5%↓) | 91.25% |
| HT-RESNET-20 (2-PATH) | 175,706 (35.51%↓) | 24.98 (39.6%↓) | 91.28% |
| HT-RESNET-20 (3-PATH) | 199,898 (26.64%↓) | 27.42 (33.6%↓) | 91.29% |
| HT-RESNET-20 (4-PATH) | 224,090 (17.76%↓) | 29.87 (27.7%↓) | 91.50% |
| HT-RESNET-20 (5-PATH) | 248,282 (8.88%↓) | 32.31 (21.8%↓) | 91.58% |
| HT-RESNET-20 (6-PATH) | 272,474 (0.00%↓) | 34.76 (15.9%↓) | 91.21% |
Table 6. ImageNet-1K center-crop accuracy with different input sizes.
| METHOD | PARAMETERS (M) | MACs (G) | TOP-1 | TOP-5 |
| --- | --- | --- | --- | --- |
| RESNET-50 (TORCHVISION) (HE ET AL., 2016) | 25.56 | 4.12 | 76.13% | 92.86% |
| RESNET-50 (AUGSKIP) ZERO INIT (ZHAO ET AL., 2021) | 25.56 | 4.12 | 76.37% | - |
| RESNET-50 (OUR TRIAL, BASELINE) | 25.56 | 4.12 | 76.06% | 92.85% |
| HT-RESNET-50 (3-PATH) | 22.63 (11.5%↓) | 3.60 (12.6%↓) | 76.36% | 93.02% |
| RESNET-50 (OUR TRIAL, BASELINE, 256×256 INPUT) | 25.56 | 5.38 | 76.18% | 92.94% |
| HT-RESNET-50 (3-PATH, 256×256 INPUT) | 22.63 (11.5%↓) | 4.58 (14.9%↓) | 76.77% | 93.26% |
NOTE: (DEVECI ET AL., 2018; AKHAURI, 2019) CONTAIN NO RESNET-50-BASED RESULT, BUT ALL OF THEIR NETWORKS PRODUCE WORSE ACCURACY RESULTS THAN THEIR BASELINE MODELS ACCORDING TO TABLE 4 IN EACH PAPER.

Table 7. ImageNet-1K 10-crop accuracy with different input sizes.
| METHOD | PARAMETERS (M) | MACS (G) | TOP-1 | TOP-5 |
| --- | --- | --- | --- | --- |
| RESNET-50 (TORCHVISION) (HE ET AL., 2016) | 25.56 | 4.12 | 77.43% | 93.75% |
| RESNET-50 (OUR TRIAL, BASELINE) | 25.56 | 4.12 | 77.53% | 93.75% |
| HT-RESNET-50 (3-PATH) | 22.63 (11.5%↓) | 3.60 (12.6%↓) | 77.79% | 94.02% |
| RESNET-50 (OUR TRIAL, BASELINE, 256×256 INPUT) | 25.56 | 5.38 | 77.61% | 93.88% |
| HT-RESNET-50 (3-PATH, 256×256 INPUT) | 22.63 (11.5%↓) | 4.58 (14.9%↓) | 78.33% | 94.14% |
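The reduction percentages reported in Tables 6 and 7 follow directly from the raw counts. A quick check (written for this text, using the rounded table values, so agreement is to the displayed precision):

```python
# Reduction percentages implied by the rounded values in Tables 6 and 7.

def pct_drop(baseline, revised):
    """Percentage reduction from baseline to revised, rounded to one decimal."""
    return round(100 * (1 - revised / baseline), 1)

params_drop = pct_drop(25.56, 22.63)  # parameters (M): 11.5
macs_224 = pct_drop(4.12, 3.60)       # MACs (G), 224x224 input: 12.6
macs_256 = pct_drop(5.38, 4.58)       # MACs (G), 256x256 input: 14.9
print(params_drop, macs_224, macs_256)
```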
use the SGD optimizer with a weight decay of 0.0001 and a momentum of 0.9. Models are trained with a mini-batch size of 128 for 90 epochs; the initial learning rate is 0.05 and is divided by 10 every 30 epochs. For data augmentation, we apply random resized crops to the training images to obtain 224-by-224 or 256-by-256 images, then randomly flip the images horizontally. We normalize the images with means of [0.485, 0.456, 0.406] and standard deviations of [0.229, 0.224, 0.225]. We evaluate our models on the ImageNet-1K validation dataset. During training, the best models are saved based on the center-crop top-1 accuracy on the ImageNet-1K validation dataset, and their accuracy numbers are reported in Tables 6 and 7. Figure 3 shows the center-crop top-1 error history on the ImageNet-1K validation dataset during the training phase. In the last two rows of Tables 6 and 7 the input size is $256 \times 256$ ; in the other rows it is $224 \times 224$ . With our revision using the 3-path HT-perceptron layer, $11.5\%$ of the parameters and more than $12\%$ of the MACs are removed. At both the $224 \times 224$ and $256 \times 256$ resolutions, the HT-ResNet-50 obtains higher accuracy than the baseline regular ResNet-50. We think the HT-ResNet-50 is even more accurate than the vanilla ResNet-50 on ImageNet for two reasons. First, because the regular $3 \times 3$ Conv2D layer and the proposed HT-perceptron layer are used one after another, the HT-ResNet can extract features in different manners and fuse them. Second, we perform denoising in the transform domain, similar to classical denoising based on wavelets and other transform-domain methods (Bruce et al., 1994).

# 4. Conclusion

We present a novel Hadamard Transform (HT)-based neural network layer called the HT-perceptron layer. It is a hybrid quantum-classical approach to implementing the Conv2D layer in regular CNNs.
The idea is based on the HT convolution theorem, which states that the dyadic convolution of two tensors is equivalent to the element-wise multiplication of their HT representations. As a result, we perform convolutions in the HT domain using element-wise multiplications. Computing the HT is simply the application of a Hadamard gate to each qubit individually in a quantum computer. Therefore, the HT operations of the proposed layer can be implemented on a quantum computer such as an IBM-Q. Compared to the regular Conv2D layer, the proposed HT-perceptron layer is more computationally efficient, as it requires fewer MACs. We compared our HT-layer-based ResNets with regular Conv2D-based ResNets on the MNIST, CIFAR-10, and ImageNet-1K tasks; the ResNets using the HT-perceptron layer obtain higher accuracy on ImageNet-1K with significantly fewer parameters and MACs than the regular networks. Our code is released at https://github.com/phy710/ICML2023-HT.

![](images/0a0e7912e1f1781f9be0586ffd6a93502f8586c427e8573c02a715ead631e6f1.jpg)
(a) The input size is $224 \times 224$ .

![](images/c0874695c644383dec352bc5b73b9a5f95c55f95220166112e5c570a5d23b7ba.jpg)
(b) The input size is $256 \times 256$ .

Figure 3. Training on ImageNet-1K. Curves denote the validation center-crop top-1 errors.

# Acknowledgements

This work was supported by the National Science Foundation (NSF) under grant 1934915 and the University of Illinois Chicago Discovery Partners Institute Seed Funding Program. We thank Dr. Nhan Tran of Fermilab for introducing the use of the Hadamard Transform in quantum computing to us. We sincerely appreciate all valuable comments and suggestions from the ICML reviewers, which helped us improve the quality of the paper.

# References

Akhauri, Y. Hadanets: Flexible quantization strategies for neural networks.
In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 0-0, 2019.
Badawi, D., Agambayev, A., Ozev, S., and Cetin, A. E. Discrete cosine transform based causal convolutional neural network for drift compensation in chemical sensors. In ICASSP 2021-2021 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 8012-8016. IEEE, 2021.
Bruce, A. G., Donoho, D. L., Gao, H.-Y., and Martin, R. D. Denoising and robust nonlinear wavelet analysis. In Wavelet Applications, volume 2242, pp. 325-336. SPIE, 1994.
Cetin, A. E., Gerek, O. N., and Ulukus, S. Block wavelet transforms for image coding. IEEE Transactions on Circuits and Systems for Video Technology, 3(6):433-435, 1993.
Chang, S. G., Yu, B., and Vetterli, M. Adaptive wavelet thresholding for image denoising and compression. IEEE Transactions on Image Processing, 9(9):1532-1546, 2000.
Chi, L., Jiang, B., and Mu, Y. Fast Fourier convolution. Advances in Neural Information Processing Systems, 33:4479-4488, 2020.
Cong, I., Choi, S., and Lukin, M. D. Quantum convolutional neural networks. Nature Physics, 15(12):1273-1278, 2019.
Deveci, T. C., Cakir, S., and Cetin, A. E. Energy efficient Hadamard neural networks. arXiv preprint arXiv:1805.05421, 2018.
Donoho, D. L. De-noising by soft-thresholding. IEEE Transactions on Information Theory, 41(3):613-627, 1995.
dos Santos, S. F. and Almeida, J. Less is more: Accelerating faster neural networks straight from JPEG. In Iberoamerican Congress on Pattern Recognition, pp. 237-247. Springer, 2021.
dos Santos, S. F., Sebe, N., and Almeida, J. The good, the bad, and the ugly: Neural networks straight from JPEG. In 2020 IEEE International Conference on Image Processing (ICIP), pp. 1896-1900. IEEE, 2020.
Fino, B. J. and Algazi, V. R. Unified matrix treatment of the fast Walsh-Hadamard transform. IEEE Transactions on Computers, 25(11):1142-1146, 1976.
Gajic, D. B. and Stankovic, R. S. Calculation of dyadic convolution using graphics processing units and opengl matrix, 1:8, 2011.
Gueguen, L., Sergeev, A., Kadlec, B., Liu, R., and Yosinski, J. Faster neural networks straight from JPEG. Advances in Neural Information Processing Systems, 31, 2018.
Gulamhusein, M. Simple matrix-theory proof of the discrete dyadic convolution theorem. Electronics Letters, 10(9):238-239, 1973.
He, K., Zhang, X., Ren, S., and Sun, J. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770-778, 2016.
Kak, S. C. Quantum neural computing. Advances in Imaging and Electron Physics, 94:259-313, 1995.
Karakus, O., Rizaev, I., and Achim, A. A simulation study to evaluate the performance of the Cauchy proximal operator in despeckling SAR images of the sea surface. In IGARSS 2020-2020 IEEE International Geoscience and Remote Sensing Symposium, pp. 1568-1571. IEEE, 2020.
Koppenhöfer, M., Bruder, C., and Roulet, A. Quantum synchronization on the IBM Q system. Physical Review Research, 2(2):023026, 2020.
Le Gall, D. MPEG: A video compression standard for multimedia applications. Communications of the ACM, 34(4):46-58, 1991.
Liu, J., Lim, K. H., Wood, K. L., Huang, W., Guo, C., and Huang, H.-L. Hybrid quantum-classical convolutional neural networks. Science China Physics, Mechanics & Astronomy, 64(9):1-8, 2021.
Matsuo, A., Hattori, W., and Yamashita, S. Reducing the overhead of mapping quantum circuits to IBM Q system. In 2019 IEEE International Symposium on Circuits and Systems (ISCAS), pp. 1-5. IEEE, 2019.
Mohammad, U. F. and Almekkawy, M. A substitution of convolutional layers by FFT layers - a low computational cost version. In 2021 IEEE International Ultrasonics Symposium (IUS), pp. 1-3. IEEE, 2021.
Pan, H., Badawi, D., and Cetin, A. E. Fast Walsh-Hadamard transform and smooth-thresholding based binary layers in deep neural networks.
In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 4650-4659, 2021.
Pan, H., Badawi, D., and Cetin, A. E. Block Walsh-Hadamard transform based binary layers in deep neural networks. ACM Transactions on Embedded Computing Systems (TECS), 2022a.
Pan, H., Badawi, D., Chen, C., Watts, A., Koyuncu, E., and Cetin, A. E. Deep neural network with Walsh-Hadamard transform layer for ember detection during a wildfire. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 257-266, 2022b.
Pan, H., Zhu, X., Ye, Z., Chen, P.-Y., and Cetin, A. E. Real-time wireless ECG-derived respiration rate estimation using an autoencoder with a DCT layer. In ICASSP 2023-2023 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 1-5. IEEE, 2023.
Park, J. and Lee, S. Energy-efficient image processing using binary neural networks with Hadamard transforms. In Proceedings of the Asian Conference on Computer Vision, pp. 4711-4725, 2022.
Ricks, B. and Ventura, D. Training a quantum neural network. Advances in Neural Information Processing Systems, 16, 2003.
Shukla, A. and Vedula, P. A hybrid classical-quantum algorithm for digital image processing. Quantum Information Processing, 22(1):1-19, 2023a.
Shukla, A. and Vedula, P. A hybrid classical-quantum algorithm for solution of nonlinear ordinary differential equations. Applied Mathematics and Computation, 442:127708, 2023b.
Ulicny, M., Krylov, V. A., and Dahyot, R. Harmonic convolutional networks based on discrete cosine transform. Pattern Recognition, 129:108707, 2022.
Usakova, A., Kotuliaková, J., and Zajac, M. Using of discrete orthogonal transforms for convolution. J. Electrical Eng., 53:285-288, 2002a.
Usakova, A., Kotuliaková, J., and Zajac, M. Walsh-Hadamard transformation of a convolution. Radioengineering, 11(3):40-42, 2002b.
Vetterli, M. Fast 2-D discrete cosine transform. In ICASSP'85. IEEE International Conference on Acoustics, Speech, and Signal Processing, volume 10, pp. 1538-1541. IEEE, 1985.
Wallace, G. K. The JPEG still picture compression standard. Communications of the ACM, 34(4):30-44, 1991.
Walsh, J. L. A closed set of normal orthogonal functions. American Journal of Mathematics, 45(1):5-24, 1923.
Weinstein, Y. S., Pravia, M., Fortunato, E., Lloyd, S., and Cory, D. G. Implementation of the quantum Fourier transform. Physical Review Letters, 86(9):1889, 2001.
Xu, Y. and Nakayama, H. DCT-based fast spectral convolution for deep convolutional neural networks. In 2021 International Joint Conference on Neural Networks (IJCNN), pp. 1-8. IEEE, 2021.
Yuen, C.-K. Remarks on ordering of Walsh functions. IEEE Transactions on Computers, 100(12):1452-1452, 1972.
Zeiler, M. D. Adadelta: An adaptive learning rate method. arXiv preprint arXiv:1212.5701, 2012.
Zhao, J., Schäfer, F., and Anandkumar, A. Zero initialization: Initializing residual networks with only zeros and ones. arXiv preprint arXiv:2110.12661, 2021.
Zhao, R.-M. and Cui, H.-M. Improved threshold denoising method based on wavelet transform. In 2015 7th International Conference on Modelling, Identification and Control (ICMIC), pp. 1-4. IEEE, 2015.

# A. Proof of Lemma 2.1

Proof. For $k = 0,1,\dots ,N - 1$ , $X_{k}$ is of the form of a constant multiple of $x_0\pm x_1\pm \ldots \pm x_{N - 1}$ . Therefore, if $x_0 > \sum_{k = 1}^{N - 1}|x_k|$ , then $X_{k} > 0$ for $k = 0,1,\dots ,N - 1$ .

# B. Proof of Theorem 2.2 (Hadamard Convolution Theorem)

Proof. The sufficient condition can be proved using mathematical induction:

1. The theorem clearly holds for $M = 0$ : with a single entry, $\mathbf{Y} = \mathbf{y}$ , $\mathbf{A} = \mathbf{a}$ , $\mathbf{X} = \mathbf{x}$ .
2. Suppose the theorem holds for some $M \geq 0$ ; we prove that it also holds for $M + 1$ .
Let $\mathbf{a} = [\mathbf{a}_0, \mathbf{a}_1]^T$ and $\mathbf{x} = [\mathbf{x}_0, \mathbf{x}_1]^T$ , where $\mathbf{a}_0, \mathbf{a}_1, \mathbf{x}_0, \mathbf{x}_1 \in \mathbb{R}^N$ and $N = 2^M$ . Then, because of Eq. (3), we have

$$
\mathbf{A} = \mathcal{H}(\mathbf{a}) = \sqrt{\frac{1}{2N}} \left[ \begin{array}{ll} \mathbf{H}_N & \mathbf{H}_N \\ \mathbf{H}_N & -\mathbf{H}_N \end{array} \right] \left[ \begin{array}{l} \mathbf{a}_0 \\ \mathbf{a}_1 \end{array} \right], \tag{12}
$$

$$
\mathbf{X} = \mathcal{H}(\mathbf{x}) = \sqrt{\frac{1}{2N}} \left[ \begin{array}{ll} \mathbf{H}_N & \mathbf{H}_N \\ \mathbf{H}_N & -\mathbf{H}_N \end{array} \right] \left[ \begin{array}{l} \mathbf{x}_0 \\ \mathbf{x}_1 \end{array} \right]. \tag{13}
$$

Let $\mathbf{A}_i = \mathcal{H}(\mathbf{a}_i)$ and $\mathbf{X}_i = \mathcal{H}(\mathbf{x}_i)$ for $i = 0,1$ . Then,

$$
\mathbf{A} = \sqrt{\frac{1}{2}} \left[ \begin{array}{l} \mathbf{A}_0 + \mathbf{A}_1 \\ \mathbf{A}_0 - \mathbf{A}_1 \end{array} \right], \tag{14}
$$

$$
\mathbf{X} = \sqrt{\frac{1}{2}} \left[ \begin{array}{l} \mathbf{X}_0 + \mathbf{X}_1 \\ \mathbf{X}_0 - \mathbf{X}_1 \end{array} \right]. \tag{15}
$$

Using Eq. (14), Eq.
(15), and the assumption that the sufficient condition holds for $M$ , we can prove that if

$$
\mathbf{y} = \mathbf{a} *_d \mathbf{x} = \left[ \begin{array}{l} \mathbf{a}_0 \\ \mathbf{a}_1 \end{array} \right] *_d \left[ \begin{array}{l} \mathbf{x}_0 \\ \mathbf{x}_1 \end{array} \right] = \left[ \begin{array}{l} \mathbf{a}_0 *_d \mathbf{x}_0 + \mathbf{a}_1 *_d \mathbf{x}_1 \\ \mathbf{a}_0 *_d \mathbf{x}_1 + \mathbf{a}_1 *_d \mathbf{x}_0 \end{array} \right], \tag{16}
$$

then

$$
\begin{array}{l} \mathbf{Y} = \mathcal{H}(\mathbf{y}) = \sqrt{\frac{1}{2N}} \left[ \begin{array}{cc} \mathbf{H}_N & \mathbf{H}_N \\ \mathbf{H}_N & -\mathbf{H}_N \end{array} \right] \left[ \begin{array}{c} \mathbf{a}_0 *_d \mathbf{x}_0 + \mathbf{a}_1 *_d \mathbf{x}_1 \\ \mathbf{a}_0 *_d \mathbf{x}_1 + \mathbf{a}_1 *_d \mathbf{x}_0 \end{array} \right] = \sqrt{\frac{1}{2}} \left[ \begin{array}{c} \mathbf{A}_0 \circ \mathbf{X}_0 + \mathbf{A}_1 \circ \mathbf{X}_1 + \mathbf{A}_0 \circ \mathbf{X}_1 + \mathbf{A}_1 \circ \mathbf{X}_0 \\ \mathbf{A}_0 \circ \mathbf{X}_0 + \mathbf{A}_1 \circ \mathbf{X}_1 - \mathbf{A}_0 \circ \mathbf{X}_1 - \mathbf{A}_1 \circ \mathbf{X}_0 \end{array} \right] \\ = \sqrt{\frac{1}{2}} \left[ \begin{array}{l} (\mathbf{A}_0 + \mathbf{A}_1) \circ (\mathbf{X}_0 + \mathbf{X}_1) \\ (\mathbf{A}_0 - \mathbf{A}_1) \circ (\mathbf{X}_0 - \mathbf{X}_1) \end{array} \right] = \mathbf{A} \circ \mathbf{X}. \tag{17} \\ \end{array}
$$

The necessary condition can be proved by writing part 2 of the proof of the sufficient condition backward.

# C. ResNet Structures Used in Experiments

Table 8.
Revising ResNet-20 for the CIFAR-10 classification task. HT-P stands for the proposed HT-perceptron layer. Building blocks are shown in brackets, with the numbers of blocks stacked. Downsampling is performed by Conv3_1 and Conv4_1 with a stride of 2.
| LAYER | OUTPUT SHAPE | IMPLEMENTATION |
| --- | --- | --- |
| INPUT | 3 × 32 × 32 | - |
| CONV1 | 16 × 32 × 32 | 3 × 3, 16 |
| CONV2_X | 16 × 32 × 32 | [3 × 3, 16; HT-P, 16] × 3 |
| CONV3_X | 32 × 16 × 16 | [3 × 3, 32; HT-P, 32] × 3 |
| CONV4_X | 64 × 8 × 8 | [3 × 3, 64; HT-P, 64] × 3 |
| GAP | 64 | AVERAGE POOLING |
| OUTPUT | 10 | LINEAR, 10 |
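The HT-P layers in this table (and in Table 9 below) are built on the classical fast Hadamard transform used in the experiments. As an illustration of that building block (not the authors' implementation), the $O(N \log N)$ butterfly algorithm for the unnormalized transform in natural ordering can be written as:

```python
def fwht(x):
    """Fast Walsh-Hadamard transform (unnormalized, natural ordering).

    Runs in O(N log N) instead of the O(N^2) of a direct matrix product;
    the input length must be a power of two.
    """
    x = list(x)
    n = len(x)
    assert n > 0 and n & (n - 1) == 0, "length must be a power of two"
    h = 1
    while h < n:
        for i in range(0, n, 2 * h):
            for j in range(i, i + h):
                a, b = x[j], x[j + h]
                x[j], x[j + h] = a + b, a - b
        h *= 2
    return x

print(fwht([1, 2, 3, 4]))  # [10, -2, -4, 0]
```

Applying `fwht` twice returns the input scaled by $N$, reflecting that $\mathbf{H}_N^2 = N\,\mathbf{I}$ for the unnormalized matrix; the $\sqrt{1/N}$ scaling used elsewhere in the paper can be applied afterwards.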
Table 9. Revising ResNet-50 for the ImageNet-1K classification task. HT-P stands for the proposed HT-perceptron layer.
| Layer | Output Shape | Implementation Details |
| --- | --- | --- |
| INPUT | 3 × 224 × 224 | - |
| Conv1 | 64 × 112 × 112 | 7 × 7, 64, STRIDE 2 |
| MAXPOOL | 64 × 56 × 56 | 2 × 2, STRIDE 2 |
| Conv2_1 | 256 × 56 × 56 | [1 × 1, 64; 3 × 3, 64; 1 × 1, 256] |
| Conv2_2 | 256 × 56 × 56 | [1 × 1, 64; HT-P, 64; 1 × 1, 256] |
| Conv2_3 | 256 × 56 × 56 | [1 × 1, 64; 3 × 3, 64; 1 × 1, 256] |
| Conv3_1 | 512 × 28 × 28 | [1 × 1, 128; 3 × 3, 128, STRIDE 2; 1 × 1, 512] |
| Conv3_2 | 512 × 28 × 28 | [1 × 1, 128; HT-P, 128; 1 × 1, 512] |
| Conv3_3 | 512 × 28 × 28 | [1 × 1, 128; 3 × 3, 128; 1 × 1, 512] |
| Conv3_4 | 512 × 28 × 28 | [1 × 1, 128; HT-P, 128; 1 × 1, 512] |
| Conv4_1 | 1024 × 14 × 14 | [1 × 1, 256; 3 × 3, 256, STRIDE 2; 1 × 1, 1024] |
| Conv4_2 | 1024 × 14 × 14 | [1 × 1, 256; HT-P, 256; 1 × 1, 1024] |
| Conv4_3 | 1024 × 14 × 14 | [1 × 1, 256; 3 × 3, 256; 1 × 1, 1024] |
| Conv4_4 | 1024 × 14 × 14 | [1 × 1, 256; HT-P, 256; 1 × 1, 1024] |
| Conv4_5 | 1024 × 14 × 14 | [1 × 1, 256; 3 × 3, 256; 1 × 1, 1024] |
| Conv4_6 | 1024 × 14 × 14 | [1 × 1, 256; HT-P, 256; 1 × 1, 1024] |
| Conv5_1 | 2048 × 7 × 7 | [1 × 1, 512; 3 × 3, 512, STRIDE 2; 1 × 1, 2048] |
| Conv5_2 | 2048 × 7 × 7 | [1 × 1, 512; HT-P, 512; 1 × 1, 2048] |
| Conv5_3 | 2048 × 7 × 7 | [1 × 1, 512; 3 × 3, 512; 1 × 1, 2048] |
| GAP | 2048 | GLOBAL AVERAGE POOLING |
| OUTPUT | 1000 | LINEAR, 1000 |
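As a complement to the induction proof in Appendix B, the Hadamard convolution theorem can be checked numerically. The sketch below (an illustration written for this text, not the paper's code) uses the unnormalized transform, for which $\mathcal{H}(\mathbf{a} *_d \mathbf{x}) = \mathcal{H}(\mathbf{a}) \circ \mathcal{H}(\mathbf{x})$ holds with no extra constant:

```python
def fwht(x):
    """Unnormalized fast Walsh-Hadamard transform; length a power of two."""
    x = list(x)
    n = len(x)
    h = 1
    while h < n:
        for i in range(0, n, 2 * h):
            for j in range(i, i + h):
                a, b = x[j], x[j + h]
                x[j], x[j + h] = a + b, a - b
        h *= 2
    return x

def dyadic_conv(a, x):
    """Dyadic (XOR) convolution: y[k] = sum_m a[m] * x[k XOR m]."""
    n = len(a)
    return [sum(a[m] * x[k ^ m] for m in range(n)) for k in range(n)]

a = [1, 2, 3, 4, 5, 6, 7, 8]
x = [8, 7, 6, 5, 4, 3, 2, 1]
lhs = fwht(dyadic_conv(a, x))                      # transform of the convolution
rhs = [u * v for u, v in zip(fwht(a), fwht(x))]    # element-wise product of transforms
print(lhs == rhs)  # True
```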
# A Kernel-Based View of Language Model Fine-Tuning

Sadhika Malladi Alexander Wettig Dingli Yu Danqi Chen Sanjeev Arora

# Abstract

It has become standard to solve NLP tasks by fine-tuning pre-trained language models (LMs), especially in low-data settings. There is minimal theoretical understanding of this empirical success, e.g., of why fine-tuning a model with $10^{8}$ or more parameters on a couple dozen training points does not result in overfitting. We investigate whether the Neural Tangent Kernel (NTK)—which originated as a model to study the gradient descent dynamics of infinitely wide networks with suitable random initialization—describes fine-tuning of pre-trained LMs.
This study was inspired by the decent performance of the NTK for computer vision tasks (Wei et al., 2022). We extend the NTK formalism to Adam and use Tensor Programs (Yang, 2020b) to characterize conditions under which the NTK lens may describe fine-tuning updates to pre-trained language models. Extensive experiments on 14 NLP tasks validate our theory and show that formulating the downstream task as a masked word prediction problem through prompting often induces kernel-based dynamics during fine-tuning. Finally, we use this kernel view to propose an explanation for the success of parameter-efficient subspace-based fine-tuning methods.$^{1}$

# 1. Introduction

It is now customary to solve most supervised natural language processing (NLP) tasks such as topic classification and textual entailment by fine-tuning a pre-trained language model (e.g., Devlin et al., 2019; Liu et al., 2020b; Clark et al., 2020; Raffel et al., 2020; Joshi et al., 2020). We lack theoretical understanding of this fine-tuning paradigm. Why do we not see over-fitting when fine-tuning a very large language model using a couple dozen instances of the supervised task? Why is fine-tuning so sensitive to details such as whether or not we include a prompt (e.g., adding "It was [great/terrible]" for sentiment analysis (Schick & Schütze, 2021; Gao et al., 2021))? Why does restricting optimization to a low-rank subspace of model parameters (Hu et al., 2021; Li et al., 2018; Aghajanyan et al., 2021) still result in performance comparable to full fine-tuning?

Department of Computer Science, Princeton University, Princeton, NJ, USA. Correspondence to: Sadhika Malladi.

Proceedings of the $40^{th}$ International Conference on Machine Learning, Honolulu, Hawaii, USA. PMLR 202, 2023. Copyright 2023 by the author(s).

$^{1}$Our code and pre-computed kernels are publicly available at https://github.com/princeton-nlp/LM-Kernel-FT.
Answering such questions requires understanding how the sequence of parameter updates changes in various scenarios, e.g., with the addition of a prompt or the introduction of randomly initialized parameters. The current theory of deep learning, at first sight, seems too primitive to address such questions, especially since fine-tuning has to start from a parameter initialization inherited from pre-training.

Recently, Wei et al. (2022) suggested replacing fine-tuning with the Neural Tangent Kernel (NTK), an idea invented for the study of infinite-width deep neural networks (Jacot et al., 2018; Du et al., 2019a) and previously applied to solving vision tasks with infinitely wide ConvNets (Arora et al., 2019b). They note that the NTK can be defined for any neural model $f$ and any initialization $\theta_0$ by representing an input $\xi$ by the gradient it induces, $\nabla f(\xi; \theta_0)$ , which yields a kernel matrix:

$$
\mathcal{K}(\xi, \xi') = \langle \nabla f(\xi; \theta_0), \nabla f(\xi'; \theta_0) \rangle. \tag{1}
$$

This kernel is well-defined for any parameter vector $\theta_0$ . However, for an infinite-width network initialized with $\theta_0$ sampled from suitably scaled Gaussians, it can be shown that the kernel matrix is unchanged during gradient descent, which turns the classification task into a form of kernel regression with respect to this kernel (Jacot et al., 2018). In the fine-tuning setting, however, the initialization $\theta_0$ is inherited from the pre-trained network and not sampled from a Gaussian distribution. Nevertheless, Wei et al. (2022) found that kernel regression using this "empirical NTK" (eNTK) defined with the inherited $\theta_0$ performs well, achieving classification accuracy within $6\%$ absolute of actual fine-tuning on several image recognition tasks.
In other words, their work hints that a mathematical understanding of the fine-tuning phenomenon (e.g., its sample efficiency) could proceed via the theory of kernel classifiers.

The current paper furthers an empirical and theoretical understanding of the pre-training and fine-tuning (FT) paradigm for NLP tasks. Our contributions are:

1. We formally extend the standard NTK theory developed for gradient descent to characterize kernel-based dynamics when training with Adam. We propose and rigorously prove the correctness of a new kernel formula relying on the sign of the gradient to describe early-stage training (e.g., fine-tuning) with Adam (Section 4).
2. We formally extend infinite-width analysis to account for a pre-trained initialization and characterize conditions under which fine-tuning can exhibit kernel behavior. Using insights into the importance of prompting, we formally prove the existence of a rigorous mechanism through which prompt-based FT of complex architectures (e.g., Transformers) can exhibit kernel behavior (Section 5). The analysis proceeds in the context of networks whose widths go to infinity (i.e., through the Tensor Programs framework), but unlike standard NTK theory, it allows a non-random initialization (i.e., one that results from pre-training).
3. We perform an extensive empirical analysis on 14 diverse NLP tasks to reveal when and to what extent fine-tuning exhibits kernel behavior. We find that using a meaningful prompt is crucial for the eNTK to achieve good performance, suggesting that prompting induces a well-characterized optimization benefit for fine-tuning. Further experiments reveal that the trajectory of prompt-based FT can often be described by kernel-based dynamics when the eNTK succeeds (Section 6).
4. We straightforwardly apply the kernel view of FT dynamics to formally analyze the success of fine-tuning methods that update in a low-rank subspace of model parameters (e.g., LoRA; Hu et al., 2021).
These results in Section 7 highlight how a kernel-based understanding of FT can aid in the practical design and theoretical analysis of efficient variants.

# 2. Related Work

Kernel view of training. The infinite-width limit is a well-studied theoretical model for deep network optimization. Jacot et al. (2018) introduced the NTK to capture training a deep and infinitely wide neural network from a random initialization. Subsequent experiments showed that the kernels underperformed on standard tasks (Arora et al., 2019b) but performed well on small datasets (i.e., hundreds of examples) (Arora et al., 2020). Many works (Allen-Zhu et al., 2019a;b; Arora et al., 2019a; Du et al., 2019b;a; Li & Liang, 2018; Zou et al., 2018; Cao & Gu, 2019) have since applied this lens to understand the optimization and generalization behavior of deep networks. However, such analyses do not directly apply to the pre-training and fine-tuning framework because (1) the network trained during FT is inherited and non-random; and (2) LMs are often trained with Adam, whereas the NTK formula only describes training an infinitely wide network with SGD. In this work, we handle a non-random (i.e., pre-trained) initialization by assuming that the pre-training task is sufficiently related to the downstream task (Definition 5.3), and we derive new kernels to model early-stage training with Adam (Section 4).

Theory of self-supervised learning and transfer learning. Several existing theoretical works on transfer learning study the performance of linear probing on a representation to provide guarantees on various tasks related to the original training data (Du et al., 2021; Tripuraneni et al., 2020; Wu et al., 2020). Chua et al. (2021) show that regularized fine-tuning in a meta-learning setting exhibits kernel behavior if the pre-training and downstream tasks are closely related. Along similar lines, Mu et al. (2020); Maddox et al. (2021); Achille et al.
(2021) suggest through experiments and theory that gradient-based features, corresponding to a linearization of fine-tuning, can perform well on visual downstream tasks. We characterize when kernel dynamics describe fine-tuning a pre-trained masked language model on downstream language understanding tasks.

Saunshi et al. (2021) study autoregressive language models to rigorously characterize why prompting can improve zero-shot task performance, but their analysis precludes an investigation of FT. We focus on the masked language modeling pre-training objective, but it is worth noting that there are many works (Saunshi et al., 2019; Tosh et al., 2021a;b; Lee et al., 2021; Tsai et al., 2021) studying transfer when pre-training with a contrastive objective. However, experiments on language modeling (Abnar et al., 2021) and contrastive learning (Saunshi et al., 2022) recently demonstrated that properties of transfer between self-supervised pre-training and supervised FT cannot be fully captured by model-agnostic analyses that directly relate the pre-training and downstream task errors. Kernel theory provides a principled optimization- and architecture-aware framework to analyze FT.

Optimization of Transformers. Several works (Zhang et al., 2020; Liu et al., 2020a; Li et al., 2022) have documented issues with optimizing Transformer-based architectures with SGD instead of Adam. To study the unique properties of optimizing Transformers with Adam, we derive a new kernel formula (Theorem 4.3) to capture early-stage training with Adam. Table 2 compares the performance of this kernel to FT with Adam and SGD.

Variants of fine-tuning methods. A standard way of fine-tuning pre-trained LMs, as introduced in Radford et al. (2018); Devlin et al. (2019), is to add a linear classifier on top of a pre-trained encoder and update all the parameters together.
Subsequent work (Schick & Schütze, 2021; Gao et al., 2021) formulated downstream tasks as a language modeling problem (i.e., prompt-based FT) and demonstrated empirical success in low-data scenarios (see Liu et al. (2022) for a comprehensive survey). Another line of research studies parameter-efficient fine-tuning methods, in which only a subset of model parameters are updated (Lester et al., 2021; Ben Zaken et al., 2022; Li & Liang, 2021) or the parameter updates are restricted to a low-dimensional subspace (Hu et al., 2021; Aghajanyan et al., 2021). We show that good eNTK performance arises only when studying prompt-based FT in Section 6 (Figure 1), and we later show in Section 7 that subspace-based FT methods such as LoRA (Hu et al., 2021) have a simple interpretation through the kernel.

# 3. Preliminaries

# 3.1. Pre-Training and Fine-Tuning Paradigm

We focus our attention on masked language models (MLMs), such as BERT (Devlin et al., 2019) and RoBERTa (Liu et al., 2020b), which are trained to minimize the cross-entropy loss on independently predicting masked tokens (i.e., a $|\mathcal{V}|$-way classification task, where $\mathcal{V}$ is the vocabulary). Given a text input $s$ of length $T$ from the pre-training distribution $\mathcal{S}_{\mathrm{PT}}$, a small percentage (e.g., $15\%$) of the tokens are replaced with [MASK] tokens. This masked input is then fed into the representation function $h: \mathcal{S}_{\mathrm{PT}} \to \mathbb{R}^{T \times n}$ (e.g., a Transformer encoder) to produce a low-dimensional contextual embedding for each position in the input. The contextual embeddings are independently multiplied by a classifier head (i.e., word embeddings) $\Phi \in \mathbb{R}^{n \times |\mathcal{V}|}$ to produce logits, which are used to compute the probability of a token filling each masked position.

Using a pre-trained model to solve downstream tasks effectively has been a highly active area of research.
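To make the shapes concrete, the MLM head described above can be sketched in a few lines of numpy; the encoder is replaced by random embeddings, and all sizes, mask positions, and token ids below are toy assumptions rather than details from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins (all sizes are assumptions): sequence length T, width n,
# vocabulary size |V|.
T, n, V = 8, 16, 100
H = rng.normal(size=(T, n))      # contextual embeddings h(s), one row per position
Phi = rng.normal(size=(n, V))    # word-embedding classifier head

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

# Each position's embedding is independently multiplied by Phi to get logits.
logits = H @ Phi                 # shape (T, |V|)
probs = softmax(logits)

# Cross-entropy on the masked positions only (~15% of tokens in pre-training).
masked_pos = np.array([2, 5])    # hypothetical [MASK] positions
targets = np.array([7, 42])      # hypothetical true token ids
loss = -np.log(probs[masked_pos, targets]).mean()
```

Prompt-based FT (described next) reuses exactly this head, restricted to the columns of `Phi` for the task's label words.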
We focus on fine-tuning (FT) methods, which adapt the pre-trained model to a new input distribution $\mathcal{S}_{\mathrm{FT}}$ using additional training on the $C$-way downstream classification task.

1. Standard FT (Devlin et al., 2019; Liu et al., 2020b): To solve a $C$-way downstream classification task, initialize and learn$^2$ a new classifier head $\Gamma: \mathbb{R}^n \to \mathbb{R}^C$ on top of the contextual [CLS] embedding, denoted $h_{[\mathrm{CLS}]}$. In this case, the model output $f: \mathcal{S}_{\mathrm{FT}} \to \mathbb{R}^C$ for the eNTK construction is $f(s) = \Gamma(h_{[\mathrm{CLS}]}(s))$.
2. Prompt-based FT (Schick & Schütze, 2021; Gao et al., 2021): Add a natural language prompt (e.g., "This is [MASK].") to the downstream task input, and use the pre-trained MLM to fill in the masked token. Compute the logits over task-relevant words (e.g., "great" and "terrible") using the corresponding columns of $\Phi$, denoted $\tilde{\Phi} \in \mathbb{R}^{n \times C}$. These logits serve as surrogates to solve the downstream task. In this case, the model output $f: \mathcal{S}_{\mathrm{FT}} \to \mathbb{R}^C$ for the eNTK construction is $f(s) = \tilde{\Phi}^\top h_{[\mathrm{MASK}]}(s)$.

# 3.2. Kernel Behavior

We consider a neural network $f(\xi; \theta)$ that takes input $\xi$ and computes a scalar output using $\theta$ as the parameters. Gradient-based updates to the model parameters involve computing a loss function $\ell$ and $\frac{\partial \ell}{\partial \theta}$, which can be decomposed by the chain rule as $\frac{\partial \ell}{\partial f} \frac{\partial f}{\partial \theta}$. The first term is defined as the output derivative (Definition 3.1), and the second term is used to define kernel behavior (Definition 3.2).

Definition 3.1 (Output Derivative).
The output derivative $\chi(\xi, y, \theta)$ for a network $f$ with parameters $\theta$, loss function $\ell$, and input $\xi$ with label $y$ is defined as $\chi(\xi, y, \theta) = \frac{\partial \ell(f(\xi; \theta), y)}{\partial f}$. We also define the output derivative applied at time $t$ as $\chi_t = \chi(\xi_t, y_t, \theta_{t-1})$, where $\xi_t$ is the input at time $t$ with label $y_t$. For ease of notation, we often absorb $y$ into $\xi$ and write $\chi(\xi, \theta)$ and $\chi(\xi, f)$ interchangeably.

Below, we adapt the definition of kernel-based learning (i.e., the lazy regime in Woodworth et al. (2020)) to an arbitrary initialization.

Definition 3.2 (Kernel Behavior). Let $\theta_t$ be the parameters after $t$ steps of training by a gradient-based optimization algorithm, and let $\xi$ be an arbitrary fixed input. We say this training process of the network demonstrates kernel behavior if the following properties are satisfied.

1. Linearization: The change of the network can be well-approximated by its first-order Taylor expansion, i.e.,

$$
f(\xi; \theta_t) - f(\xi; \theta_{t-1}) \approx \langle \nabla f(\xi; \theta_{t-1}), \theta_t - \theta_{t-1} \rangle;
$$

2. Fixed Features: The gradient at step $t$ is approximately the same as before training, i.e., $\nabla f(\xi; \theta_t) \approx \nabla f(\xi; \theta_0)$.

Here, $\nabla f$ denotes the gradient of $f$ w.r.t. $\theta$. "Closeness to kernel behavior" is quantified using the difference between the quantities on the two sides of the $\approx$ symbol. We formalize the approximations in Definition C.3.

Past work has shown that if gradient-based training exhibits kernel behavior, then the function change can be expressed in terms of a fixed kernel (i.e., the kernel analog).

Definition 3.3 (Kernel Analog). Suppose optimization of the parameters $\theta$ of a model $f$ using the gradient-based update algorithm $\mathcal{A}$ exhibits kernel behavior (Definition 3.2).
Then, we say that a kernel $\mathcal{K}^{(\mathcal{A})}$ is the kernel analog of the optimization algorithm $\mathcal{A}$ if for every $t > 0$, there exists $\nu_t$ such that for any input $\xi$,

$$
f(\xi; \theta_t) - f(\xi; \theta_{t-1}) \approx -\nu_t \mathcal{K}^{(\mathcal{A})}(\xi, \xi_t) \qquad (2)
$$

where $\xi_t$ is the training input of step $t$ and $\theta_t$ denotes the parameters after step $t$.

We illustrate the connection between the kernel analog and kernel behavior when using SGD. If SGD exhibits kernel behavior, then, for a fixed input $\xi$, we can write

$$
\begin{array}{l} f(\xi; \theta_t) - f(\xi; \theta_{t-1}) \approx \langle \nabla f(\xi; \theta_{t-1}), \theta_t - \theta_{t-1} \rangle \\ = \left\langle \nabla f(\xi; \theta_{t-1}), -\eta \chi_t \nabla f(\xi_t; \theta_{t-1}) \right\rangle \\ \approx -\eta \chi_t \mathcal{K}^{(\mathrm{SGD})}(\xi, \xi_t) \end{array}
$$

where the approximations follow from the Linearization and Fixed Features properties, respectively, $\eta$ is the learning rate, $\chi_t$ is the output derivative (Definition 3.1), and $\mathcal{K}^{(\mathrm{SGD})}$ is the kernel analog of SGD with $\nu_t = \eta \chi_t$. Notably, $\mathcal{K}^{(\mathrm{SGD})}$ is the well-known neural tangent kernel (NTK) formula derived in Jacot et al. (2018), which represents an input $\xi$ as the resulting gradient $\nabla f(\xi; \theta_0)$.

Definition 3.4 (Neural Tangent Kernel). $\mathcal{K}^{(\mathrm{SGD})}(\xi, \xi') = \langle \nabla f(\xi; \theta_0), \nabla f(\xi'; \theta_0) \rangle$.

Given a kernel $\mathcal{K}$, one can solve a classification task by learning $\alpha_i$ to minimize the empirical risk of $\sum_i \alpha_i \mathcal{K}(\cdot, \xi_i)$, where $\{\xi_i\}$ is the training data (Appendix A).
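As a concrete illustration of $\mathcal{K}^{(\mathrm{SGD})}$ and of fitting the $\alpha_i$, the following sketch computes the NTK of a toy one-hidden-layer network and solves a small (ridge-regularized) kernel regression problem. The architecture, sizes, labels, and ridge term are illustrative assumptions, not the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(0)
d, m = 5, 32                                  # toy input dim and hidden width

# Toy one-hidden-layer network f(x) = v . tanh(W x) standing in for the model;
# theta_0 = (W, v) plays the role of the (pre-trained) initialization.
W = rng.normal(size=(m, d)) / np.sqrt(d)
v = rng.normal(size=m) / np.sqrt(m)

def grad_f(x):
    """Flattened gradient of f w.r.t. (W, v), i.e., the NTK feature map."""
    h = np.tanh(W @ x)
    dW = np.outer(v * (1 - h ** 2), x)        # df/dW
    return np.concatenate([dW.ravel(), h])    # h = df/dv

X = rng.normal(size=(20, d))                  # toy training inputs xi_i
y = np.sign(X[:, 0])                          # toy +/-1 labels
G = np.stack([grad_f(x) for x in X])          # gradient features at theta_0
K = G @ G.T                                   # K_SGD(xi, xi') = <grad f, grad f>

# Learn alpha by kernel regression, then predict with sum_i alpha_i K(., xi_i).
alpha = np.linalg.solve(K + 1e-6 * np.eye(len(X)), y)
predict = lambda x: alpha @ (G @ grad_f(x))
train_acc = np.mean(np.sign([predict(x) for x in X]) == y)
```

In the eNTK experiments of Section 6, the same construction is used with $\nabla f(\xi; \theta_0)$ computed by backpropagation through the pre-trained LM.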
If training exhibits kernel behavior and $\mathcal{K}$ is the kernel analog for the optimizer, then solving the kernel regression problem is equivalent to training the network (Jacot et al., 2018).

In Section 4, we derive the kernel analog for SignGD (i.e., an early-stage approximation of Adam), and in Section 6, we compare its eNTK performance against Adam FT. The eNTK computation relies on two design choices for the setting: (1) what the model output $f(\xi; \theta)$ is, and (2) which optimizer $\mathcal{A}$ is used. We choose $f$ based on the FT setting (Section 3.1) and $\mathcal{A}$ as SGD or Adam.

# 4. Kernel Derivation for Adam

Computing the eNTK requires using the kernel analog (Definition 3.3) of the chosen optimization algorithm $\mathcal{A}$. However, it is difficult to construct a long-term kernel analog for Adam, because its adaptivity causes each update to depend on the entire gradient history. Previous work has shown that in the early stages of training, full-batch (Ma et al., 2022) and mini-batch (Malladi et al., 2022) Adam with a small learning rate compute the moving averages for the moment estimates in a small neighborhood, so the Adam update reduces to coordinate-wise normalization of the gradient. This optimization algorithm is called SignGD.

Definition 4.1 (SignGD). SignGD is a gradient-based optimization algorithm that updates parameters as $\theta_t = \theta_{t-1} - \eta \operatorname{sign}(\nabla \ell_t(\xi_t; \theta_{t-1}))$, where $\operatorname{sign}$ is applied element-wise.

In Table 10, we provide empirical evidence that fine-tuning with SignGD yields performance comparable to Adam. We define the sign-based kernel below and prove it to be the correct kernel analog for SignGD.

Definition 4.2 (Asymmetric SignGD Kernel).
$\mathcal{K}^{(\mathrm{A-SignGD})}(\xi, \xi') = \langle \nabla f(\xi; \theta_0), \operatorname{sign}(\nabla f(\xi'; \theta_0)) \rangle$.

Theorem 4.3 (Informal version of Theorem C.4). If a network is trained with SignGD and exhibits kernel behavior (Definition 3.2), then the training dynamics follow

$$
f(\xi; \theta_t) - f(\xi; \theta_{t-1}) \approx -\eta \operatorname{sign}(\chi_t) \mathcal{K}^{(\mathrm{A-SignGD})}(\xi, \xi_t),
$$

where $\chi_t$ is the output derivative (Definition 3.1).

Proof sketch. The Linearization property in Definition 3.2 implies that

$$
\begin{array}{l} f(\xi; \theta_t) - f(\xi; \theta_{t-1}) \approx \langle \nabla f(\xi; \theta_{t-1}), \theta_t - \theta_{t-1} \rangle \\ = -\eta \operatorname{sign}(\chi_t) \langle \nabla f(\xi; \theta_{t-1}), \operatorname{sign}(\nabla f(\xi_t; \theta_{t-1})) \rangle. \end{array}
$$

Then, by the Fixed Features property in Definition 3.2,

$$
\begin{array}{l} \langle \nabla f(\xi; \theta_{t-1}), \operatorname{sign}(\nabla f(\xi_t; \theta_{t-1})) \rangle \approx \\ \langle \nabla f(\xi; \theta_0), \operatorname{sign}(\nabla f(\xi_t; \theta_0)) \rangle = \mathcal{K}^{(\mathrm{A-SignGD})}(\xi, \xi_t). \quad \square \end{array}
$$

We solve the asymmetric kernel regression as suggested in He et al. (2022), but the difficulties of solving the kernel regression problem with an asymmetric kernel (Appendix A.3) motivate us to also use the symmetric SignGD kernel.

Definition 4.4 (SignGD Kernel).
$\mathcal{K}^{(\mathrm{SignGD})}(\xi, \xi') = \langle \operatorname{sign}(\nabla f(\xi; \theta_0)), \operatorname{sign}(\nabla f(\xi'; \theta_0)) \rangle$.

Unlike the standard NTK formula for SGD, the kernel analog for Adam uses the sign function because early-stage Adam dynamics are agnostic to the scales of the gradients. Concurrent work by Littwin & Yang (2023) more formally extends the Tensor Programs framework and finds that no kernel can describe general (e.g., late-stage) Adam training when the batch size is large.

# 5. Theory: Prompt-Based Fine-Tuning Can Exhibit Kernel Behavior

We give a plausible mechanism for how prompt-based FT can exhibit kernel behavior (Definition 3.2) as the network width grows large. We start by formalizing how changing the architecture width impacts pre-training.

Definition 5.1 (Pre-Training Scheme). A pre-training scheme $(\mathcal{X}, \mathcal{A}, \mathcal{F}^n)$ with width $n$ contains the dataset $\mathcal{X}$, the optimizer $\mathcal{A}$ and its hyperparameters, and a model architecture $\mathcal{F}^n$. Let $f^n \sim (\mathcal{X}, \mathcal{A}, \mathcal{F}^n)$ denote a model resulting from training the architecture $\mathcal{F}^n$ on the dataset $\mathcal{X}$ with optimizer $\mathcal{A}$.

Remark 5.2. The reliance of the architecture on the width is given by Tensor Programs (Yang, 2020a): for example, in a Transformer, $n$ corresponds to the embedding dimension.

We now connect pre-training to the downstream task. Analogous to Saunshi et al. (2021), we reason that prompting transforms the downstream task into a fill-in-the-blank problem, and thus the downstream task can be viewed as a sub-case of the pre-training task. We then assume that a wider pre-trained network will be better at filling in masked tokens and that an infinitely wide pre-trained network can solve the downstream task perfectly when using a suitable prompt.

Definition 5.3 (Natural Task in the Infinite-Width Limit).
A downstream task $\Xi$ is natural with respect to a pre-training scheme $(\mathcal{X}, \mathcal{A}, \mathcal{F}^n)$ if, for any pre-trained model $f^n \sim (\mathcal{X}, \mathcal{A}, \mathcal{F}^n)$ and any downstream example $(\xi, y) \in \Xi$,

$$
\lim_{n \rightarrow \infty} \chi(\xi, y, f^n) = 0, \tag{3}
$$

where $\chi$ is the output derivative (Definition 3.1).

Remark 5.4. Experiments in Section 6 and Appendix B.2 suggest that the FT optimization dynamics depend on the choice of prompt. In the above notation, the prompt is included in the downstream task dataset $\Xi$. Only tasks with a well-suited prompt can be natural in the infinite-width limit. Tasks solved by FT using a randomly initialized head cannot satisfy the condition, since $\chi$ will not vanish even for an infinitely wide pre-trained network at the start of FT.

Although Definition 5.3 is asymptotic, we design a cheap empirical test using two models of different widths $n_1 \neq n_2$ and the same depth resulting from otherwise identical pre-training schemes: $f^{n_1} \sim (\mathcal{X}, \mathcal{A}, \mathcal{F}^{n_1})$ and $f^{n_2} \sim (\mathcal{X}, \mathcal{A}, \mathcal{F}^{n_2})$. We measure whether $\chi$ decreases with width for every downstream example $(\xi, y) \in \Xi$ without making any gradient updates. This condition is necessary but not sufficient for the task to be natural in the infinite-width limit. See Appendix B.1.

To study the behavior of FT, one also needs to make assumptions about the parameters that result from pre-training. We assume that the network can be written as a Tensor Program (Yang, 2019; 2020a;b), which is sufficiently general to allow our theory to describe many complex architectures (e.g., Transformers).
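Returning to the empirical width test above: for cross-entropy loss the output derivative has a closed form (softmax of the logits minus a one-hot label), so the test reduces to comparing $\|\chi\|$ across the two checkpoints. A minimal sketch, where the hard-coded logits are hypothetical stand-ins for the two models' outputs on one example:

```python
import numpy as np

def output_derivative(logits, y):
    """chi = d CE(f, y) / d f = softmax(f) - onehot(y) for cross-entropy loss."""
    p = np.exp(logits - logits.max())
    p /= p.sum()
    chi = p.copy()
    chi[y] -= 1.0
    return chi

# Hypothetical logits on one downstream example from two checkpoints of
# widths n1 < n2 (hard-coded here; in practice, run both pre-trained models).
logits_narrow = np.array([1.0, 0.2])   # narrower model: less confident
logits_wide = np.array([3.0, 0.2])     # wider model: more confident on label 0
y = 0

chi_narrow = output_derivative(logits_narrow, y)
chi_wide = output_derivative(logits_wide, y)

# The (necessary, not sufficient) check passes if chi shrinks with width.
assert np.linalg.norm(chi_wide) < np.linalg.norm(chi_narrow)
```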
To allow the analysis to proceed by way of Tensor Programs, the network must be (1) stable: its output does not grow with width (i.e., the infinite-width limit is meaningful); and (2) non-trivial: its output can be updated during fine-tuning (i.e., learning can occur).

Theorem 5.5 (Informal version of Theorem C.5). Assume the downstream task $\Xi$ is natural in the infinite-width limit with respect to a pre-training scheme $(\mathcal{X}, \mathcal{A}, \mathcal{F}^n)$, and the model $f \sim (\mathcal{X}, \mathcal{A}, \mathcal{F}^n)$ is stable, non-trivial, and can be written as a Tensor Program. Then prompt-based FT of $f$ will exhibit the Linearization and Fixed Features properties of kernel behavior (Definition 3.2).

The theorem formalizes the intuition that if the pre-trained network is already decent at solving the downstream task, then the network needs to adapt only mildly during fine-tuning. Notably, we extend standard NTK theory to account for an arbitrary initialization and to characterize early-stage training with Adam using results from Section 4.

Our theoretical results in this section and Section 4 apply to autoregressive and masked language models (MLMs), but we limit our fine-tuning experiments to MLMs, as they are known to perform better after fine-tuning.

# 6. Experiments

We compute the eNTK as described in Section 3 for different optimization algorithms and FT settings. eNTK performance being comparable to FT performance is a necessary but not sufficient condition for FT to exhibit kernel behavior (Definition 3.2), so we also directly measure whether the Linearization and Fixed Features properties hold (Section 6.2). If the eNTK can solve the task, then eNTK regression provides an alternate method to use the pre-trained model to solve a downstream task, but the kernel lens only admits a theoretical analysis of FT optimization dynamics if both properties of kernel behavior are satisfied (Definition 3.2; see Section 6.2).
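Given per-example gradient features at the pre-trained initialization, the three kernel formulas compared in our experiments differ only in where the sign function is applied. A minimal sketch with random placeholder gradients (real features would come from backpropagation through the model):

```python
import numpy as np

rng = np.random.default_rng(0)

# G[i] = gradient of the model output on example i w.r.t. all parameters,
# evaluated at the pre-trained initialization. Random placeholders here;
# real features come from backpropagation through the pre-trained LM.
num_examples, num_params = 6, 50
G = rng.normal(size=(num_examples, num_params))

K_sgd = G @ G.T                          # Definition 3.4 (NTK, for SGD)
K_signgd = np.sign(G) @ np.sign(G).T     # Definition 4.4 (symmetric)
K_a_signgd = G @ np.sign(G).T            # Definition 4.2 (asymmetric)
```

Note that `K_sgd` and `K_signgd` are symmetric by construction, while `K_a_signgd` is generally not, which is why the asymmetric variant requires the non-standard solver of Appendix A.3.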
For tasks that the eNTK cannot solve, we conjecture that the prompt is not well-designed for the task (in the sense of Definition 5.3), forcing the pre-trained model to adapt more during FT.

Our experiments are in the few-shot setting with manual prompt templates from Gao et al. (2021). We consider 14 NLP tasks, divided into 8 single-sentence and 6 sentence-pair datasets, which cover: sentiment analysis (SST-2, SST-5,
| Task | SST-2 | SST-5 | MR | CR | MPQA | Subj | TREC | AG News | MNLI | SNLI | QNLI | RTE | MRPC | QQP |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Num. classes $C$ | 2 | 5 | 2 | 2 | 2 | 2 | 6 | 4 | 3 | 3 | 2 | 2 | 2 | 2 |

Task type labels in the original table (each spanning a group of columns): sentiment, polarity, subj., topic clf., entailment, para. detect. The body of the table reports, for three comparisons — SGD vs. $\mathcal{K}^{(\mathrm{SGD})}$ at 16-shot, SGD vs. $\mathcal{K}^{(\mathrm{SGD})}$ at 64-shot, and Adam vs. $\{\mathcal{K}^{(\mathrm{SignGD})}, \mathcal{K}^{(\mathrm{A-SignGD})}\}$ at 16-shot — per-seed indicators for "eNTK solves task", "Linearization", "Fixed Features", and "⇒ Kernel behavior"; these indicators are graphical (green dots and red circles) and are not reproduced here.
Table 1. We find that 8 out of 14 tasks consistently induce kernel behavior across 5 subsampled datasets. Each dot represents a seed (i.e., a different $k$-shot dataset). A green dot indicates that the seed satisfies the criterion, and a red circle indicates that it does not. We say the eNTK solves the task if the kernel analog achieves at least $90\%$ of the fine-tuning performance. We say that the Linearization property holds if the linearized model improves the pre-trained model by at least $50\%$ of the amount that fine-tuning improves it. We say that Fixed Features is satisfied if the average element-wise distance between the kernels before and after fine-tuning is less than 2.0. The formal definition of kernel behavior (Definition 3.2) does not prescribe measurable numerical thresholds for these properties, so we selected them manually for ease of presentation. We urge readers to examine the data in Table 2 and Figure 2 directly for a more nuanced view.

MR, CR); classifying an opinion's polarity (MPQA), subjectivity (Subj), question type (TREC), or news topic (AG News); natural language inference (MNLI, SNLI, QNLI, RTE); and paraphrase detection (MRPC, QQP). For each task, we randomly sample 5 $k$-shot datasets with $k$ training examples for each label. We show experiments for $k \in \{16, 64\}$ using a pre-trained RoBERTa-base (Liu et al., 2020b) for all FT and eNTK experiments. We consider $\mathcal{K}^{(\mathrm{SignGD})}$ and $\mathcal{K}^{(\mathrm{A-SignGD})}$ as kernel analogs for Adam. See Appendix A for more details and experiments on $k = 512$.

We summarize our results in Table 1. We find that the eNTK can consistently solve 12 out of 14 tasks comparably to prompt-based fine-tuning, out of which 8 induce kernel behavior during fine-tuning. Our results show that FT optimization dynamics depend on the downstream task and the inclusion of a meaningful prompt.

# 6.1. Kernel Performance on Downstream Tasks

Prompting is critical for the eNTK to match FT performance. We measure the eNTK performance in the standard and prompt-based FT settings across SST-2, MR, CR, QNLI, QQP, and RTE (Figure 1 and Table 6). In the standard FT setting, $\mathcal{K}^{(\mathrm{SGD})}$ and SGD-FT demonstrate a gap of up to $16\%$ absolute on tasks that exhibit only a $3\%$ gap in the prompt-based setting. Table 6 demonstrates that the inclusion of more data improves the eNTK performance in the unprompted setting, but kernels computed with a prompt consistently outperform the standard ones. We explore the importance of the choice of prompt format in Appendix B.2. These results agree with our theoretical analysis that tasks must use a meaningful prompt in order to induce kernel behavior (Definition 5.3).

SGD performs comparably to Adam in prompt-based FT. Table 2 shows that Adam and SGD perform within $4\%$ absolute of each other when using a prompt, suggesting that known difficulties in optimizing Transformers with SGD (Li et al., 2022; Zhang et al., 2020; Liu et al., 2020a) do not play a substantial role during prompt-based FT. Indeed, we expect that the benefit of Adam over SGD is reduced when the task is simple enough to induce kernel behavior.

Prompt-based eNTK matches FT on most tasks. We compare SGD-FT to $\mathcal{K}^{(\mathrm{SGD})}$ and Adam-FT to $\mathcal{K}^{(\mathrm{A-SignGD})}$ in Table 2. We observe that for 10 out of 14 tasks, the kernel analog can achieve accuracy within $10\%$ of the corresponding FT performance for $k = 16$ and $k = 64$.
| $k$-shot | Method | SST-2 | SST-5 | MR | CR | MPQA | Subj | TREC | AG News |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 16 | SGD-FT | 89.0 (1.5) | 44.6 (1.4) | 83.2 (2.4) | 93.3 (0.2) | 83.3 (1.3) | 88.5 (2.6) | 80.3 (7.2) | 84.2 (1.1) |
| | $\mathcal{K}^{(\mathrm{SGD})}$ | 88.3 (0.3) | 43.6 (2.2) | 84.7 (1.5) | 93.2 (0.9) | 76.4 (2.7) | 88.6 (1.3) | 56.0 (9.2) | 82.1 (2.0) |
| | Adam-FT | 88.3 (1.2) | 45.4 (2.6) | 81.3 (6.1) | 93.0 (1.6) | 82.8 (2.2) | 87.4 (2.1) | 79.6 (6.1) | 84.0 (1.6) |
| | $\mathcal{K}^{(\mathrm{SignGD})}$ | 88.3 (0.5) | 42.2 (3.9) | 84.3 (1.5) | 93.7 (0.5) | 76.7 (3.3) | 89.2 (2.0) | 58.1 (6.5) | 82.3 (1.6) |
| | $\mathcal{K}^{(\mathrm{A-SignGD})}$ | 88.3 (0.4) | 43.7 (1.7) | 84.9 (1.1) | 93.4 (0.5) | 74.6 (3.5) | 88.6 (1.8) | 22.7 (2.8) | 83.6 (1.0) |
| 64 | SGD-FT | 89.7 (0.4) | 45.8 (2.1) | 85.6 (1.1) | 94.3 (0.5) | 84.8 (0.8) | 92.9 (0.5) | 93.2 (1.0) | 86.8 (0.7) |
| | $\mathcal{K}^{(\mathrm{SGD})}$ | 89.2 (1.0) | 46.0 (1.3) | 86.4 (0.6) | 93.7 (0.4) | 81.2 (0.9) | 91.4 (0.7) | 77.8 (2.3) | 85.6 (0.7) |
| | Adam-FT | 89.3 (0.7) | 48.5 (2.0) | 86.0 (0.4) | 93.7 (0.8) | 84.6 (0.9) | 92.7 (0.6) | 92.6 (1.3) | 86.8 (1.1) |
| | $\mathcal{K}^{(\mathrm{SignGD})}$ | 89.1 (0.5) | 49.1 (1.6) | 85.6 (1.0) | 93.9 (0.2) | 79.0 (5.8) | 92.4 (0.5) | 82.0 (1.4) | 85.9 (0.7) |
| | $\mathcal{K}^{(\mathrm{A-SignGD})}$ | 88.9 (0.9) | 43.6 (2.2) | 85.6 (1.0) | 94.0 (0.3) | 81.8 (1.1) | 91.8 (1.1) | 21.0 (4.3) | 86.2 (0.3) |

(a) Single-sentence tasks
| $k$-shot | Method | MNLI | SNLI | QNLI | RTE | MRPC | QQP |
| --- | --- | --- | --- | --- | --- | --- | --- |
| 16 | SGD-FT | 59.2 (2.7) | 65.7 (2.7) | 62.1 (3.1) | 60.0 (5.5) | 73.9 (2.7) | 62.1 (2.3) |
| | $\mathcal{K}^{(\mathrm{SGD})}$ | 53.0 (3.0) | 57.8 (2.3) | 60.1 (3.3) | 60.0 (4.7) | 73.4 (5.6) | 58.2 (0.9) |
| | Adam-FT | 56.8 (2.9) | 64.6 (4.1) | 63.1 (3.5) | 57.6 (6.3) | 77.6 (3.1) | 61.8 (4.5) |
| | $\mathcal{K}^{(\mathrm{SignGD})}$ | 53.8 (1.2) | 54.9 (2.7) | 59.5 (3.1) | 55.4 (4.2) | 75.6 (1.2) | 60.7 (2.2) |
| | $\mathcal{K}^{(\mathrm{A-SignGD})}$ | 51.9 (4.0) | 54.9 (3.1) | 56.0 (1.9) | 59.8 (4.0) | 75.2 (2.6) | 59.4 (2.0) |
| 64 | SGD-FT | 68.7 (1.7) | 77.3 (0.9) | 72.8 (2.2) | 68.9 (2.5) | 82.8 (1.2) | 69.2 (1.3) |
| | $\mathcal{K}^{(\mathrm{SGD})}$ | 60.4 (1.8) | 65.5 (1.6) | 67.3 (1.6) | 66.5 (2.5) | 79.2 (2.5) | 66.4 (1.7) |
| | Adam-FT | 67.9 (1.0) | 76.9 (1.4) | 74.2 (3.2) | 67.3 (2.7) | 80.9 (1.2) | 69.8 (0.6) |
| | $\mathcal{K}^{(\mathrm{SignGD})}$ | 60.8 (1.7) | 64.1 (2.3) | 65.4 (1.7) | 63.8 (1.8) | 77.4 (2.3) | 63.7 (4.4) |
| | $\mathcal{K}^{(\mathrm{A-SignGD})}$ | 58.5 (1.7) | 66.8 (1.1) | 66.5 (1.1) | 63.8 (2.2) | 77.3 (2.0) | 66.1 (3.4) |
(b) Sentence-pair tasks

Table 2. Prompt-based FT and prompt-based eNTK performance with different formulas on the LM-BFF test set (Gao et al., 2021). The kernel analog performs comparably to FT on many tasks but fails if the prompt is poorly designed (i.e., MPQA, TREC, SNLI, and MNLI). Performance is measured by the average test accuracy over 5 $k$-shot splits for all tasks except MRPC and QQP, where it is F1.

The difference between $\mathcal{K}^{(\mathrm{SignGD})}$ and $\mathcal{K}^{(\mathrm{A-SignGD})}$ is negligible on most tasks, but the non-standard solver for the asymmetric problem (Appendix A.3) may cause $\mathcal{K}^{(\mathrm{A-SignGD})}$ to sometimes perform worse than $\mathcal{K}^{(\mathrm{SignGD})}$ despite being the theoretically sound kernel analog (Theorem 4.3).

# 6.2. Measuring Kernel Behavior

The eNTK can often solve the task comparably to fine-tuning (Table 2), suggesting that these tasks may induce kernel behavior (Definition 3.2). However, the observed success only indicates that the gradient features can solve the downstream task and does not directly study the optimization dynamics. We take additional measurements to provide further empirical evidence that FT can be described by kernel behavior. The approximations in Definition 3.2 involve constants depending on the dataset and model architecture, so we set manual thresholds for our results.

The Linearization property holds for all tasks the eNTK can solve.
If FT exhibits kernel behavior (Definition 3.2), then the function after FT should be close to the first-order Taylor expansion around the pre-trained model:

$$
f(\xi; \theta_{\mathrm{FT}}) \approx f(\xi; \theta_{\mathrm{PT}}) + \langle \nabla f(\xi; \theta_{\mathrm{PT}}), \theta_{\mathrm{FT}} - \theta_{\mathrm{PT}} \rangle
$$

where $\theta_{\mathrm{PT}}$ denotes the model parameters after pre-training, $\theta_{\mathrm{FT}}$ denotes the model parameters after fine-tuning on the downstream task, and $\xi$ is sampled from the test set. Figure 2 summarizes how this linearized model performs in comparison to the pre-trained and fine-tuned models.

Pre-trained models perform significantly better than random on many single-sentence downstream tasks (e.g., SST-2, MR, and CR) but close to random on most sentence-pair tasks (e.g., QNLI, RTE, MRPC, and QQP). The linearized model recovers more than $50\%$ of the improvement from FT for all tasks the eNTK could solve (Table 2).

The Fixed Features property holds for all tasks the eNTK can solve. We empirically test whether the Fixed Features property (Definition 3.2) holds for tasks that the eNTK can solve by measuring the relative distance between $\mathcal{K}^{(\mathrm{SGD})}$ computed before and after FT (Table 7). Tasks that the eNTK can solve exhibit low (i.e., less than 2) distances, indicating that the Fixed Features property likely holds.

![](images/c2ad49a3eb5b8386968cd0bdee9058f5973448ee40dc315a485c6bf2f6246eab.jpg)
Figure 1. The performance difference between SGD-FT and $\mathcal{K}^{(\mathrm{SGD})}$ for both the standard and the prompt-based setting (Section 3) suggests that using a prompt is important for kernel behavior (Definition 3.2) to arise. In standard FT, we initialize the new classification head (i.e., $\Gamma$) using the linear probing solution. The performance is shown for the 64-shot setting and measured by the average test accuracy over 5 random splits, except for MRPC and QQP, where it is F1. Results on additional settings are in Table 6.

Entailment tasks exhibit anomalous optimization characteristics. Although pre-trained models perform much better than random on MNLI and SNLI, we find that the eNTK cannot solve these tasks very well (Table 2 and Figure 2). Similarly, although the pre-trained model demonstrates near-random performance on QNLI and RTE, we find that the eNTK can solve these tasks. Moreover, although QNLI and RTE could sometimes be solved by the eNTK, the results suggest they do not induce the Linearization property of kernel behavior very strongly. Altogether, these findings suggest a deeper mystery around the fine-tuning dynamics when solving entailment tasks.

# 6.3. Tasks without Kernel Behavior

TREC, MNLI, SNLI, QNLI, and MPQA consistently do not induce kernel behavior (Table 1). Our theoretical analysis suggests that when the prompt and label words do not format the task as a sub-case of pre-training, the task will not be natural in the infinite-width limit (Definition 5.3) and hence will not induce kernel behavior.

Considering the prompt templates shown in Appendix A.1, we suggest that the TREC prompt (simply a colon) provides insufficient signal to the model to perform question type classification. For MNLI and SNLI, we observe that connecting the premise and hypothesis with the label word "Maybe" for neutral examples results in ungrammatical sentences. Analogously, for QNLI, we note that the premise is often a question without a clear yes or no answer, so the label words are unnatural to place between the premise and hypothesis. The prompt used for sentiment and polarity tasks is designed to follow a complete sentence or substantial phrase, so it is less natural when used with MPQA examples, which are often only one or two words. See Appendix B.2 for prompt ablations.

The eNTK can consistently solve AG News although Adam-FT does not exhibit kernel behavior. This finding suggests that our theory holds for the prompt used with AG News, but the grid search over learning rates results in FT that does not exhibit kernel behavior. In particular, the success of the eNTK suggests the task can be solved with a very small learning rate, but the FT trajectory achieving the best performance uses a larger learning rate and thus exhibits more complex dynamics.

# 7. Efficacy of Subspace-Based Fine-Tuning Methods

We study subspace-based fine-tuning methods, which apply updates only within a low-dimensional subspace of the high-dimensional model parameter space during fine-tuning. Although theoretical analysis of these methods seems complex, the kernel view admits a simple interpretation. We directly apply the Johnson-Lindenstrauss (JL) lemma (Johnson, 1984), which guarantees inner product preservation under random projections, to suggest why LoRA (Hu et al., 2021) works. Similar analysis yields results on parameter-subspace FT methods used to study intrinsic dimension (Li et al. (2018); Aghajanyan et al. (2021); see Appendix D).

Definition 7.1 ($\mathcal{A}$-LoRA FT (Hu et al., 2021)). Let $\mathcal{A}$ be a gradient-based optimization algorithm. For every weight matrix $W \in \mathbb{R}^{m \times n}$, choose $k \ll m$ and initialize $B \in \mathbb{R}^{m \times k}$ with i.i.d. zero-mean Gaussian values and $A \in \mathbb{R}^{k \times n}$ as 0. Set the weight to be $W + BA$. To fine-tune, fix $W$ at its pre-trained value and train only $A$ and $B$ using $\mathcal{A}$.

We show that if SGD FT exhibits kernel behavior, then so does SGD-LoRA FT, and SGD-LoRA FT using a sufficiently large $k$ does not modify the kernel or the dynamics.

Theorem 7.2 (Informal version of Theorem D.5). Let $\mathcal{K}^{(\mathrm{SGD})}$ be the kernel analog (Definition 3.3) to SGD FT and $\mathcal{K}_{\mathrm{LoRA}}^{(\mathrm{SGD})}$ be the kernel analog to SGD-LoRA FT on a downstream task $\Xi$ with $N$ examples.
Then, with high probability, $\mathcal{K}_{LoRA}^{(SGD)}(i,j)\approx \mathcal{K}^{(SGD)}(i,j)$ for all $i,j\in [N]$ .

Proof sketch. Consider an individual layer in the network and a task input $\xi \in \Xi$ . LoRA causes $\nabla_A f(\xi; \theta)$ to be a random projection of $\nabla_W f(\xi; \theta)$ , where $\nabla_A$ denotes the gradient with respect to $A$ , and $\nabla_B f(\xi; \theta) = 0$ since $A$ is initialized to zero. The rest of the proof follows from applying JL to all input pairs $\xi, \xi'$ to show the inner product

(and thus, the kernel entry) is preserved.

![](images/257456d46380fd3427f9865e86b4eb1b036ddc5b4809868a703faa5d12e281e6.jpg)

Remark 7.3. Theorem 7.2 states that the kernel analog of SGD FT is unchanged by LoRA in both prompt-based and standard FT. However, the theorem only provides an explanation for the success of $\mathcal{A}$ -LoRA FT when $\mathcal{A}$ FT exhibits kernel behavior. Therefore, as per Sections 5 and 6, we consider this theorem meaningful only for prompt-based SGD and prompt-based SGD-LoRA FT.

Table 12 verifies that prompt-based SGD FT and SGD-LoRA FT achieve similar performance on several tasks, and $\mathcal{K}_{\mathrm{LoRA}}^{\mathrm{(SGD)}}$ achieves performance similar to $\mathcal{K}^{\mathrm{(SGD)}}$ .

# 8. Conclusion

We use NTKs to mathematically formalize the general intuition that fine-tuning pre-trained language models to solve downstream tasks requires only a "small change." We derive a new kernel to describe Adam training (Section 4) and we use it in Section 5 to show how prompt-based fine-tuning can exhibit kernel behavior. Extensive experiments in Section 6 on 14 NLU tasks demonstrate that including a meaningful prompt often causes FT to exhibit kernel behavior (Figure 1) and that kernel dynamics describe prompt-based FT on tasks that the eNTK can solve (Section 6.2).
We demonstrate one possible use of the kernel view to explain empirical phenomena by applying it to understand subspace-based fine-tuning methods (Section 7), and we note that the kernel has many mathematically useful properties that can aid the design and study of alternative fine-tuning methods.

Our work suggests that a kernel-based view of language model fine-tuning is plausible, but there are several limitations. First, our experiments are limited to few-shot classification tasks and a single masked language model with specific prompts. Extending to additional settings (e.g., increasing $k$ ) and models requires significant computational cost because the eNTK is expensive to compute. The theoretical results also apply only to "early-stage" training with Adam, and it is not clear how well they can describe longer training schemes; concurrent work in Littwin & Yang (2023) suggests that the reduction of Adam to SignGD is crucial to observe kernel dynamics. Nevertheless, our work provides substantial empirical and theoretical evidence that fine-tuning can be analyzed in terms of kernel behavior.

As a future direction, one can use the kernel analog to study the inductive bias of FT, as has been done for gradient descent from a random initialization (Allen-Zhu et al., 2019b;a; Li & Liang, 2018). For example, several works (Cao & Gu, 2019; Arora et al., 2019a; Wei et al., 2022) have shown that the spectrum of the kernel can bound the generalization ability of the trained network, giving insight into why few-shot fine-tuning does not result in catastrophic overfitting. Our experiments show that some tasks do not induce kernel behavior during FT, suggesting that future theoretical analysis of FT needs to account for the downstream task and choice of prompt.

# Acknowledgements

We thank Tianyu Gao, Wei Hu, Jason Lee, Kaifeng Lyu, Abhishek Panigrahi, Nikunj Saunshi, Mengzhou Xia, and Greg Yang for their helpful comments and discussion.
This work is funded by NSF, ONR, Simons Foundation, DARPA and SRC. + +# References + +Abnar, S., Dehghani, M., Neyshabur, B., and Sedghi, H. Exploring the limits of large scale pre-training, 2021. URL https://arxiv.org/abs/2110.02095. +Achille, A., Golatkar, A., Ravichandran, A., Polito, M., and Soatto, S. Lqf: Linear quadratic fine-tuning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 15729-15739, 2021. +Aghajanyan, A., Gupta, S., and Zettlemoyer, L. Intrinsic dimensionality explains the effectiveness of language model fine-tuning. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pp. 7319-7328, Online, August 2021. Association for Computational Linguistics. doi: 10.18653/v1/2021.acl-long.568. URL https://aclanthology.org/2021.acl-long.568. +Allen-Zhu, Z., Li, Y., and Liang, Y. Learning and generalization in overparameterized neural networks, going beyond two layers. In Advances in Neural Information Processing Systems, volume 32. Curran Associates, Inc., 2019a. URL https://proceedings.neurips.cc/paper/2019/file/62dad6e273d32235ae02b7d321578ee8-Paper.pdf. +Allen-Zhu, Z., Li, Y., and Song, Z. A convergence theory for deep learning via over-parameterization. In Proceedings of the 36th International Conference on Machine Learning, volume 97 of Proceedings of Machine Learning Research, pp. 242-252. PMLR, 09-15 Jun 2019b. URL https://proceedings.mlr.press/v97/allen-zhu19a.html. +Arora, S., Du, S., Hu, W., Li, Z., and Wang, R. Fine-grained analysis of optimization and generalization for overparameterized two-layer neural networks. In Proceedings of the 36th International Conference on Machine Learning, volume 97 of Proceedings of Machine Learning Research, pp. 322-332. PMLR, 09-15 Jun 2019a. URL https://proceedings.mlr.press/v97/arora19a.html. +Arora, S., Du, S. 
S., Hu, W., Li, Z., Salakhutdinov, R. R., and Wang, R. On exact computation with an infinitely wide neural net. In Advances in Neural Information Processing Systems, volume 32. Curran Associates, Inc., 2019b. URL https://proceedings.neurips.cc/paper/2019/file/dbc4d84bfcfe2284ba11beffb853a8c4-Paper.pdf. + +Arora, S., Du, S. S., Li, Z., Salakhutdinov, R., Wang, R., and Yu, D. Harnessing the power of infinitely wide deep nets on small-data tasks. In International Conference on Learning Representations, 2020. URL https://openreview.net/forum?id=rkl8sJBYvH. +Bar Haim, R., Dagan, I., Dolan, B., Ferro, L., Giampiccolo, D., Magnini, B., and Szpektor, I. The second PASCAL recognising textual entailment challenge. 2006. URL https://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.60.8552&rep=rep1&type=pdf. +Ben Zaken, E., Goldberg, Y., and Ravfogel, S. BitFit: Simple parameter-efficient fine-tuning for transformer-based masked language-models. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pp. 1-9, Dublin, Ireland, May 2022. Association for Computational Linguistics. doi: 10.18653/v1/2022.acl-short.1. URL https://aclanthology.org/2022.acl-short.1. +Bentivogli, L., Clark, P., Dagan, I., and Giampiccolo, D. The fifth PASCAL recognizing textual entailment challenge. In TAC, 2009. URL https://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.232.1231&rep=rep1&type=pdf. +Bowman, S. R., Angeli, G., Potts, C., and Manning, C. D. A large annotated corpus for learning natural language inference. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pp. 632-642, Lisbon, Portugal, September 2015. Association for Computational Linguistics. doi: 10.18653/v1/D15-1075. URL https://aclanthology.org/D15-1075. +Cao, Y. and Gu, Q. Generalization bounds of stochastic gradient descent for wide and deep neural networks. Advances in neural information processing systems, 32, 2019. 
Chen, X., Liang, C., Huang, D., Real, E., Liu, Y., Wang, K., Hsieh, C.-J., Lu, Y., and Le, Q. V. Evolved optimizer for vision. In First Conference on Automated Machine Learning (Late-Breaking Workshop), 2022. URL https://openreview.net/forum?id=jK_eS5BxOuu.
Chua, K., Lei, Q., and Lee, J. D. How fine-tuning allows for effective meta-learning. In Ranzato, M., Beygelzimer, A., Dauphin, Y., Liang, P., and Vaughan, J. W. (eds.), Advances in Neural Information Processing Systems, volume 34, pp. 8871-8884. Curran Associates, Inc., 2021. URL https://proceedings.neurips.cc/paper/2021/file/4a533591763dfa743a13affab1a85793-Paper.pdf.

Clark, K., Luong, M.-T., Le, Q. V., and Manning, C. D. Electra: Pre-training text encoders as discriminators rather than generators. In International Conference on Learning Representations, 2020. URL https://openreview.net/forum?id=r1xMH1BtvB.
Dagan, I., Glickman, O., and Magnini, B. The PASCAL recognising textual entailment challenge. In the First International Conference on Machine Learning Challenges: Evaluating Predictive Uncertainty, Visual Object Classification, and Recognizing Textual Entailment, 2005. URL https://kdd.cs.ksu.edu/Courses/Fall-2008/CIS798/Handouts/06-dagan05pascal.pdf.
Devlin, J., Chang, M.-W., Lee, K., and Toutanova, K. BERT: Pre-training of deep bidirectional transformers for language understanding. In North American Chapter of the Association for Computational Linguistics (NAACL), pp. 4171-4186, 2019.
Dolan, W. B. and Brockett, C. Automatically constructing a corpus of sentential paraphrases. In the Third International Workshop on Paraphrasing (IWP2005), 2005. URL https://aclanthology.org/I05-5002.pdf.
Du, S., Lee, J., Li, H., Wang, L., and Zhai, X. Gradient descent finds global minima of deep neural networks. In Proceedings of the 36th International Conference on Machine Learning, volume 97 of Proceedings of Machine Learning Research, pp. 1675-1685. PMLR, 09-15 Jun 2019a.
URL https://proceedings.mlr.press/v97/du19c.html. +Du, S. S., Zhai, X., Poczos, B., and Singh, A. Gradient descent provably optimizes over-parameterized neural networks. In International Conference on Learning Representations, 2019b. URL https://openreview.net/forum?id=S1eK3i09YQ. +Du, S. S., Hu, W., Kakade, S. M., Lee, J. D., and Lei, Q. Few-shot learning via learning the representation, provably. In International Conference on Learning Representations, 2021. URL https://openreview.net/forum?id=pW2Q2xLwIMD. +Gao, T., Fisch, A., and Chen, D. Making pre-trained language models better few-shot learners. In Association for Computational Linguistics (ACL), pp. 3816-3830, 2021. +Giampiccolo, D., Magnini, B., Dagan, I., and Dolan, B. The third PASCAL recognizing textual entailment challenge. In the ACL-PASCAL Workshop on Textual Entailment and Paraphrasing, 2007. URL https://aclanthology.org/W07-1401.pdf. + +He, H. and Zou, R. functorch: Jax-like composable function transforms for pytorch. https://github.com/pytorch/functorch, 2021. +He, M., He, F., Shi, L., Huang, X., and Suykens, J. A. K. Learning with asymmetric kernels: Least squares and feature interpretation, 2022. URL https://arxiv.org/abs/2202.01397. +Hu, E. J., Shen, Y., Wallis, P., Allen-Zhu, Z., Li, Y., Wang, S., Wang, L., and Chen, W. Lora: Low-rank adaptation of large language models, 2021. URL https://arxiv.org/abs/2106.09685. +Hu, M. and Liu, B. Mining and summarizing customer reviews. In ACM SIGKDD international conference on Knowledge discovery and data mining, 2004. +Jacot, A., Gabriel, F., and Hongler, C. Neural tangent kernel: Convergence and generalization in neural networks. In Advances in Neural Information Processing Systems, volume 31. Curran Associates, Inc., 2018. URL https://proceedings.neurips.cc/paper/2018/file/5a4be1fa34e62bb8a6ec6b91d2462f5a-Paper.pdf. +Johnson, W. B. Extensions of lipschitz mappings into a hilbert space. Contemp. Math., 26:189-206, 1984. 
+Joshi, M., Chen, D., Liu, Y., Weld, D. S., Zettlemoyer, L., and Levy, O. SpanBERT: Improving pre-training by representing and predicting spans. Transactions of the Association of Computational Linguistics (TACL), 2020. +Lee, J. D., Lei, Q., Saunshi, N., and ZHUO, J. Predicting what you already know helps: Provable self-supervised learning. In Advances in Neural Information Processing Systems, volume 34, pp. 309-323. Curran Associates, Inc., 2021. URL https://proceedings.neurips.cc/paper/2021/file/02e656adee09f8394b402d9958389b7d-Paper.pdf. +Lester, B., Al-Rfou, R., and Constant, N. The power of scale for parameter-efficient prompt tuning. In Empirical Methods in Natural Language Processing (EMNLP), pp. 3045-3059, 2021. +Li, C., Farkhoor, H., Liu, R., and Yosinski, J. Measuring the intrinsic dimension of objective landscapes. In International Conference on Learning Representations, 2018. URL https://openreview.net/forum?id=ryup8-WCW. +Li, X. L. and Liang, P. Prefix-tuning: Optimizing continuous prompts for generation. In Proceedings of the 59th Annual Meeting of the Association for Computational + +Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pp. 4582-4597, Online, August 2021. Association for Computational Linguistics. doi: 10.18653/v1/2021.acl-long.353. URL https://aclanthology.org/2021.acl-long.353. +Li, Y. and Liang, Y. Learning overparameterized neural networks via stochastic gradient descent on structured data. In Advances in Neural Information Processing Systems, volume 31. Curran Associates, Inc., 2018. URL https://proceedings.neurips.cc/paper/2018/file/54fe976ba170c19ebae453679b362263-Paper.pdf. +Li, Z., Bhojanapalli, S., Zaheer, M., Reddi, S., and Kumar, S. Robust training of neural networks using scale invariant architectures. In Proceedings of the 39th International Conference on Machine Learning, volume 162 of Proceedings of Machine Learning Research, pp. 12656-12684. 
PMLR, 17-23 Jul 2022. URL https://proceedings.mlr.press/v162/li22b.html.
Littwin, E. and Yang, G. Adaptive optimization in the $\infty$-width limit. In The Eleventh International Conference on Learning Representations, 2023. URL https://openreview.net/forum?id=zgVDqw9ZUES.
Liu, L., Liu, X., Gao, J., Chen, W., and Han, J. Understanding the difficulty of training transformers. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 5747-5763, Online, November 2020a. Association for Computational Linguistics. doi: 10.18653/v1/2020.emnlp-main.463. URL https://aclanthology.org/2020.emnlp-main.463.
Liu, P., Yuan, W., Fu, J., Jiang, Z., Hayashi, H., and Neubig, G. Pre-train, prompt, and predict: A systematic survey of prompting methods in natural language processing. ACM Comput. Surv., aug 2022. ISSN 0360-0300. doi: 10.1145/3560815. URL https://doi.org/10.1145/3560815. Just Accepted.
Liu, Y., Ott, M., Goyal, N., Du, J., Joshi, M., Chen, D., Levy, O., Lewis, M., Zettlemoyer, L., and Stoyanov, V. RoBERTa: A robustly optimized BERT pretraining approach, 2020b. URL https://openreview.net/forum?id=SyxS0T4tvS.
Logan IV, R., Balazevic, I., Wallace, E., Petroni, F., Singh, S., and Riedel, S. Cutting down on prompts and parameters: Simple few-shot learning with language models. In Findings of the Association for Computational Linguistics: ACL 2022, pp. 2824-2835, Dublin, Ireland, May 2022. Association for Computational Linguistics. doi: 10.18653/v1/2022.findings-acl.222. URL https://aclanthology.org/2022.findings-acl.222.
Ma, C., Wu, L., and E, W. A qualitative study of the dynamic behavior for adaptive gradient algorithms. In Proceedings of the 2nd Mathematical and Scientific Machine Learning Conference, volume 145 of Proceedings of Machine Learning Research, pp. 671-692. PMLR, 16-19 Aug 2022. URL https://proceedings.mlr.press/v145/ma22a.html.
Maddox, W., Tang, S., Moreno, P., Gordon Wilson, A., and Damianou, A. Fast adaptation with linearized neural networks. In Banerjee, A. and Fukumizu, K. (eds.), Proceedings of The 24th International Conference on Artificial Intelligence and Statistics, volume 130 of Proceedings of Machine Learning Research, pp. 2737-2745. PMLR, 13-15 Apr 2021. URL https://proceedings.mlr.press/v130/maddox21a.html.
Malladi, S., Lyu, K., Panigrahi, A., and Arora, S. On the SDEs and scaling rules for adaptive gradient algorithms, 2022. URL https://arxiv.org/abs/2205.10287.
Mu, F., Liang, Y., and Li, Y. Gradients as features for deep representation learning. In International Conference on Learning Representations, 2020. URL https://openreview.net/forum?id=BkeoaeHKDS.
Novak, R., Sohl-Dickstein, J., and Schoenholz, S. S. Fast finite width neural tangent kernel. In Proceedings of the 39th International Conference on Machine Learning, volume 162 of Proceedings of Machine Learning Research, pp. 17018-17044. PMLR, 17-23 Jul 2022. URL https://proceedings.mlr.press/v162/novak22a.html.
Pang, B. and Lee, L. A sentimental education: Sentiment analysis using subjectivity summarization based on minimum cuts. In Association for Computational Linguistics (ACL), 2004.
Pang, B. and Lee, L. Seeing stars: Exploiting class relationships for sentiment categorization with respect to rating scales. In Association for Computational Linguistics (ACL), 2005.
Radford, A., Narasimhan, K., Salimans, T., Sutskever, I., et al. Improving language understanding by generative pre-training. 2018.
Raffel, C., Shazeer, N., Roberts, A., Lee, K., Narang, S., Matena, M., Zhou, Y., Li, W., Liu, P. J., et al. Exploring the limits of transfer learning with a unified text-to-text transformer. J. Mach. Learn. Res., 21(140):1-67, 2020.

Rajpurkar, P., Zhang, J., Lopyrev, K., and Liang, P. SQuAD: 100,000+ questions for machine comprehension of text. In Empirical Methods in Natural Language Processing (EMNLP), 2016.
URL https://aclanthology.org/D16-1264/.
Saunshi, N., Plevrakis, O., Arora, S., Khodak, M., and Khandeparkar, H. A theoretical analysis of contrastive unsupervised representation learning. In Proceedings of the 36th International Conference on Machine Learning, volume 97 of Proceedings of Machine Learning Research, pp. 5628-5637. PMLR, 09-15 Jun 2019. URL https://proceedings.mlr.press/v97/saunshi19a.html.
Saunshi, N., Malladi, S., and Arora, S. A mathematical exploration of why language models help solve downstream tasks. In International Conference on Learning Representations, 2021. URL https://openreview.net/forum?id=vVjIW3sEc1s.
Saunshi, N., Ash, J., Goel, S., Misra, D., Zhang, C., Arora, S., Kakade, S., and Krishnamurthy, A. Understanding contrastive learning requires incorporating inductive biases. In Proceedings of the 39th International Conference on Machine Learning, volume 162 of Proceedings of Machine Learning Research, pp. 19250-19286. PMLR, 17-23 Jul 2022. URL https://proceedings.mlr.press/v162/saunshi22a.html.
Schick, T. and Schütze, H. Exploiting cloze-questions for few-shot text classification and natural language inference. In European Chapter of the Association for Computational Linguistics (EACL), pp. 255-269, 2021.
Socher, R., Perelygin, A., Wu, J., Chuang, J., Manning, C. D., Ng, A., and Potts, C. Recursive deep models for semantic compositionality over a sentiment treebank. In Empirical Methods in Natural Language Processing (EMNLP), 2013. URL https://aclanthology.org/D13-1170.pdf.
Tosh, C., Krishnamurthy, A., and Hsu, D. Contrastive learning, multi-view redundancy, and linear models. In Proceedings of the 32nd International Conference on Algorithmic Learning Theory, volume 132 of Proceedings of Machine Learning Research, pp. 1179-1206. PMLR, 16-19 Mar 2021a. URL https://proceedings.mlr.press/v132/tosh21a.html.
Tosh, C., Krishnamurthy, A., and Hsu, D. Contrastive estimation reveals topic posterior information to linear models.
Journal of Machine Learning Research, 22(281): 1-31, 2021b. URL http://jmlr.org/papers/v22/21-0089.html. + +Tripuraneni, N., Jordan, M., and Jin, C. On the theory of transfer learning: The importance of task diversity. In Advances in Neural Information Processing Systems, volume 33, pp. 7852-7862. Curran Associates, Inc., 2020. URL https://proceedings.neurips.cc/paper/2020/file/59587bfffec1c7846f3e34230141556ae-Paper.pdf. +Tsai, Y.-H. H., Wu, Y., Salakhutdinov, R., and Morency, L.-P. Self-supervised learning from a multi-view perspective. In International Conference on Learning Representations, 2021. URL https://openreview.net/forum?id=-bdp_8Itjwp. +Voorhees, E. M. and Tice, D. M. Building a question answering test collection. In the 23rd annual international ACM SIGIR conference on Research and development in information retrieval, 2000. +Wang, A., Singh, A., Michael, J., Hill, F., Levy, O., and Bowman, S. R. GLUE: A multi-task benchmark and analysis platform for natural language understanding. In International Conference on Learning Representations (ICLR), 2019. URL https://openreview.net/forum?id=rJ4km2R5t7. +Wei, A., Hu, W., and Steinhardt, J. More than a toy: Random matrix models predict how real-world neural representations generalize. In Proceedings of the 39th International Conference on Machine Learning, volume 162, pp. 23549-23588. PMLR, 17-23 Jul 2022. URL https://proceedings.mlr.press/v162/wei22a.html. +Wiebe, J., Wilson, T., and Cardie, C. Annotating expressions of opinions and emotions in language. Language resources and evaluation, 39(2-3), 2005. +Williams, A., Nangia, N., and Bowman, S. A broad-coverage challenge corpus for sentence understanding through inference. In North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL-HLT), 2018. URL https://aclanthology.org/N18-1101.pdf. +Woodworth, B., Gunasekar, S., Lee, J. D., Moroshko, E., Savarese, P., Golan, I., Soudry, D., and Srebro, N. 
Kernel and rich regimes in overparametrized models. In Proceedings of Thirty Third Conference on Learning Theory, volume 125 of Proceedings of Machine Learning Research, pp. 3635-3673. PMLR, 09-12 Jul 2020. URL https://proceedings.mlr.press/v125/woodworth20a.html. +Wu, S., Zhang, H. R., and Ré, C. Understanding and improving information transfer in multi-task learning. In + +International Conference on Learning Representations, 2020. URL https://openreview.net/forum? id=SylzhkBtDB. +Yang, G. Wide feedforward or recurrent neural networks of any architecture are gaussian processes. Advances in Neural Information Processing Systems, 32, 2019. +Yang, G. Tensor programs ii: Neural tangent kernel for any architecture. arXiv preprint arXiv:2006.14548, 2020a. +Yang, G. Tensor programs iii: Neural matrix laws. arXiv preprint arXiv:2009.10685, 2020b. +Yang, G. and Hu, E. J. Tensor programs iv: Feature learning in infinite-width neural networks. In International Conference on Machine Learning, pp. 11727-11737. PMLR, 2021. +Yang, G. and Littwin, E. Tensor programs iib: Architectural universality of neural tangent kernel training dynamics. In International Conference on Machine Learning, pp. 11762-11772. PMLR, 2021. +Yang, G., Hu, E. J., Babuschkin, I., Sidor, S., Liu, X., Farhi, D., Ryder, N., Pachocki, J., Chen, W., and Gao, J. Tensor programs v: Tuning large neural networks via zero-shot hyperparameter transfer. arXiv preprint arXiv:2203.03466, 2022. +Zhang, J., Karimireddy, S. P., Veit, A., Kim, S., Reddi, S., Kumar, S., and Sra, S. Why are adaptive methods good for attention models? In Advances in Neural Information Processing Systems, volume 33, pp. 15383-15393. Curran Associates, Inc., 2020. URL https://proceedings.neurips.cc/paper/2020/file/b05b57f6add810d3b7490866d74c0053-Paper.pdf. +Zhang, X., Zhao, J., and LeCun, Y. Character-level convolutional networks for text classification. In Cortes, C., Lawrence, N., Lee, D., Sugiyama, M., and Garnett, R. 
(eds.), Advances in Neural Information Processing Systems, volume 28. Curran Associates, Inc., 2015. URL https://proceedings.neurips.cc/paper/2015/file/250cf8b51c773f3f8dc8b4be867a9a02-Paper.pdf. +Zou, D., Cao, Y., Zhou, D., and Gu, Q. Stochastic gradient descent optimizes over-parameterized deep relu networks, 2018. URL https://arxiv.org/abs/1811.08888. + +# A. Experimental Details + +# A.1. Datasets and Prompts + +
| Dataset | C | #Train | #Test | Type | Prompt | Label words |
| --- | --- | --- | --- | --- | --- | --- |
| SST-2 | 2 | 67,349 | 872 | sentiment | <S1> It was [MASK] . | {great, terrible} |
| SST-5 | 5 | 8,544 | 1,000 | sentiment | <S1> It was [MASK] . | {great, good, okay, bad, terrible} |
| MR | 2 | 8,662 | 1,000 | sentiment | <S1> It was [MASK] . | {great, terrible} |
| CR | 2 | 3,175 | 500 | sentiment | <S1> It was [MASK] . | {great, terrible} |
| MPQA | 2 | 8,606 | 1,000 | opinion polarity | <S1> It was [MASK] . | {great, terrible} |
| Subj | 2 | 8,000 | 1,000 | subjectivity | <S1> This is [MASK] . | {subjective, objective} |
| TREC | 6 | 5,452 | 500 | question cls. | [MASK] : <S1> | {Description, Expression, Entity, Human, Location, Number} |
| AG News | 4 | 120,000 | 7,600 | news topic | <S1> This article is about [MASK] news. | {world, sports, business, tech} |
| MNLI | 3 | 392,702 | 1,000 | NLI | <S1> ? [MASK] , <S2> | {Yes, Maybe, No} |
| SNLI | 3 | 549,367 | 1,000 | NLI | <S1> ? [MASK] , <S2> | {Yes, Maybe, No} |
| QNLI | 2 | 104,743 | 1,000 | NLI | <S1> ? [MASK] , <S2> | {Yes, No} |
| RTE | 2 | 2,490 | 277 | NLI | <S1> ? [MASK] , <S2> | {Yes, No} |
| MRPC | 2 | 3,668 | 408 | paraphrase | <S1> [MASK] , <S2> | {Yes, No} |
| QQP | 2 | 363,846 | 1,000 | paraphrase | <S1> [MASK] , <S2> | {Yes, No} |
Table 3. The statistics and prompts of the datasets we used in our experiments. The choices of prompts are adapted from Gao et al. (2021) and include a template and a set of label words that can fill in the [MASK] token. <S1> and <S2> refer to the first and the second (if any) input sentence.

Table 3 shows the set of downstream tasks, which are adapted from Gao et al. (2021). We consider 8 single-sentence classification datasets (SST-2 (Socher et al., 2013), SST-5 (Socher et al., 2013), MR (Pang & Lee, 2005), CR (Hu & Liu, 2004), MPQA (Wiebe et al., 2005), Subj (Pang & Lee, 2004), TREC (Voorhees & Tice, 2000), and AG News (Zhang et al., 2015)) and 6 sentence-pair datasets (MNLI (Williams et al., 2018), SNLI (Bowman et al., 2015), QNLI (Rajpurkar et al., 2016), RTE (Dagan et al., 2005; Bar Haim et al., 2006; Giampiccolo et al., 2007; Bentivogli et al., 2009), MRPC (Dolan & Brockett, 2005), and QQP$^9$). Our datasets include 6 of the 8 datasets in the GLUE benchmark (Wang et al., 2019): SST-2, MNLI, QNLI, RTE, MRPC, and QQP.

In contrast to Gao et al. (2021), we add AG News as an additional multi-class classification task and make two modifications to the test sets. First, we split CR into 500 test examples and 3,175 training examples to ensure enough training examples for our 512-shot experiments; second, we limit the test sets to 1,000 examples to speed up kernel evaluations.

To generate $k$ -shot few-shot datasets, the original training data is used to randomly sample $k$ examples per label for training and another, separate $k$ examples per label for the validation set. Unless otherwise stated, we run experiments over 5 seeds of few-shot datasets. We directly use the 'manual' prompt templates and label words proposed by Gao et al. (2021), which are reproduced in Table 3. We do not include any demonstrations in our prompts.

# A.2.
Computing the Kernel

We use functorch (He & Zou, 2021) to compute the eNTK for RoBERTa-base (125M parameters), using a mix of backward-mode auto-differentiation to compute the Jacobians and forward-mode auto-differentiation to compute Jacobian-vector products (Novak et al., 2022). Note that $\mathcal{K}^{\mathrm{(SignGD)}}$ cannot be computed via Jacobian-vector products and requires substantially more memory and run-time in practice.

# A.3. Solving the Kernel

In the standard NTK setting, the initial output of the model $f(\cdot ;\theta_0)$ contains no information about solving the task, because $\theta_0$ is a random initialization. However, in the prompted FT setting, we expect the pre-trained model to be able to solve the downstream task well even before any fine-tuning occurs (see Table 5). So, we add the pre-trained model's output to the output from the kernel. Furthermore, we run a grid search over scaling the labels in order to take advantage of any pre-existing knowledge the model has about the downstream task. In particular, the kernel regression is based on the $\ell_{2}$ distance to the ground truth one-hot vector, but the pre-trained model outputs the logits, which will be used for the cross-entropy loss. Scaling the one-hot vector by $f_{0}$ helps align its scaling with the logits. Our hyperparameter grid for $f_{0}$ can be found in Table 4, where $\infty$ corresponds to not using the pre-trained model logits when solving the kernel.

Solving Multi-Class Tasks There are several options for how to solve $C$ -way classification tasks ( $C > 2$ ). We perform the most general one, which scales with $C^2$ . Each logit is treated as an independent output of the network, essentially scaling the size $N$ of the original dataset by a factor of $C$ . With $CN$ examples, the kernel now has shape $CN \times CN$ . The labels are also scaled up to treat the multi-class problem as many binary classification problems.
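As a toy illustration of this $C$ -way expansion (not the paper's code: `grads` below is a random stand-in for the per-example, per-logit model gradients that define the eNTK), the reshaping can be sketched in NumPy:

```python
import numpy as np

# Sketch of treating each of the C logits as an independent output:
# a task with N examples yields CN "examples" and a CN x CN kernel.
rng = np.random.default_rng(0)
N, C, p = 8, 3, 16                    # examples, classes, toy parameter count
grads = rng.normal(size=(N, C, p))    # stand-in gradient of logit c on example i

feats = grads.reshape(N * C, p)       # each logit becomes its own "example"
K = feats @ feats.T                   # kernel of shape (CN, CN)

labels = rng.integers(0, C, size=N)
Y = np.eye(C)[labels].reshape(N * C)  # one-hot targets flattened to CN
                                      # binary regression targets

assert K.shape == (N * C, N * C) and Y.shape == (N * C,)
```

The row-major reshape keeps the logits of one example adjacent, so entry $\mathcal{K}[iC + c,\, jC + c']$ compares logit $c$ of example $i$ with logit $c'$ of example $j$.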
Solving the multi-class task this way allows the kernel regression model to view relationships between different logits.

Symmetric Kernel Given a symmetric kernel $\mathcal{K} \in \mathbb{R}^{N \times N}$ , we solve the kernel regression problem. In particular, we use the representer theorem to write the empirical risk minimizer of the loss as a linear combination of the kernel features computed on the train set:

$$
h^{*}(\cdot) = \underset{h \in \mathcal{H}_{\mathcal{K}}}{\arg\min} \frac{1}{N} \sum_{i=1}^{N} \ell(h(x_i), y_i) \quad \leftrightarrow \quad h^{*}(\cdot) = \sum_{i=1}^{N} \alpha_i \mathcal{K}(\cdot, x_i)
$$

for a given loss function $\ell$ . The symmetric SignGD and SGD kernels train $\alpha_{i}$ via gradient descent to minimize a regularized logistic loss on the downstream task. We search over a grid of regularization strengths chosen proportional to $\|\mathcal{K}\|_{\mathrm{op}}$ ; see Table 4. For a test input $x$ , the kernel outputs the prediction $h(x) = \sum_{i} \alpha_{i} \mathcal{K}(x, x_{i})$ .

Asymmetric Kernel Here we describe how to solve the kernel regression problem with an asymmetric kernel, following He et al. (2022). Consider the augmented linear system:

$$
\left[ \begin{array}{c c} I / \gamma & H \\ H ^ {\top} & I / \gamma \end{array} \right] \left[ \begin{array}{c} \alpha \\ \beta \end{array} \right] = \left[ \begin{array}{c} 1 \\ 1 \end{array} \right]
$$

where $H_{ij} = y_i\phi_s(x_i)^\top \phi_t(x_j)y_j$ with $\phi_s$ and $\phi_t$ as the two different feature maps and $y_i$ as the label for the $i$ th example. In our setting, $\phi_s$ is the gradient of the datapoint, and $\phi_t$ is the sign of the gradient.
Define $\omega^*$ and $\nu^*$ as

$$
\omega^ {*} = \sum_ {i} \beta_ {i} ^ {*} y _ {i} \phi_ {t} (x _ {i})
$$

$$
\nu^ {*} = \sum_ {i} \alpha_ {i} ^ {*} y _ {i} \phi_ {s} (x _ {i})
$$

Solving this system yields two discriminant functions:

$$
f _ {s} (x) = K (x, X) \left(\beta^ {*} \odot Y\right)
$$

$$
f _ {t} (x) = K (X, x) \left(\alpha^ {*} \odot Y\right)
$$

where $K(x_{i},x_{j}) = \langle \phi_{s}(x_{i}),\phi_{t}(x_{j})\rangle$ .

We can thus create one discriminant function as $c f_{s}(x) + (1 - c)f_{t}(x)$ , where $c \in [0,1]$ is a hyperparameter. When $\phi_s = \phi_t$ , we see that $f_{s} = f_{t}$ , and we reduce to the standard kernel problem (though with repeated equations). Note that, per He et al. (2022), this system is only meaningful in terms of stationary points when training $\alpha$ and $\beta$ using the least squares loss.

We now leverage specific knowledge about the NTK setting: we should use only $f_{s}$ as the predictor in order to correctly represent a new test input in the kernel analog for SignGD.

Hyperparameters and Implementation We follow Gao et al. (2021) in using the few-shot validation set to search over hyperparameters and selecting the best hyperparameters per few-shot dataset. We use value ranges given by Gao et al. (2021) and Hu et al. (2021), and search over a wider range of values for SGD. Table 4 shows the hyperparameter grids for fine-tuning and the kernel method. We fine-tune without weight decay, using a learning rate schedule with linear decay and no warmup.
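To make the symmetric solver above concrete, here is a minimal NumPy sketch; the RBF kernel, synthetic data, and all hyperparameter values are illustrative stand-ins for the eNTK and the grids in Table 4:

```python
import numpy as np

# Minimal sketch of the symmetric kernel solver: by the representer
# theorem h(x) = sum_i alpha_i K(x, x_i), and alpha is trained with
# gradient descent on a logistic loss regularized proportionally to
# the operator norm of K.
rng = np.random.default_rng(0)
X = rng.normal(size=(40, 5))
y = np.where(X[:, 0] > 0, 1.0, -1.0)     # binary labels in {-1, +1}
X[:, 0] += 2 * y                         # separate the classes cleanly

sq_dists = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
K = np.exp(-0.5 * sq_dists)              # symmetric (RBF) kernel matrix

lam = 0.01 * np.linalg.norm(K, 2)        # regularization proportional to ||K||_op
alpha = np.zeros(len(X))
for _ in range(500):                     # gradient descent on the loss
    margins = y * (K @ alpha)
    grad = K @ (-y / (1 + np.exp(margins))) / len(X) + lam * (K @ alpha)
    alpha -= 0.1 * grad

train_preds = np.sign(K @ alpha)         # h(x_i) on the training set
assert (train_preds == y).mean() > 0.9
```

For a held-out input `x`, the prediction mirrors $h(x) = \sum_i \alpha_i \mathcal{K}(x, x_i)$ : compute the kernel row between `x` and the training set and take `np.sign(row @ alpha)`.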
| Experiment | Hyperparameters | Values |
|---|---|---|
| SGD FT | Batch size | {2, 4, 8} × |
| | Learning rate | {1e-4, 5e-4, 1e-3, 5e-3, 1e-2} |
| SGD-LoRA FT | Batch size | {4, 16} × |
| | Learning rate | {1e-4, 1e-3, 1e-2} × |
| | (r_LoRA, α_LoRA) | {(8, 16)} |
| Adam FT | Batch size | {2, 4, 8} × |
| | Learning rate | {1e-5, 2e-5, 5e-5} |
| Adam-LoRA FT | Batch size | {4, 16} × |
| | Learning rate | {1e-5, 4e-5, 4e-4} × |
| | (r_LoRA, α_LoRA) | {(8, 16)} |
| K(SGD), K(SignGD) | Kernel regularization | {0, 0.001, 0.01, 0.1, 1} × |
| | f0 scaling | {10, 100, 1000, 10000, ∞} |
| K(A-SignGD) | Kernel regularization | {0, 0.001, 0.01, 0.1, 1} × |
| | f0 scaling | {10, 100, 1000, 10000, ∞} × |
| | Kernel γ | {0.01, 0.1, 1, 10} × |
| | Kernel c | {1} |
Table 4. The hyperparameter grids used in our experiments.

Gao et al. (2021) train for 1000 steps in the 16-shot setting, and validate the performance every 100 steps to take the best checkpoint. As we consider varying values of $k$, we train for $32kC$ steps and validate every $4kC$ steps, where $C$ is the number of classes in the dataset. This yields a comparable number of training and validation steps for binary tasks in the 16-shot setting.

# B. Additional Experimental Results

Tables 6 and 5 contain the numerical results corresponding to Figures 1 and 2 respectively, and also report results for $k = 64$. Table 7 measures how well the Fixed Features property holds for different tasks. A smaller value suggests that the condition for kernel behavior (Definition 3.2) is satisfied more strongly.

# B.1. Solvable Task Experiments

We run a preliminary empirical test to verify whether various tasks are solvable in the infinite-width limit (see Definition 5.3). Intuitively, the assumption states that wider models (with all other architecture and pre-training hyperparameters fixed) will solve the downstream task better in a zero-shot fashion, and in the limit, an infinitely wide model will solve the task perfectly. The cheap empirical test involves measuring the average output derivative $\chi$ of the loss w.r.t. the model output (see Definition 3.1 for a definition of $\chi$) over the entire dataset for two models of different widths. We note that our paper uses RoBERTa-base ($n = 768$) for experiments, so a natural choice for a wider model would be RoBERTa-large ($n = 1024$). However, RoBERTa-large is also deeper than RoBERTa-base, and indeed, in general, it is difficult to find two publicly available pre-trained models with different widths and fixed depth. We nevertheless present the table of $\chi$ values for several downstream tasks measured on RoBERTa-base and RoBERTa-large in Table 8.

# B.2. Robustness to Choice of Prompt

We explore different choices of prompt and label words in Table 9. When using the results of the prompt and label search from Gao et al. (2021), we find that the kernel approximation matches fine-tuning well. However, the choice of prompt does matter: $\mathcal{K}^{(\mathrm{SGD})}$ performs poorly with the minimal "null prompts" from Logan IV et al. (2022) on sentiment classification datasets, where the prompt is merely $<S_{1}>$ [MASK] and the label words remain {great, terrible}. We hypothesize this failure occurs because the task is no longer solvable in the infinite-width limit (Definition 5.3).

![](images/10de0401ca5484da0981ad4b51c759f11f9782722a68527a7ef29b042c71163d.jpg)
(a) SGD-FT

![](images/a7450ffed313c7357160d0e0ddabc2b160ad8d14c983bde75aebe89e6b88602a.jpg)
(b) Adam-FT

Figure 2. Accuracies of the zero-shot pre-trained model (PT), the linearized model (Lin., see Definition 3.2) and the fine-tuned model (FT). Tasks that induce the Linearization property of kernel behavior (Definition 3.2) will show that Lin. performance recovers a substantial amount of the performance of SGD-FT and Adam-FT respectively. We plot the median and range of the test accuracies across 5 seeds and data splits for $k = 64$.
| k-shot | SST-2 Lin. | SST-2 FT | SST-5 Lin. | SST-5 FT | MR Lin. | MR FT | CR Lin. | CR FT |
|---|---|---|---|---|---|---|---|---|
| 0 | 79.0 | — | 32.6 | — | 71.9 | — | 86.2 | — |
| 16 | 87.5 (1.3) | 88.3 (1.2) | 41.8 (4.1) | 45.4 (2.6) | 84.3 (1.8) | 81.3 (6.1) | 93.3 (0.6) | 93.0 (1.6) |
| 64 | 88.6 (0.4) | 89.3 (0.7) | 42.9 (2.2) | 48.5 (2.0) | 85.0 (0.2) | 86.0 (0.4) | 94.0 (0.5) | 93.7 (0.8) |

| k-shot | MPQA Lin. | MPQA FT | Subj Lin. | Subj FT | TREC Lin. | TREC FT | AG News Lin. | AG News FT |
|---|---|---|---|---|---|---|---|---|
| 0 | 68.2 | — | 54.6 | — | 27.4 | — | 68.7 | — |
| 16 | 75.6 (3.1) | 82.8 (2.2) | 82.9 (4.7) | 87.4 (2.1) | 30.4 (7.2) | 79.6 (6.1) | 57.8 (18.3) | 84.0 (1.6) |
| 64 | 75.6 (2.3) | 85.0 (0.2) | 78.9 (14.0) | 92.7 (0.6) | 31.2 (13.0) | 92.6 (1.3) | 67.5 (12.2) | 86.8 (1.1) |

(a) Single-sentence tasks.
| k-shot | MNLI Lin. | MNLI FT | SNLI Lin. | SNLI FT | QNLI Lin. | QNLI FT |
|---|---|---|---|---|---|---|
| 0 | 48.1 | — | 49.8 | — | 51.2 | — |
| 16 | 43.6 (6.4) | 56.8 (2.9) | 47.2 (9.3) | 64.6 (4.1) | 57.5 (2.3) | 63.1 (3.5) |
| 64 | 55.1 (4.8) | 67.9 (1.0) | 56.9 (5.7) | 76.9 (1.4) | 60.4 (5.3) | 74.2 (3.2) |

| k-shot | RTE Lin. | RTE FT | MRPC Lin. | MRPC FT | QQP Lin. | QQP FT |
|---|---|---|---|---|---|---|
| 0 | 53.1 | — | 41.7 | — | 42.7 | — |
| 16 | 55.4 (6.7) | 57.6 (6.3) | 57.7 (11.6) | 68.9 (2.4) | 57.5 (10.3) | 61.7 (6.5) |
| 64 | 59.6 (2.9) | 67.3 (2.7) | 64.2 (2.2) | 73.8 (1.7) | 61.7 (9.4) | 72.7 (1.8) |
+ +(b) Sentence-pair tasks. + +Table 5. Accuracies of pre-trained model (0-shot), linearized model (Lin., see Definition 3.2) and fine-tuned model (FT). Tasks that exhibit the Linearization property of kernel behavior (Definition 3.2) during fine-tuning will show that Lin. performance recovers a substantial amount of the gain in performance achieved by performing fine-tuning with Adam. Accuracies are averaged across 5 fine-tuning seeds for each value of $k$ and measured on the test set. This table corresponds to the bar chart in Figure 2b. + +
| k-shot | Prompt | Method | SST-2 | SST-5 | MR | CR | MPQA | Subj | TREC | AG News |
|---|---|---|---|---|---|---|---|---|---|---|
| 16 | Prompt | SGD FT | 89.0 (1.5) | 44.6 (1.4) | 83.4 (2.5) | 93.3 (0.2) | 83.3 (1.3) | 88.5 (2.6) | 80.3 (7.2) | 84.2 (1.1) |
| | | K(SGD) | 88.3 (0.3) | 43.6 (2.2) | 84.7 (1.5) | 93.2 (0.9) | 76.4 (2.7) | 88.6 (1.3) | 56.0 (9.2) | 82.1 (2.0) |
| | | Adam FT | 88.3 (1.2) | 45.4 (2.6) | 81.3 (6.1) | 93.0 (1.6) | 82.8 (2.2) | 87.4 (2.1) | 79.6 (6.1) | 84.0 (1.6) |
| | | K(SignGD) | 88.3 (0.5) | 42.2 (3.9) | 84.3 (1.5) | 93.7 (0.5) | 76.7 (3.3) | 89.2 (2.0) | 58.1 (6.5) | 82.3 (1.6) |
| | | K(A-SignGD) | 88.3 (0.4) | 43.7 (1.7) | 84.9 (1.1) | 93.4 (0.5) | 74.6 (3.5) | 88.6 (1.8) | 20.7 (4.2) | 83.6 (1.0) |
| | Standard | SGD FT | 79.7 (4.5) | 36.1 (3.7) | 64.8 (5.2) | 86.6 (2.6) | 69.1 (6.8) | 89.2 (0.7) | 62.7 (3.8) | 82.3 (0.4) |
| | | K(SGD) | 62.3 (6.4) | 32.0 (1.5) | 61.2 (4.0) | 67.5 (2.3) | 62.7 (2.3) | 86.7 (1.5) | 58.7 (6.0) | 81.3 (1.5) |
| | | Adam FT | 79.3 (1.9) | 37.9 (5.2) | 69.0 (6.0) | 83.9 (5.2) | 69.5 (6.8) | 89.5 (1.0) | 74.4 (2.4) | 82.7 (2.1) |
| | | K(SignGD) | 61.3 (8.6) | 32.2 (2.2) | 61.4 (4.0) | 72.6 (3.1) | 60.9 (3.6) | 87.8 (1.7) | 63.5 (3.8) | 81.6 (1.2) |
| | | K(A-SignGD) | 59.1 (11.4) | 31.9 (2.0) | 58.3 (8.8) | 72.4 (4.1) | 60.7 (4.6) | 87.7 (1.7) | 64.6 (4.1) | 81.1 (1.5) |
| 64 | Prompt | SGD FT | 89.7 (0.4) | 45.8 (2.1) | 85.8 (1.0) | 94.3 (0.5) | 84.8 (0.8) | 92.9 (0.5) | 93.2 (1.0) | 86.8 (0.7) |
| | | K(SGD) | 89.2 (1.0) | 46.0 (1.3) | 86.4 (0.6) | 93.7 (0.4) | 81.2 (0.9) | 91.4 (0.7) | 77.8 (2.3) | 85.6 (0.7) |
| | | Adam FT | 89.3 (0.7) | 48.5 (2.0) | 86.0 (0.4) | 93.7 (0.8) | 84.6 (0.9) | 92.7 (0.6) | 92.6 (1.3) | 86.8 (1.1) |
| | | K(SignGD) | 89.1 (0.5) | 49.1 (1.6) | 85.6 (1.0) | 93.9 (0.2) | 79.0 (5.8) | 92.4 (0.5) | 82.0 (1.4) | 85.9 (0.7) |
| | | K(A-SignGD) | 88.9 (0.9) | 43.6 (2.2) | 85.6 (1.0) | 94.0 (0.3) | 81.8 (1.1) | 91.8 (1.1) | 22.8 (2.9) | 86.2 (0.3) |
| | Standard | SGD FT | 85.6 (3.6) | 41.1 (2.1) | 83.4 (1.7) | 92.7 (1.2) | 83.5 (2.1) | 92.6 (0.4) | 86.8 (1.8) | 86.8 (0.8) |
| | | K(SGD) | 77.7 (2.8) | 35.8 (0.7) | 73.6 (2.0) | 82.6 (4.4) | 74.9 (2.2) | 90.1 (1.0) | 81.9 (2.0) | 85.6 (0.6) |
| | | Adam FT | 86.2 (2.3) | 41.0 (1.7) | 83.9 (1.9) | 92.6 (1.0) | 83.5 (1.8) | 92.9 (0.5) | 91.5 (1.4) | 87.5 (0.6) |
| | | K(SignGD) | 79.6 (1.7) | 35.3 (3.1) | 75.8 (2.0) | 83.0 (4.7) | 75.0 (2.1) | 90.9 (1.0) | 82.5 (1.8) | 85.9 (1.0) |
| | | K(A-SignGD) | 78.7 (2.3) | 36.8 (2.3) | 76.5 (3.2) | 85.6 (3.8) | 75.2 (1.9) | 91.1 (1.1) | 84.6 (1.5) | 86.2 (0.8) |
| 512 | Prompt | SGD FT | 92.0 (0.9) | 53.5 (1.5) | 88.8 (0.0) | 94.3 (0.4) | 88.5 (0.1) | 95.4 (0.1) | 97.2 (0.4) | 89.9 (0.7) |
| | | K(SGD) | 91.0 (0.2) | 49.8 (0.4) | 88.0 (0.9) | 94.4 (0.2) | 84.4 (0.9) | 93.5 (0.1) | 88.2 (0.8) | 88.4 (0.5) |
| | Standard | SGD FT | 91.4 (0.2) | 50.2 (1.6) | 88.8 (0.4) | 95.4 (0.3) | 88.1 (0.5) | 95.0 (0.7) | 97.2 (0.6) | 90.1 (0.4) |
| | | K(SGD) | 85.9 (1.6) | 45.4 (1.0) | 83.1 (1.1) | 92.2 (0.9) | 83.4 (0.5) | 92.3 (0.1) | 93.3 (1.5) | 89.1 (0.2) |

(a) Single-sentence tasks
| k-shot | Prompt | Method | MNLI | SNLI | QNLI | RTE | MRPC | QQP |
|---|---|---|---|---|---|---|---|---|
| 16 | Prompt | SGD FT | 59.2 (2.7) | 65.7 (2.7) | 62.1 (3.1) | 60.0 (5.5) | 73.9 (2.7) | 62.1 (2.3) |
| | | K(SGD) | 53.0 (3.0) | 57.8 (2.3) | 60.1 (3.3) | 60.0 (4.7) | 73.4 (5.6) | 58.2 (0.9) |
| | | Adam FT | 56.8 (2.9) | 64.6 (4.1) | 63.1 (3.5) | 57.6 (6.3) | 77.6 (3.1) | 61.8 (4.5) |
| | | K(SignGD) | 53.8 (1.2) | 54.9 (2.7) | 59.5 (3.1) | 55.4 (4.2) | 75.6 (1.2) | 60.7 (2.2) |
| | | K(A-SignGD) | 51.9 (4.0) | 54.9 (3.1) | 56.0 (1.9) | 59.8 (4.0) | 75.2 (2.6) | 59.4 (2.0) |
| | Standard | SGD FT | 35.2 (1.3) | 41.3 (2.2) | 52.5 (5.4) | 50.2 (2.1) | 73.7 (6.3) | 55.3 (5.2) |
| | | K(SGD) | 34.9 (1.8) | 39.6 (3.3) | 50.3 (1.4) | 48.7 (2.0) | 69.2 (6.9) | 50.8 (5.0) |
| | | Adam FT | 38.7 (3.5) | 42.9 (3.2) | 57.6 (4.2) | 51.1 (3.8) | 75.6 (7.1) | 58.2 (6.5) |
| | | K(SignGD) | 36.1 (1.3) | 41.7 (2.4) | 51.9 (1.5) | 48.2 (3.4) | 73.3 (5.3) | 52.4 (5.1) |
| | | K(A-SignGD) | 34.9 (1.4) | 41.7 (2.5) | 52.6 (2.5) | 48.2 (2.5) | 73.8 (6.2) | 50.8 (8.8) |
| 64 | Prompt | SGD FT | 68.7 (1.7) | 77.3 (0.9) | 72.8 (2.2) | 68.9 (2.5) | 82.8 (1.2) | 69.2 (1.3) |
| | | K(SGD) | 60.4 (1.8) | 65.5 (1.6) | 67.3 (1.6) | 66.5 (2.5) | 79.2 (2.5) | 66.4 (1.7) |
| | | Adam FT | 67.9 (1.0) | 76.9 (1.4) | 74.2 (3.2) | 67.3 (2.7) | 80.9 (1.2) | 69.8 (0.6) |
| | | K(SignGD) | 60.8 (1.7) | 64.1 (2.3) | 65.4 (1.7) | 63.8 (1.8) | 77.4 (2.3) | 63.7 (4.4) |
| | | K(A-SignGD) | 58.5 (1.7) | 66.8 (1.1) | 66.5 (1.1) | 63.8 (2.2) | 77.3 (2.0) | 66.1 (3.4) |
| | Standard | SGD FT | 50.0 (5.0) | 61.9 (4.5) | 65.4 (4.2) | 53.6 (2.5) | 78.7 (1.1) | 64.8 (3.5) |
| | | K(SGD) | 42.6 (1.7) | 50.1 (1.7) | 54.4 (1.5) | 50.0 (4.4) | 72.2 (5.8) | 48.4 (19.3) |
| | | Adam FT | 58.0 (2.6) | 67.8 (2.0) | 67.9 (7.2) | 53.9 (4.2) | 80.1 (1.4) | 66.8 (3.1) |
| | | K(SignGD) | 41.7 (2.1) | 50.5 (2.1) | 56.6 (1.9) | 52.7 (3.8) | 77.6 (4.2) | 61.3 (2.0) |
| | | K(A-SignGD) | 42.8 (1.7) | 49.1 (2.9) | 55.3 (3.7) | 52.9 (4.5) | 74.5 (2.5) | 62.3 (1.9) |
| 512 | Prompt | SGD FT | 78.4 (0.3) | 83.9 (0.3) | 81.9 (1.2) | 76.3 (0.6) | 89.2 (0.1) | 75.2 (1.1) |
| | | K(SGD) | 67.4 (0.2) | 74.6 (0.3) | 76.1 (0.9) | 74.2 (1.2) | 80.7 (1.7) | 72.0 (0.9) |
| | Standard | SGD FT | 77.8 (1.1) | 82.9 (0.6) | 81.0 (0.5) | 70.9 (1.7) | 90.2 (0.7) | 75.7 (0.9) |
| | | K(SGD) | 57.6 (3.6) | 67.0 (1.2) | 68.4 (0.4) | 55.7 (1.7) | 78.7 (2.2) | 69.1 (1.3) |
+ +(b) Sentence-pair tasks + +Table 6. Fine-tuning performance in the standard FT setting, where the contextual embedding of the [CLS] token is used for classification, and the prompt-based FT setting, where a prompt is added and the embedding for the [MASK] token is used (see Section 3). In standard FT, we initialize the new classification head (i.e., $\Gamma$ ) using the linear probing solution. This table gives the figures in Figure 1, and also relates SGD fine-tuning performance to the more common fine-tuning with Adam. We report F1 for MRPC and QQP and accuracy otherwise, and average the metrics over 5 seeds for 16-shot and 64-shot, and 3 seeds for 512-shot. + +
| Method | k-shot | SST-2 | SST-5 | MR | CR | MPQA | Subj | TREC | AG News |
|---|---|---|---|---|---|---|---|---|---|
| K(SGD) | 16 | 0.39 (0.14) | 0.70 (0.35) | 0.14 (0.09) | 0.32 (0.03) | 0.56 (0.12) | 0.60 (0.31) | 2.87 (1.27) | 3.52 (4.44) |
| | 64 | 0.66 (0.31) | 0.97 (0.55) | 0.37 (0.18) | 0.66 (0.43) | 0.44 (0.09) | 1.04 (0.19) | 9.63 (13.36) | 1.74 (0.60) |
| K(SignGD) | 16 | 0.45 (0.11) | 0.61 (0.17) | 0.33 (0.08) | 0.35 (0.13) | 0.48 (0.06) | 0.40 (0.21) | 1.33 (0.14) | 1.50 (0.56) |
| | 64 | 0.34 (0.09) | 0.77 (0.03) | 0.43 (0.08) | 0.36 (0.04) | 0.50 (0.17) | 0.54 (0.07) | 1.38 (0.12) | 1.44 (0.15) |

(a) Single-sentence tasks.
| Method | k-shot | MNLI | SNLI | QNLI | RTE | MRPC | QQP |
|---|---|---|---|---|---|---|---|
| K(SGD) | 16 | 1.26 (0.20) | 0.58 (0.17) | 0.67 (0.14) | 0.40 (0.25) | 0.65 (0.32) | 0.79 (0.39) |
| | 64 | 1.62 (0.19) | 0.75 (0.04) | 0.89 (0.42) | 1.04 (0.16) | 1.41 (0.53) | 1.00 (0.14) |
| K(SignGD) | 16 | 0.52 (0.09) | 0.68 (0.16) | 0.47 (0.09) | 0.48 (0.13) | 0.48 (0.07) | 0.58 (0.07) |
| | 64 | 0.59 (0.03) | 0.62 (0.04) | 0.55 (0.04) | 0.54 (0.02) | 0.60 (0.08) | 0.56 (0.02) |
+ +(b) Sentence-pair tasks. +Table 7. Average element-wise relative distance of $\mathcal{K}^{(\mathrm{SGD})}$ and $\mathcal{K}^{(\mathrm{SignGD})}$ computed on the pre-trained and best model fine-tuned with SGD and Adam respectively. A smaller value indicates a higher likelihood that the Fixed Features property of kernel behavior (Definition 3.2) holds when performing fine-tuning. Distances are averaged across 5 seeds for each value of $k$ and measured on the held-out test set. + +
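For concreteness, one plausible implementation of the element-wise relative-distance metric in Table 7 is sketched below; the exact normalization used in the paper is our assumption:

```python
import numpy as np

def relative_distance(K_before, K_after, eps=1e-12):
    """Average element-wise relative distance between two kernel matrices."""
    return np.mean(np.abs(K_after - K_before) / (np.abs(K_before) + eps))

rng = np.random.default_rng(0)
K_pre = rng.normal(size=(10, 10))   # kernel on the pre-trained model
K_ft = 1.1 * K_pre                  # kernel after fine-tuning, 10% off entry-wise
dist = relative_distance(K_pre, K_ft)  # ≈ 0.1
```

A value near 0 means the fine-tuned kernel barely moved from the pre-trained one, i.e., the Fixed Features property approximately holds.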
| Model size | SST-2 | MR | CR | MPQA | Subj | QNLI | RTE | MRPC | QQP |
|---|---|---|---|---|---|---|---|---|---|
| Base (n = 768) | 0.32 | 0.32 | 0.26 | 0.38 | 0.43 | 0.48 | 0.48 | 0.56 | 0.49 |
| Large (n = 1024) | 0.32 | 0.25 | 0.25 | 0.40 | 0.46 | 0.48 | 0.47 | 0.52 | 0.52 |
+ +Table 8. We measure the average output derivative (Definition 3.1) in the prompt-based FT setting for RoBERTa-base and RoBERTa-large. + +
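As an illustrative sketch of how such an average output derivative could be computed for a generic classifier with cross-entropy loss (a hypothetical stand-in, not the paper's exact measurement code):

```python
import numpy as np

def average_output_derivative(logits, labels):
    """Mean absolute derivative of the cross-entropy loss w.r.t. the logits:
    for cross entropy, dL/dlogits = softmax(logits) - onehot(labels)."""
    z = logits - logits.max(axis=1, keepdims=True)  # numerically stable softmax
    p = np.exp(z) / np.exp(z).sum(axis=1, keepdims=True)
    onehot = np.eye(logits.shape[1])[labels]
    return np.abs(p - onehot).mean()

rng = np.random.default_rng(0)
random_logits = rng.normal(size=(100, 2))           # model that guesses randomly
labels = rng.integers(0, 2, size=100)
chi_random = average_output_derivative(random_logits, labels)

confident_logits = 10.0 * (2 * np.eye(2)[labels] - 1)  # model that solves the task
chi_solved = average_output_derivative(confident_logits, labels)
```

A model that solves the task drives $\chi$ toward 0, which is the pattern the solvability assumption posits as width grows.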
| k-shot | Prompt + label format | Method | SST-2 | MR | CR | QNLI | RTE | QQP |
|---|---|---|---|---|---|---|---|---|
| 16 | Manual (Gao et al., 2021) | Adam-FT | 88.3 (1.2) | 81.3 (6.1) | 93.0 (1.6) | 63.1 (3.5) | 57.6 (6.3) | 61.8 (4.5) |
| | | SGD-FT | 89.0 (1.5) | 83.2 (2.4) | 93.3 (0.2) | 62.1 (3.1) | 60.0 (5.5) | 62.1 (2.3) |
| | | K(SGD) | 88.3 (0.3) | 84.7 (1.5) | 93.2 (0.9) | 60.1 (3.3) | 60.0 (4.7) | 58.2 (0.9) |
| | Prompt + label search (Gao et al., 2021) | Adam-FT | 88.1 (0.8) | 81.6 (3.8) | 92.8 (0.4) | 56.3 (3.8) | 58.6 (4.6) | 58.6 (4.5) |
| | | SGD-FT | 89.2 (1.2) | 80.1 (1.8) | 93.2 (0.5) | 58.7 (4.8) | 61.6 (2.6) | 59.0 (1.4) |
| | | K(SGD) | 88.6 (1.1) | 78.5 (1.2) | 93.5 (0.7) | 56.7 (1.7) | 57.4 (5.5) | 60.2 (2.0) |
| | Null prompts (Logan IV et al., 2022) | Adam-FT | 87.6 (0.9) | 82.6 (0.6) | 92.8 (0.6) | 59.0 (2.9) | 56.4 (4.7) | 57.5 (5.2) |
| | | SGD-FT | 88.1 (0.7) | 82.8 (3.6) | 93.4 (0.7) | 59.0 (3.4) | 54.1 (1.6) | 57.6 (5.5) |
| | | K(SGD) | 78.3 (4.3) | 78.7 (1.8) | 91.7 (0.8) | 55.8 (2.7) | 55.5 (2.3) | 57.4 (1.8) |
+ +Table 9. We experiment with different prompt formats and label words: using the top result of an automatic prompt search performed on RoBERTa-large (Table E.1 in Gao et al. (2021)); and minimal null prompts (Table A3, Logan IV et al. (2022)), which add no additional text to the prompt. We find that our observations are robust to the choice of prompt, with the exception of the more unnatural "null prompts" on sentiment tasks (SST-2, MR, CR), which show a substantial gap between $\kappa^{(\mathrm{SGD})}$ and fine-tuning. We report F1 for QQP and accuracy otherwise, and average the metrics over 5 seeds. + +
| k-shot | Method | SST-2 | SST-5 | MR | CR | MPQA | Subj | TREC | AG News |
|---|---|---|---|---|---|---|---|---|---|
| 16 | SignGD-FT | 87.6 (3.6) | 43.4 (3.9) | 84.4 (1.1) | 92.8 (1.4) | 82.4 (1.5) | 90.3 (1.8) | 85.4 (4.0) | 85.2 (1.4) |
| | Adam-FT | 88.3 (1.2) | 45.4 (2.6) | 81.3 (6.1) | 93.0 (1.6) | 82.8 (2.2) | 87.4 (2.1) | 79.6 (6.1) | 84.0 (1.6) |
| | K(SignGD) | 88.3 (0.5) | 42.2 (3.9) | 84.3 (1.5) | 93.7 (0.5) | 76.7 (3.3) | 89.2 (2.0) | 58.1 (6.5) | 82.3 (1.6) |
| | K(A-SignGD) | 88.3 (0.4) | 43.7 (1.7) | 84.9 (1.1) | 93.4 (0.5) | 74.6 (3.5) | 88.6 (1.8) | 22.7 (2.8) | 83.6 (1.0) |
| 64 | SignGD-FT | 87.6 (2.5) | 47.3 (2.7) | 86.2 (1.2) | 93.7 (1.7) | 85.3 (1.7) | 92.1 (2.0) | 93.7 (0.5) | 87.5 (0.6) |
| | Adam-FT | 89.3 (0.7) | 48.5 (2.0) | 86.0 (0.4) | 93.7 (0.8) | 84.6 (0.9) | 92.7 (0.6) | 92.6 (1.3) | 86.8 (1.1) |
| | K(SignGD) | 89.1 (0.5) | 49.1 (1.6) | 85.6 (1.0) | 93.9 (0.2) | 79.0 (5.8) | 92.4 (0.5) | 82.0 (1.4) | 85.9 (0.7) |
| | K(A-SignGD) | 88.9 (0.9) | 43.6 (2.2) | 85.6 (1.0) | 94.0 (0.3) | 81.8 (1.1) | 91.8 (1.1) | 21.0 (4.3) | 86.2 (0.3) |

(a) Single-sentence tasks
| k-shot | Method | MNLI | SNLI | QNLI | RTE | MRPC | QQP |
|---|---|---|---|---|---|---|---|
| 16 | SignGD-FT | 62.1 (4.1) | 67.7 (2.7) | 64.0 (4.8) | 60.9 (5.8) | 78.4 (3.0) | 66.2 (2.1) |
| | Adam-FT | 56.8 (2.9) | 64.6 (4.1) | 63.1 (3.5) | 57.6 (6.3) | 77.6 (3.1) | 61.8 (4.5) |
| | K(SignGD) | 53.8 (1.2) | 54.9 (2.7) | 59.5 (3.1) | 55.4 (4.2) | 75.6 (1.2) | 60.7 (2.2) |
| | K(A-SignGD) | 51.9 (4.0) | 54.9 (3.1) | 56.0 (1.9) | 59.8 (4.0) | 75.2 (2.6) | 59.4 (2.0) |
| 64 | SignGD-FT | 69.3 (1.2) | 77.4 (1.0) | 76.8 (2.2) | 66.4 (2.9) | 84.1 (1.3) | 69.9 (0.8) |
| | Adam-FT | 67.9 (1.0) | 76.9 (1.4) | 74.2 (3.2) | 67.3 (2.7) | 80.9 (1.2) | 69.8 (0.6) |
| | K(SignGD) | 60.8 (1.7) | 64.1 (2.3) | 65.4 (1.7) | 63.8 (1.8) | 77.4 (2.3) | 63.7 (4.4) |
| | K(A-SignGD) | 58.5 (1.7) | 66.8 (1.1) | 66.5 (1.1) | 63.8 (2.2) | 77.3 (2.0) | 66.1 (3.4) |
(b) Sentence-pair tasks

Table 10. Comparing the performance of SignGD-FT to Adam-FT, $\mathcal{K}^{\mathrm{(SignGD)}}$ and $\mathcal{K}^{\mathrm{(A-SignGD)}}$ in the prompt-based setting on the LM-BFF test set (Gao et al., 2021). SignGD fine-tuning applies the sign function coordinate-wise to gradients before taking gradient steps, and leads to surprisingly strong results, especially on sentence-pair tasks. We search over the same hyperparameter grid as for Adam-FT (see Table 4), and we do not use momentum. Performance is measured by average test accuracy over 5 $k$-shot splits for all tasks except MRPC and QQP, where it is F1.

# C. Kernel Behavior and the Parametrization

Neural network training can exhibit either kernel behavior or feature learning behavior. These were described in Woodworth et al. (2020) as the lazy regime and the active regime, respectively, when training from a random initialization. Kernel behavior provides a tractable tool for studying the training of neural networks, but it is not believed to be a complete description of practical deep learning settings. In particular, kernel behavior implies that the features (i.e., gradients) of the neural network remain unchanged in the overparameterized setting, which is not true in practical pre-training of large models.

Yang & Hu (2021) showed how the initialization variance, multiplier, and learning rate for each parameter can move training from kernel behavior to feature learning behavior. They further developed the Maximal Update Parametrization (abbreviated MUP or $\mu \mathrm{P}$), in which every parameter is updated maximally (in terms of scaling with width) while keeping the network stable. Yang et al. (2022) then extended $\mu \mathrm{P}$ to Transformers with Adam optimization, and showed empirically that when pre-training large language models with $\mu \mathrm{P}$, the optimal hyperparameters remain the same as width increases.
It allows more comprehensive hyperparameter searches on a smaller model and direct transfer of the resulting optimal hyperparameters to the larger model, resulting in markedly improved pre-training performance.

This section discusses two of our formal results: Theorems 4.3 and 5.5. In general, we consider the overparameterized setting in which the width of the network goes to infinity. Additionally, we assume that when a weight matrix of the model is initialized, each entry is drawn i.i.d. from a Gaussian distribution. In particular, we model a pre-trained model as a non-random initialization that arose from training starting at a random initialization. We use Tensor Programs (Yang, 2020b) for our formal results.

This section is organized as follows. In Appendix C.1, we introduce the basic notation and ideas around Tensor Programs, as well as the assumptions we need in order for the infinite-width limit to be interesting to study. Then, Appendix C.2 gives the formal proof for the kernel analog to SignGD (Theorem 4.3). In Appendix C.3, we provide a formal proof of how fine-tuning can exhibit kernel behavior (Theorem 5.5). The proof relies heavily on Tensor Programs, so we additionally provide a more accessible and intuitive sketch on linear networks in Appendix C.5.

# C.1. Preliminaries

Notation Let $\xi \in \mathbb{R}^{d_{in}}$ be the input of the network. Let $n$ be the hidden dimension of the network and $d_{out}$ be the output dimension of the network.
We define the network as a function of the following form:

$$
f(\xi; \{U^{i}\}_{i}, \{W^{j}\}_{j}, V) = V^{\top} h(\xi; \{U^{i}\}_{i}, \{W^{j}\}_{j}),
$$

where $\xi$ is the input, $U^i \in \mathbb{R}^{n \times d_{in}}$ are the input weight matrices, $W^{j} \in \mathbb{R}^{n \times n}$ are hidden weight matrices, $V \in \mathbb{R}^{n \times d_{out}}$ is the output weight matrix, and $h(\xi; \{U^i\}_i, \{W^j\}_j) \in \mathbb{R}^n$ is the input to the last layer (the readout layer). We write $\mathcal{M}$ for the set of weight matrices, i.e., $\mathcal{M} = \{U^i\}_i \cup \{W^j\}_j \cup \{V\}$. For $M \in \mathcal{M}$, let $\nabla_M f(\xi)$ be the gradient of $f$ w.r.t. $M$ at input $\xi$.

To simplify the notation, we assume $d_{in} = 1$ in this section. We will note when an extension to $d_{in} > 1$ requires a non-trivial step. For any weight matrix $M \in \mathcal{M}$, let $\gamma_M$ be the multiplier of $M$, such that $M$ is multiplied by $\gamma_M$ before performing matrix multiplication. Let $\eta_M$ be the learning rate of the weight $M$. Let $\sigma_M^2$ be the variance of the entries of $M$ at initialization, so each entry of $M$ is drawn independently from $\mathcal{N}(0, \sigma_M^2)$. Since our focus is prompt-based fine-tuning, we assume no change is made to the network at the beginning of fine-tuning, and the learning rates for pre-training and fine-tuning are the same unless otherwise noted.

Because we are considering the infinite-width limit, $f(\xi; \{U^i\}_i, \{W^j\}_j, V)$ actually represents a series of increasingly wide networks $\{f^n(\xi; \{U^{i,n}\}_i, \{W^{j,n}\}_j, V^n)\}_{n > 0}$ of the same architecture, where $f^n$ has hidden dimension $n$. We use the notation $f$ to include the model architecture, the training optimizer of the model, and $\gamma_M, \eta_M, \sigma_M$ for every weight matrix $M$ in the model.

Let $M_{t}$ be the weight matrix at time step $t$ of training.
If the network is pre-trained, we let $M_{-1}$ be the weight matrix before pre-training, and $M_0$ be the parameters right after pre-training. Let $\Delta M_t = M_t - M_{t-1}$ be the change each training step induces. Let $f_{t}$ be the network at step $t$, so that

$$
f_{t}(\xi) = f(\xi; \{U_{t}^{i}\}_{i}, \{W_{t}^{j}\}_{j}, V_{t}).
$$

Let $\xi_t, y_t$ be the training input and target at step $t$, and let the loss function at step $t$ be $\ell(f_{t-1}(\xi_t), y_t)$. For ease of notation, we often absorb $y_t$ into $\ell$ and denote $\ell_t(f_{t-1}(\xi_t)) \triangleq \ell(f_{t-1}(\xi_t), y_t)$. Let $\chi_t = \ell_t'(f_{t-1}(\xi_t))$ be the derivative of the loss function, as defined in Definition 3.1. We assume $\ell_t''$ (the second derivative of $\ell_t$) is bounded, which is satisfied when $\ell$ is the mean square loss or the cross entropy loss.

Big-O Notation For a series of scalar random variables $c = \{c^n\}_{n > 0}$ and a function $e: \mathbb{N} \to \mathbb{R}$, we say $c = \Theta(e(n))$ if there exist $A, B$ such that for sufficiently large $n$, $|c^n| \in [Ae(n), Be(n)]$ almost surely. For a series of vector random variables $x = \{x^{n}\}_{n > 0}$, we say that $x$ is coordinate-wise $\Theta(e(n))$, or $x = \Theta(e(n))$, if the series of scalar random variables $\{\|x^n\|_2 / \sqrt{n}\}_{n > 0}$ is $\Theta(e(n))$. The notations $O(e(n))$, $\Omega(e(n))$, and $o(e(n))$ are defined similarly. For convenience, we assume every $e(n)$ in this section is equal to $n^a$ for some $a$.

Tensor Programs We refer the reader to Section 7 of Yang & Hu (2021) for a detailed explanation and full definition of Tensor Programs. Here, we provide a simple overview:

Definition C.1 (Definition 7.1 of (Yang & Hu, 2021)).
A Tensor Program is a sequence of $\mathbb{R}^n$-vectors and $\mathbb{R}$-scalars inductively generated via one of the following ways from an initial set $\mathcal{C}$ of random scalars, a set $\mathcal{V}$ of random $\mathbb{R}^n$ vectors, and a set $\mathcal{W}$ of random $\mathbb{R}^{n\times n}$ matrices.

MatMul Given $W \in \mathbb{R}^{n \times n}$ and $x \in \mathbb{R}^n$, we can generate $Wx \in \mathbb{R}^n$ or $W^\top x \in \mathbb{R}^n$.

Nonlin Given $\phi: \mathbb{R}^k \times \mathbb{R}^l \to \mathbb{R}$, previously generated scalars $\theta_{1}, \ldots, \theta_{l} \in \mathbb{R}$ and vectors $x^{1}, \dots, x^{k} \in \mathbb{R}^{n}$, we can generate a new vector

$$
\phi(x^{1}, \dots, x^{k}; \theta_{1}, \dots, \theta_{l}) \in \mathbb{R}^{n}
$$

where $\phi(-; \theta_1, \ldots, \theta_l)$ applies coordinate-wise to each “$\alpha$-slice” $(x_{\alpha}^{1}, \ldots, x_{\alpha}^{k})$.

Moment Given the same setup as above, we can also generate a new scalar

$$
\frac{1}{n} \sum_{\alpha = 1}^{n} \phi(x_{\alpha}^{1}, \ldots, x_{\alpha}^{k}; \theta_{1}, \ldots, \theta_{l}) \in \mathbb{R}.
$$

Yang (2019; 2020a); Yang & Littwin (2021); Yang et al. (2022) show that Tensor Programs can express the computation, SGD/Adam optimization, and the kernel of almost any general architecture.

The key result of Tensor Programs is that we can represent the coordinates of any vector $x$ in the Tensor Program with a random variable $Z^x$, and represent any scalar $\theta$ with a deterministic scalar $\mathring{\theta}$. There is a way to define all $\mathring{\theta}$ and $Z^x$ corresponding to the Tensor Program (cf. Definition 7.3 in (Yang & Hu, 2021)), and the Master Theorem of Tensor Programs shows that $\theta \rightarrow \mathring{\theta}$ as $n \rightarrow \infty$ (cf. Theorem 7.4 in (Yang & Hu, 2021)).
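To make the three instructions concrete, here is a toy numerical sketch (our own illustration, not from the paper): a fresh Gaussian matrix and vector generate new vectors via MatMul and Nonlin, then a Moment scalar; re-running at larger width shows the scalar settling toward its deterministic limit, as the Master Theorem predicts.

```python
import numpy as np

def run_program(n, seed=0):
    """One pass of a tiny Tensor Program at width n."""
    rng = np.random.default_rng(seed)
    W = rng.normal(scale=1.0 / np.sqrt(n), size=(n, n))  # matrix in the set W
    v = rng.normal(size=n)                               # initial vector in V
    x1 = W @ v                       # MatMul
    x2 = np.tanh(x1)                 # Nonlin, applied coordinate-wise
    theta = np.mean(x2 * v)          # Moment: (1/n) * sum_a phi(x2_a, v_a)
    return theta

theta_small = run_program(256)
theta_large = run_program(4096)
# for this program Z^{Wv} is a Gaussian independent of Z^v, so the limit of
# theta is E[tanh(G)] * E[Z^v] = 0; wider runs concentrate around it
```

Fluctuations of the Moment scalar shrink like $1/\sqrt{n}$, which is exactly the concentration the Master Theorem formalizes.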
Although it is in general hard to compute $Z^x$ and $\mathring{\theta}$, this machinery allows us to reason about the scales of vectors during the training of a network.

Assumptions Related to Tensor Programs. Since we are studying the infinite-width limit and using Tensor Programs as our framework, we need some mild assumptions in order to apply Tensor Programs and the results of (Yang & Hu, 2021).

Assumption C.2. We assume the network $f$ satisfies the following:

a) The forward pass of $f$ in the infinite-width limit can be written as a Tensor Program.
b) The hidden vectors have $\Theta(1)$ coordinates at initialization.
c) The hidden vectors have $O(1)$ coordinates during training.
d) For any training scheme, any constant $t$, and any input $\xi$, $f_{t}(\xi) = O(1)$.

e) There exist a training scheme, some constant $t$, and an input $\xi$ such that $f_{t}(\xi) - f_{0}(\xi) = \Theta(1)$.
f) The activation function of $f$ is tanh or $\sigma$-gelu for a small enough $\sigma$ (so it approximates ReLU), where

$$
\sigma \text{-} gelu(x) = \frac{1}{2} x \operatorname{erf}(\sigma^{-1} x) + \sigma \frac{e^{-\sigma^{-2} x^{2}}}{2\sqrt{\pi}} + \frac{x}{2}.
$$

Furthermore, we have two assumptions on SignGD:

g) SignGD is approximated by replacing the sign function with $\epsilon$-sign for a small enough $\epsilon$ when updating parameters, where $\epsilon\text{-sign}(x) = \frac{x}{|x| + \epsilon}$ is a smoothed version of sign. We use a different $\epsilon$ when computing the sign of each $\nabla_M f$, chosen to match the maximum scale of $\nabla_M f$.
h) The ratio between the learning rate of SignGD in prompt-based fine-tuning and the learning rate of pre-training matches the maximum $\chi$ after pre-training.
That is, we assume $\eta_{M} = \Theta(\eta_{M}^{\mathrm{PT}} \cdot \chi_{\max})$, where $\eta_M^{\mathrm{PT}}$ is the pre-training learning rate for SignGD and $\chi_{\mathrm{max}} = \max_{(\xi, y) \in \Xi} \chi(\xi, y, f_0)$.

Conditions b), c), d) and e) in Assumption C.2 together recover the definition of a nontrivial stable network in (Yang & Hu, 2021). b) and c) ensure that the pre-activations in the network are not too large, so that activation functions (e.g., tanh) are not trivialized to always output $\pm 1$. b) ensures that the pre-activations in the network are not too small at initialization, so the activation function is not trivialized to its first-order Taylor expansion. d) ensures the network output is bounded. e) ensures that the network is not frozen during training (i.e., learning can occur).

f) and g) in Assumption C.2 ensure that all non-linear functions appearing in the Tensor Programs are pseudo-Lipschitz, which is required for the Master Theorem of Tensor Programs. g) also ensures that $\epsilon$-sign does not trivialize to 0 or to sign when $\nabla_M f \neq \Theta(1)$.

h) in Assumption C.2 ensures that when $\chi = o(1)$, the SignGD updates during fine-tuning are not of a larger scale than those of SGD. It is also observed in practice that the optimal learning rate for fine-tuning is smaller than the learning rate for pre-training.

# C.2. SignGD Kernel Derivation

Definition C.3 (Formal Definition of Kernel Behavior). We say that the network training process demonstrates kernel behavior if the following properties are satisfied.

1.
Linearization: The change of the network can be approximated by its first-order Taylor expansion, i.e.,

$$
\lim_{n \to \infty} \frac{f_{t}(\xi) - f_{t-1}(\xi)}{\chi_{\mathrm{max}}} = \lim_{n \to \infty} \sum_{M \in \mathcal{M}} \left\langle \nabla_{M} f_{t-1}(\xi), \frac{\Delta M_{t}}{\chi_{\mathrm{max}}} \right\rangle,
$$

where $\chi_{\mathrm{max}} = \max_{(\xi, y) \in \Xi} \chi(\xi, y, f_0)$ and $\Xi$ is the training dataset.

2. Fixed Features: The gradients at step $t$ are approximately the same as before training, i.e.,

$$
\forall M \in \mathcal{M}, \quad \lim_{n \to \infty} \frac{\|\nabla_{M} f_{t}(\xi) - \nabla_{M} f_{0}(\xi)\|_{2}^{2}}{\max_{\xi^{\prime}} \|\nabla_{M} f_{0}(\xi^{\prime})\|_{2}^{2}} = 0.
$$

Note that we define Linearization with both the LHS and the RHS divided by $\chi_{\mathrm{max}}$ so that it remains meaningful in the case $\chi = o(1)$. We do the same in the following theorem.

Theorem C.4 (SignGD Kernel). If SignGD training of $f$ demonstrates kernel behavior, then under Assumption C.2,

$$
\lim_{n \to \infty} \frac{f_{t}(\xi) - f_{t-1}(\xi)}{\chi_{\mathrm{max}}} = \lim_{n \to \infty} \sum_{M \in \mathcal{M}} -\tilde{\eta}_{M} \left\langle \nabla_{M} f_{0}(\xi), \epsilon\text{-sign}(\nabla_{M} f_{0}(\xi_{t})) \right\rangle,
$$

where $\tilde{\eta}_M = \eta_M \mathrm{sign}(\chi_t) / \chi_{\mathrm{max}}$.

Note that if $\eta_{M} = \eta$, the RHS of the equation above equals

$$
-\frac{\eta \operatorname{sign}(\chi_{t})}{\chi_{\max}} \langle \nabla f_{0}(\xi), \epsilon\text{-sign}(\nabla f_{0}(\xi_{t})) \rangle \approx -\frac{\eta \operatorname{sign}(\chi_{t})}{\chi_{\max}} \mathcal{K}^{\mathrm{(A\text{-}SignGD)}}(\xi, \xi_{t}),
$$

where the approximation comes from the difference between $\epsilon$-sign and sign.

Proof.
By the update rule of $\mathrm{SignGD}$ , $\frac{\Delta M_t}{\chi_{\mathrm{max}}} = -\tilde{\eta}_M\epsilon$ -sign $(\nabla_Mf_{t-1})$ . It suffices to prove + +$$ +\tilde {\eta} _ {M} \left\langle \nabla_ {M} f _ {t} (\xi), \epsilon \text {- s i g n} \left(\nabla_ {M} f _ {t} \left(\xi_ {t}\right)\right) \right\rangle = \tilde {\eta} _ {M} \left\langle \nabla_ {M} f _ {0} (\xi), \epsilon \text {- s i g n} \left(\nabla_ {M} f _ {0} \left(\xi_ {t}\right)\right) \right\rangle +$$ + +when $n\to \infty$ + +Since + +$$ +\tilde {\eta} _ {M} \langle \nabla_ {M} f _ {t} (\xi), \epsilon \text {- s i g n} (\nabla_ {M} f _ {t} (\xi_ {t})) \rangle - \tilde {\eta} _ {M} \langle \nabla_ {M} f _ {0} (\xi), \epsilon \text {- s i g n} (\nabla_ {M} f _ {0} (\xi_ {t})) \rangle +$$ + +$$ += \tilde {\eta} _ {M} \left\langle \nabla_ {M} f _ {t} (\xi) - \nabla_ {M} f _ {0} (\xi), \epsilon - \operatorname {s i g n} \left(\nabla_ {M} f _ {t} \left(\xi_ {t}\right)\right) \right\rangle + \tag {4} +$$ + +$$ +\tilde {\eta} _ {M} \left\langle \nabla_ {M} f _ {t} (\xi), \epsilon - \operatorname {s i g n} \left(\nabla_ {M} f _ {t} \left(\xi_ {t}\right)\right) - \epsilon - \operatorname {s i g n} \left(\nabla_ {M} f _ {0} \left(\xi_ {t}\right)\right) \right\rangle + \tag {5} +$$ + +$$ +\tilde {\eta} _ {M} \left\langle \nabla_ {M} f _ {t} (\xi) - \nabla_ {M} f _ {0} (\xi), \epsilon - \operatorname {s i g n} \left(\nabla_ {M} f _ {t} \left(\xi_ {t}\right)\right) - \epsilon - \operatorname {s i g n} \left(\nabla_ {M} f _ {0} \left(\xi_ {t}\right)\right) \right\rangle , \tag {6} +$$ + +we only need to prove Equations (4) to (6) are all 0 when $n\to \infty$ + +Let $\xi^{*} = \arg \max_{\xi^{\prime}}\| \nabla_{M}f_{0}(\xi^{\prime})\|_{2}^{2}$ be the input of maximum gradient scale, then by Fixed Features, we have + +$$ +\frac {\left\| \nabla_ {M} f _ {t} (\xi) - \nabla_ {M} f _ {0} (\xi) \right\| _ {2}}{\left\| \nabla_ {M} f _ {0} \left(\xi^ {*}\right) \right\| _ {2}} = o (1). 
\tag{7}
$$

Since $|\epsilon\text{-sign}(x) - \epsilon\text{-sign}(y)| \leq |x - y| / \epsilon$,

$$
\|\epsilon\text{-sign}(\nabla_{M} f_{t}(\xi)) - \epsilon\text{-sign}(\nabla_{M} f_{0}(\xi))\|_{2} \leq \|\nabla_{M} f_{t}(\xi) - \nabla_{M} f_{0}(\xi)\|_{2} / \epsilon. \tag{8}
$$

Combined with $\|\nabla_M f_0(\xi^*)\|_2 / \sqrt{N} = \Theta(\epsilon)$ (where $N$ is the number of entries of $M$; this follows from g) of Assumption C.2), we have

$$
\begin{array}{l} \frac{\|\epsilon\text{-sign}(\nabla_{M} f_{t}(\xi)) - \epsilon\text{-sign}(\nabla_{M} f_{0}(\xi))\|_{2}}{\|\epsilon\text{-sign}(\nabla_{M} f_{0}(\xi^{*}))\|_{2}} \\ \leq \frac{\left\|\nabla_{M} f_{t}(\xi) - \nabla_{M} f_{0}(\xi)\right\|_{2} / \epsilon}{\left\|\epsilon\text{-sign}(\nabla_{M} f_{0}(\xi^{*}))\right\|_{2}} \quad \text{by (8)} \\ = \frac{\left\|\nabla_{M} f_{t}(\xi) - \nabla_{M} f_{0}(\xi)\right\|_{2}}{\left\|\nabla_{M} f_{0}(\xi^{*})\right\|_{2}} \cdot \frac{\left\|\nabla_{M} f_{0}(\xi^{*})\right\|_{2} / \sqrt{N}}{\epsilon \|\epsilon\text{-sign}(\nabla_{M} f_{0}(\xi^{*}))\|_{2} / \sqrt{N}} \\ = \frac{\left\|\nabla_{M} f_{t}(\xi) - \nabla_{M} f_{0}(\xi)\right\|_{2}}{\left\|\nabla_{M} f_{0}(\xi^{*})\right\|_{2}} \cdot \Theta(1) = o(1). \quad (9) \\ \end{array}
$$

By d) in Assumption C.2, considering the training scheme that sets $\xi_{1} = \xi^{*}$ and chooses the loss function $\ell_t$ so that $\chi_{1} = \Theta(1)$, we have

$$
\frac{f_{1}(\xi^{*}) - f_{0}(\xi^{*})}{\chi_{1}} = -\frac{\eta_{M} \mathrm{sign}(\chi_{1})}{\chi_{1}} \langle \nabla_{M} f_{0}(\xi^{*}), \epsilon\text{-sign}(\nabla_{M} f_{0}(\xi^{*})) \rangle = O(1).
$$

By h) in Assumption C.2, the scale of $\tilde{\eta}_M$ is identical across different training schemes, so we have

$$
-\tilde{\eta}_{M} \left\langle \nabla_{M} f_{0}(\xi^{*}), \epsilon\text{-sign}(\nabla_{M} f_{0}(\xi^{*})) \right\rangle = O(1).
$$

It is also easy to see that $\tilde{\eta}_M \|\nabla_M f_0(\xi^*)\|_2 \|\epsilon\text{-sign}(\nabla_M f_0(\xi^*))\|_2$ has the same scale as $\tilde{\eta}_M \langle \nabla_M f_0(\xi^*), \epsilon\text{-sign}(\nabla_M f_0(\xi^*)) \rangle$, which is $O(1)$.

Given Equations (7) and (9), we now prove that Equations (4) to (6), each divided by $\tilde{\eta}_M \|\nabla_M f_0(\xi^*)\|_2 \|\epsilon\text{-sign}(\nabla_M f_0(\xi^*))\|_2$, are all 0 as $n \to \infty$. Since $\tilde{\eta}_M \|\nabla_M f_0(\xi^*)\|_2 \|\epsilon\text{-sign}(\nabla_M f_0(\xi^*))\|_2 = O(1)$, this implies that Equations (4) to (6) are all 0 as $n \to \infty$, concluding the proof.

For Equation (4),

$$
\begin{array}{l} \frac{\tilde{\eta}_{M} \left\langle \nabla_{M} f_{t}(\xi) - \nabla_{M} f_{0}(\xi), \epsilon\text{-sign}(\nabla_{M} f_{t}(\xi_{t})) \right\rangle}{\tilde{\eta}_{M} \left\|\nabla_{M} f_{0}(\xi^{*})\right\|_{2} \|\epsilon\text{-sign}(\nabla_{M} f_{0}(\xi^{*}))\|_{2}} \\ \leq \frac{\|\nabla_{M} f_{t}(\xi) - \nabla_{M} f_{0}(\xi)\|_{2} \|\epsilon\text{-sign}(\nabla_{M} f_{t}(\xi_{t}))\|_{2}}{\|\nabla_{M} f_{0}(\xi^{*})\|_{2} \|\epsilon\text{-sign}(\nabla_{M} f_{0}(\xi^{*}))\|_{2}} \\ = \frac{\left\|\nabla_{M} f_{t}(\xi) - \nabla_{M} f_{0}(\xi)\right\|_{2}}{\left\|\nabla_{M} f_{0}(\xi^{*})\right\|_{2}} = o(1).
\quad \text{by (7)} \\ \end{array}
$$

Similarly, for Equation (5),

$$
\begin{array}{l} \frac{\tilde{\eta}_M \left\langle \nabla_M f_t(\xi), \epsilon\text{-sign}(\nabla_M f_t(\xi_t)) - \epsilon\text{-sign}(\nabla_M f_0(\xi_t)) \right\rangle}{\tilde{\eta}_M \| \nabla_M f_0(\xi^*) \|_2 \| \epsilon\text{-sign}(\nabla_M f_0(\xi^*)) \|_2} \\ \leq \frac{\| \epsilon\text{-sign}(\nabla_M f_t(\xi)) - \epsilon\text{-sign}(\nabla_M f_0(\xi)) \|_2}{\| \epsilon\text{-sign}(\nabla_M f_0(\xi^*)) \|_2} = o(1), \quad \text{by (9)} \\ \end{array}
$$

and for Equation (6),

$$
\begin{array}{l} \frac{\tilde{\eta}_M \left\langle \nabla_M f_t(\xi) - \nabla_M f_0(\xi), \epsilon\text{-sign}(\nabla_M f_t(\xi_t)) - \epsilon\text{-sign}(\nabla_M f_0(\xi_t)) \right\rangle}{\tilde{\eta}_M \| \nabla_M f_0(\xi^*) \|_2 \| \epsilon\text{-sign}(\nabla_M f_0(\xi^*)) \|_2} \\ \leq \frac{\| \epsilon\text{-sign}(\nabla_M f_t(\xi)) - \epsilon\text{-sign}(\nabla_M f_0(\xi)) \|_2}{\| \epsilon\text{-sign}(\nabla_M f_0(\xi^*)) \|_2} \cdot \frac{\| \nabla_M f_t(\xi) - \nabla_M f_0(\xi) \|_2}{\| \nabla_M f_0(\xi^*) \|_2} \\ = o(1). \quad \text{by (7) and (9)} \\ \end{array}
$$

# C.3. Prompt-based Fine-Tuning

Prompt-based fine-tuning uses the pre-trained network directly without substituting or adding any parameters. Therefore, without any additional assumptions, the behaviors of fine-tuning and pre-training are the same from the perspective of Tensor Programs.
We thus adopt the assumption that $\chi = o(1)$ before fine-tuning (Definition 5.3). Without this assumption, the fine-tuning of $f$ will not exhibit kernel behavior if the pre-training is in the feature learning regime. Intuitively, this assumption is believable because wider pre-trained networks can solve downstream tasks better. In this section, we prove that prompt-based fine-tuning exhibits kernel behavior when this assumption holds.

Theorem C.5. If the downstream task $\Xi$ is natural for network $f$, that is,

$$
\chi_{\max} \triangleq \max_{(\xi, y) \in \Xi} \chi(\xi, y, f_0) = o(1),
$$

then under Assumption C.2, the fine-tuning of $f$ exhibits kernel behavior (Definition C.3).

Below we provide a proof that is heavily based on Tensor Programs and the analysis in (Yang & Hu, 2021). For readers who are not familiar with Tensor Programs, we provide intuitive examples in the next few subsections, where we focus on a three-layer linear network parameterized with $\mu \mathrm{P}$.

Proof. The high-level proof consists of two parts: 1) we prove that after each step, the update of the function $f$ is $O(\chi_t)$. Combined with the fact that $\ell_t''$ is always bounded by some constant $C$, we can inductively prove $\chi_t \leq \chi(\xi_t, y_t, f_0) + C \cdot |f_{t-1}(\xi_t) - f_0(\xi_t)| = O(\chi_{\max})$ for all $t$. 2) Given $\chi_t = O(\chi_{\max}) = o(1)$, we show the fine-tuning exhibits kernel behavior.

We first prove the theorem under the assumption that the network is a multilayer perceptron and the optimizer is SGD, which is the same setting as (Yang & Hu, 2021). We will later extend this to more general cases.

Consider the following $L$-hidden-layer perceptron:

$$
h^1(\xi) = U \xi,
$$

and

$$
x^l(\xi) = \phi(h^l(\xi)), \quad h^{l+1}(\xi) = W^{l+1} x^l(\xi), \quad \text{for } l = 1, \dots, L - 1,
$$

and

$$
f(\xi) = V x^L(\xi).
$$

Following (Yang & Hu, 2021), we let the learning rate for every parameter be $\eta n^{-c}$. Let $W^1 = U$ and $W^{L+1} = V$, and for $l = 1, \dots, L + 1$, we parametrize $W^l$ as $W^l = \gamma_l w^l$ for the actual trainable parameter $w^l$, and we initialize each coordinate of $w^l$ i.i.d. from $\mathcal{N}(0, \sigma_l^2)$. This setting covers all possible parameterizations based on Lemma C.6. For convenience, we assume $\gamma_l = n^{-a_l}$ and $\sigma_l = n^{-b_l}$. Without loss of generality, we further assume that $\chi_{\max} = \Theta(n^{-d})$. Below, we will also inductively show $\chi_t = O(n^{-d})$ by showing $|f_{t+1} - f_t| = O(n^{-d})$.

By Theorem 3.3 of (Yang & Hu, 2021), a stable network implies

$$
r \triangleq \min(a_{L+1} + b_{L+1}, 2a_{L+1} + c) + c - 1 + \min_{l=1}^{L} [2a_l + \mathbb{I}(l = 1)] \geq 0.
$$

Also by Theorem 3.8 of (Yang & Hu, 2021), for a nontrivial stable network (included in Assumption C.2), if $r > 0$ then there exists a kernel $\mathcal{K}$ such that

$$
f_{t+1}(\xi) = f_t(\xi) - \eta \chi_t \mathcal{K}(\xi, \xi_t),
$$

which is very close to our definition of kernel behavior. In fact, we will prove that they are equivalent in the fine-tuning case.

Since $\chi_t = O(n^{-d})$ for fine-tuning, it is equivalent to set the learning rate to $\eta n^{-c-d}$ and replace $\chi_t$ with $\hat{\chi}_t = n^d \chi_t = O(1)$. Formally, we are considering the following training scheme: at the pre-training stage, $r \geq 0$ (so it could demonstrate feature learning or kernel behavior); at the fine-tuning stage, $c$ is increased to $c' \triangleq c + d > c$, and thus the corresponding $r$ is increased to be strictly greater than 0. Therefore, it suggests kernel behavior, with the following caveats.

Do we handle the case of different learning rates during pre-training and fine-tuning?
The answer is effectively YES, because the above scheme is equivalent to training from scratch with learning rate $\eta n^{-c-d}$. First of all, the scales of the updates on $W^l$, $h^l$, $x^l$ and $f$ are all multiplied by $n^{-d}$ when switching from the pre-training stage ($\eta n^{-c}$ learning rate) to the fine-tuning stage ($\eta n^{-c-d}$ learning rate). The scales are exactly the same as training from scratch with an $\eta n^{-c-d}$ learning rate, except that $b_{L+1}$ needs to be changed to $b_{L+1}' \triangleq \min(b_{L+1}, a_{L+1} + c)$. Note this change of $b_{L+1}$ does not affect the fact that $r$ is updated to $r' \triangleq r + d > 0$.

Does $r' > 0$ formally imply our definition of kernel behavior (Definition C.3)? The answer is YES. We first prove Fixed Features in Definition C.3. The gradient of matrix $W^l$ is equal to the outer product of $\nabla_{h^l} f$ (the gradient w.r.t. $h^l$) and $x^{l-1}$. Let $dh_t^l$ be the normalized gradient w.r.t. $h^l$ at step $t$ (so $dh_t^l = \Theta(1)$), and $x_t^l$ be the $x^l$ at step $t$ ($x_t^l = \Theta(1)$ without normalization). It suffices to prove $dh_t^l - dh_0^l = O(1)$ and $x_t^l - x_0^l = o(1)$. The latter was proved by Proposition H.27 of (Yang & Hu, 2021). To prove $dh_t^l - dh_0^l = O(1)$, we let $dx_t^l$ be the normalized gradient w.r.t. $x^l$ at step $t$, and compute the scales of $dh_t^l - dh_{t-1}^l$ and $dx_t^l - dx_{t-1}^l$ inductively from $l = L$ down to $l = 1$. We obtain that they both have the same scale of

$$
n^{-\min(2a_{L+1} + c - a_{L+1} - b_{L+1}', \; a_{L+1} + b_{L+1} + c' - 1 + \min_{m=l+1}^{L} 2a_m)} \leq n^{-\min(0, r')} = 1,
$$

where the inequality holds because $b_{L+1}' \leq a_{L+1} + c$ and $r' \leq a_{L+1} + b_{L+1} + c' - 1 + \min_{m=l+1}^{L} 2a_m$.

Second, we prove Linearization in Definition C.3.
We need to first make a slight modification to the Tensor Program in (Yang & Hu, 2021), namely changing the computation of $f_t(\xi) - f_{t-1}(\xi)$ to $n^d (f_t(\xi) - f_{t-1}(\xi))$. By Theorem H.32 of (Yang & Hu, 2021) and its definition of $\Sigma$, we can show that

$$
\begin{array}{l} \lim_{n \to \infty} n^d (f_t(\xi) - f_{t-1}(\xi)) = \lim_{n \to \infty} \sum_{l=1}^{L+1} \eta n^{-c} \frac{\chi_t}{n^{-d}} \langle \nabla_{W^l} f_{t-1}(\xi), \nabla_{W^l} f_{t-1}(\xi_t) \rangle \\ = \lim_{n \to \infty} \sum_{l=1}^{L+1} \left\langle \nabla_{W^l} f_{t-1}(\xi), \frac{\Delta W_t^l}{n^{-d}} \right\rangle. \\ \end{array}
$$

This is exactly Linearization in Definition C.3 if we multiply both sides by $n^{-d} / \chi_{\max}$. Meanwhile, it also implies $f_t(\xi) - f_{t-1}(\xi) = O(n^{-d})$.

**From SGD to SignGD.** Since $\mathrm{sign}(xy) = \mathrm{sign}(x)\mathrm{sign}(y)$, the update of matrix $W^l$ can still be written as the outer product of two vectors, i.e., $\Delta W_t^l = -\eta n^{-c-d} \mathrm{sign}(\chi_t) \mathrm{sign}(\nabla_{h^l} f_{t-1}) \otimes \mathrm{sign}(x_{t-1}^{l-1})$. After applying sign, the scales of the vectors change. If the parametrization is kept the same, the scales of the vectors using SignGD will differ from those using SGD. This can be easily resolved by changing the learning rate for each parameter (as in Assumption C.2), so the scaling change brought by sign is corrected. Furthermore, as also mentioned in Assumption C.2, we need to approximate sign by a smoothed version, $\epsilon$-sign, so that the Master Theorem of Tensor Programs can still apply.

**Extension to universal architectures.** The theorem can apply to any network whose first forward pass can be written as a Tensor Program. Given this condition, the forward pass, backward pass, and kernel of any step can be written as Tensor Programs (Yang, 2020a;b).
Analyzing the scaling of the Tensor Program requires the following steps:

1. Extension to general computation graphs. We can still inductively reason about the scales of preactivations and activations in the topological order of the computation graph, and similarly reason about the gradients in the reverse topological order.
2. Extension to weight sharing. We may use a weight multiple times in a forward pass. The preactivations, activations and their gradients will not be affected. Only the update of a weight is now a sum of several vector outer products, depending on the number of occurrences of the weight.

![](images/e6e77eadef25125cc85ff55ab87533d49f1d8b29d5f1be0b5df8283ad41f3a65.jpg)

# C.4. $\mu \mathbf{P}$ for SGD and SignGD

In the following subsections, we provide more intuition for Theorem C.5. Although we consider all types of pre-trained models, we are mostly interested in models with feature learning behavior, because it is likely not true that gradients can be approximated as fixed throughout the entirety of pre-training. For pre-trained models with kernel behavior, it is obvious that fine-tuning with the same settings as pre-training (i.e., prompt-based FT) will also exhibit kernel behavior. Furthermore, Theorem H.17 of (Yang & Hu, 2021) proved that if the last layer is replaced with a freshly initialized layer (i.e., standard FT), fine-tuning from a pre-trained model with kernel behavior is the same as training on the downstream task from scratch.

Among all the pre-training schemes that exhibit feature learning behavior, $\mu \mathrm{P}$ is special because each parameter (except the last layer) can on its own push the model to perform feature learning. Therefore, to build an intuitive description of fine-tuning behavior, we assume that the model was pre-trained with $\mu \mathrm{P}$. We note again that our main result does not require this assumption.
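To make the $\mu\mathrm{P}$ scales concrete, the following sketch checks them numerically for a three-layer linear network $f(\xi) = V^\top W U \xi$ of the kind analyzed below. The widths, input dimension, and seeds are illustrative assumptions; the initialization scales follow the $M_{-1}$ row of Table 11. Coordinates of the hidden vectors stay $\Theta(1)$ while the initial output shrinks as the width $n$ grows, matching the claim that $f_{-1}(\xi) = o(1)$ in the infinite-width limit.

```python
import numpy as np

def mup_init(n, d_in=8, seed=0):
    # muP-style initialization scales (cf. Table 11): U entries Theta(1),
    # W entries Theta(1/sqrt(n)), V entries Theta(1/n).
    rng = np.random.default_rng(seed)
    U = rng.normal(0.0, 1.0, size=(n, d_in))
    W = rng.normal(0.0, 1.0 / np.sqrt(n), size=(n, n))
    V = rng.normal(0.0, 1.0 / n, size=(n, 1))
    return U, W, V

def scales(n, seed=0):
    U, W, V = mup_init(n, seed=seed)
    xi = np.ones(8)
    h1 = U @ xi            # coordinates Theta(1)
    h2 = W @ h1            # coordinates still Theta(1)
    f = float(V.T @ h2)    # initial output f_{-1}(xi): shrinks with n
    rms = lambda v: float(np.sqrt(np.mean(v ** 2)))
    return rms(h1), rms(h2), abs(f)

for n in (64, 256, 1024, 4096):
    print(n, scales(n))  # hidden RMS stays O(1); |f| decreases with width
```

Running this shows the per-coordinate RMS of $h^1$ and $h^2$ staying roughly constant across widths, while $|f_{-1}(\xi)|$ decays roughly like $1/\sqrt{n}$.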
The formulation of $\mu \mathrm{P}$ contains three sets of hyperparameters: the initial variance of $M$, the multiplier of $M$, and the learning rate of $M$, for $M \in \{U^i\}_i \cup \{W^j\}_j \cup \{V\}$. However, even if we restrict these three hyperparameters to be of the form $n^\alpha$, $\mu \mathrm{P}$ is not unique, because there is one degree of freedom for each weight, according to the following lemma.

Lemma C.6 (Lemma J.1 of (Yang et al., 2022)). Consider a weight matrix $M$ with learning rate $C$, initialized as $M \sim \mathcal{N}(0, B^2)$, and with a multiplier $A$. Then for any $\gamma > 0$, $f_t(\xi)$ stays fixed for all $t$ and $\xi$ if we set

- $A \gets A\gamma, B \gets B / \gamma, C \gets C / \gamma^2$ if training with SGD.
- $A \gets A\gamma, B \gets B / \gamma, C \gets C / \gamma$ if training with Adam.

Note the conclusion about Adam in Lemma C.6 also extends to SignGD.

With Lemma C.6, we can always set the multiplier of any weight matrix $M$ to 1, which leaves us only the initialization variance $\sigma_M^2$ and the learning rate $\eta_M$. Furthermore, in terms of the scale at initialization and the scale of updates, $\mu \mathrm{P}$ for SGD and SignGD are entirely the same. The only difference is the learning rate. We provide details in Table 11 (recall that $M_{-1}$ is the weight $M$ at the initialization of pre-training and $\Delta M_0 = M_0 - M_{-1}$ is the overall change of the weight during pre-training. We further assume $\chi_t = \Theta(n^{-d})$ for all $t$; thus $\eta_M n^d$ is the scale of the learning rate for SignGD in pre-training).

Since we have different learning rates for different $M$, the kernel we care about is defined as

$$
\mathcal{K}(\xi, \xi') = \sum_{M \in \mathcal{M}} \eta_M' \left\langle \nabla_M f(\xi), \phi(\nabla_M f(\xi')) \right\rangle,
$$
| coordinate-wise scale | $M = U^i$ | $M = W^j$ | $M = V$ |
| --- | --- | --- | --- |
| $M_{-1}$ | $\Theta(1)$ | $\Theta(1/\sqrt{n})$ | $\Theta(1/n)$ |
| $\Delta M_0$ | $\Theta(1)$ | $\Theta(1/n)$ | $\Theta(1/n)$ |
| $\eta_M$ for SGD | $\Theta(n)$ | $\Theta(1)$ | $\Theta(1/n)$ |
| $\eta_M \cdot n^d$ for SignGD/Adam | $\Theta(1)$ | $\Theta(1/n)$ | $\Theta(1/n)$ |
Table 11. Scales of initialization, update and learning rate for $\mu \mathrm{P}$ in pre-training.

where $\phi$ is the identity if the algorithm is SGD and $\phi = \mathrm{sign}$ if the algorithm is SignGD, and $\eta_M' = \eta_M$ for SGD, $\eta_M' = \eta_M n^d$ for SignGD. We use $\eta_M'$ to keep $\mathcal{K}(\xi, \xi') = \Theta(1)$.

We want to prove that the dynamics of the network follow

$$
\frac{f_t(\xi) - f_{t-1}(\xi)}{n^{-d}} \rightarrow -\tilde{\chi}_t \mathcal{K}(\xi, \xi_t) \quad \text{when } n \rightarrow \infty,
$$

where $\tilde{\chi}_t = n^d \chi_t$ for SGD, and $\tilde{\chi}_t = \mathrm{sign}(\chi_t)$ for SignGD. In any case, $\tilde{\chi}_t = \Theta(1)$.

# C.5. Prompt-based Fine-Tuning: A Linear Example

As an intuitive example, we consider a three-layer linear network

$$
f(\xi; U, W, V) = V^\top W U \xi.
$$

For simplicity, we train the network with SGD and freeze $V$, so $\eta_V = 0$. Then we have $\nabla_U f = W^\top V \xi^\top$ and $\nabla_W f = V (U\xi)^\top$. We assume $|\langle \xi, \xi' \rangle| > 0$ for any $\xi, \xi'$.

In what follows, we will prove that during pre-training, $f$ cannot be written as its first-order Taylor expansion (i.e., it exhibits feature learning). Then we will prove that the opposite holds for fine-tuning. In fact, if we only look at one gradient step, the only higher-order term equals $\eta_W \eta_U \chi_t^2 \| V \|^2 \langle \xi_t, \xi \rangle f_{t-1}(\xi) = \Theta(\chi_t^2 f_{t-1}(\xi))$, where $f_{t-1}(\xi)$ is mostly $\Theta(1)$, and $\chi_t$ is mostly $\Theta(1)$ in pre-training and $o(1)$ in fine-tuning (by Definition 5.3).

**Zero step (Pre-training)** We model the pre-training of $f$ as one step of training with $\chi_0 = \Theta(1)$. Then we have $\Delta U_0 = -\eta_U \chi_0 W_{-1}^\top V \xi_0^\top$ and $\Delta W_0 = -\eta_W \chi_0 V (U_{-1} \xi_0)^\top$.
Since $W_{-1}$ is independent of $V$, we have $W_{-1}^\top V = \Theta(1/n)$ coordinate-wise, and thus $\Delta U_0 = \Theta(1)$, matching Table 11. On the other hand, it is obvious that $\Delta W_0 = \Theta(1/n)$ because $V = \Theta(1/n)$ and $U = \Theta(1)$, also matching Table 11.

Then the function is now

$$
\begin{array}{l} f_0(\xi) = V^\top \left(W_{-1} + \Delta W_0\right) \left(U_{-1} + \Delta U_0\right) \xi \\ = V^\top (W_{-1} - \eta_W \chi_0 V (U_{-1} \xi_0)^\top)(U_{-1} \xi - \eta_U \chi_0 W_{-1}^\top V \langle \xi_0, \xi \rangle) \\ = V^\top W_{-1} U_{-1} \xi - \eta_U \chi_0 \| W_{-1}^\top V \|_2^2 \langle \xi_0, \xi \rangle - \eta_W \chi_0 \| V \|^2 \langle U_{-1} \xi_0, U_{-1} \xi \rangle \\ + \eta_W \eta_U \chi_0^2 \| V \|^2 \langle \xi_0, \xi \rangle V^\top W_{-1} U_{-1} \xi_0. \\ \end{array}
$$

It is not difficult to see that $\eta_U \chi_0 \| W_{-1}^\top V \|_2^2 \langle \xi_0, \xi \rangle$, $\eta_W \chi_0 \| V \|^2 \langle U_{-1} \xi_0, U_{-1} \xi \rangle$, and $\eta_W \eta_U \chi_0^2 \| V \|^2 \langle \xi_0, \xi \rangle$ are all $\Theta(1)$. Unfortunately, here $V^\top W_{-1} U_{-1} \xi_0 = f_{-1}(\xi_0) = o(1)$ in the infinite-width limit, but if we train one more step, it is easy to see that all four terms of $f_0$ are $\Theta(1)$. Therefore, pre-training with $\mu \mathrm{P}$ exhibits feature learning.

**First step** At the first step of fine-tuning, we have $\Delta U_1 = -\eta_U \chi_1 W_0^\top V \xi_1^\top$ and $\Delta W_1 = -\eta_W \chi_1 V (U_0 \xi_1)^\top$. The function can be written as

$$
f_1(\xi) = V^\top \left(W_0 + \Delta W_1\right) \left(U_0 + \Delta U_1\right) \xi,
$$

and

$$
f_1(\xi) - f_0(\xi) = V^\top \Delta W_1 U_0 \xi + V^\top W_0 \Delta U_1 \xi + V^\top \Delta W_1 \Delta U_1 \xi.
\tag {10}
$$

Note that the sum of the first and second terms is exactly $-\chi_1 \mathcal{K}(\xi, \xi_1)$.

Plugging $\Delta W_1 = -\eta_W \chi_1 V (U_0 \xi_1)^\top$ into the first term of Equation (10),

$$
V^\top \Delta W_1 U_0 \xi = -\eta_W \chi_1 V^\top V (U_0 \xi_1)^\top U_0 \xi = \Theta(\chi_1),
$$

because

$$
\begin{array}{l} \left(U_0 \xi_1\right)^\top U_0 \xi = \left(U_{-1} \xi_1 + \Delta U_0 \xi_1\right)^\top \left(U_{-1} \xi + \Delta U_0 \xi\right) \\ = \left\langle U_{-1} \xi_1, U_{-1} \xi \right\rangle - \eta_U \chi_0 \left\langle \xi_1, \xi_0 \right\rangle f_{-1}(\xi) - \eta_U \chi_0 \left\langle \xi, \xi_0 \right\rangle f_{-1}(\xi_1) + \| \Delta U_0 \|^2 \left\langle \xi_1, \xi \right\rangle \\ = \Theta(n). \\ \end{array}
$$

Plugging $\Delta U_1 = -\eta_U \chi_1 W_0^\top V \xi_1^\top$ into the second term of Equation (10), we have

$$
V^\top W_0 \Delta U_1 \xi = -\eta_U \chi_1 V^\top W_0 W_0^\top V \xi_1^\top \xi = \Theta(\chi_1),
$$

because

$$
\begin{array}{l} V^\top W_0 W_0^\top V = \left\| \left(W_{-1} + \Delta W_0\right)^\top V \right\|_2^2 \\ = \| W_{-1}^\top V \|_2^2 + \eta_W^2 \chi_0^2 \| V \|_2^4 \| U_{-1} \xi_0 \|_2^2 - 2 \eta_W \chi_0 \| V \|_2^2 f_{-1}(\xi_0) = \Theta(1/n).
\\ \end{array}
$$

The third term of Equation (10) equals

$$
\eta_U \eta_W \chi_1^2 V^\top V (U_0 \xi_1)^\top W_0^\top V \xi_1^\top \xi = \eta_U \eta_W \chi_1^2 \| V \|^2 \langle \xi_1, \xi \rangle f_0(\xi_1) = \Theta(\chi_1^2),
$$

because $f_0(\xi_1) = \Theta(1)$, unlike the $f_{-1}$ term in the "zero step" analysis. Therefore, $\frac{f_1(\xi) - f_0(\xi)}{\chi_1} \to -\mathcal{K}(\xi, \xi_1)$.

**Second step** At the second step of fine-tuning, we have $\Delta U_2 = -\eta_U \chi_2 W_1^\top V \xi_2^\top$ and $\Delta W_2 = -\eta_W \chi_2 V (U_1 \xi_2)^\top$, and

$$
f_2(\xi) - f_1(\xi) = V^\top \Delta W_2 U_1 \xi + V^\top W_1 \Delta U_2 \xi + V^\top \Delta W_2 \Delta U_2 \xi. \tag {11}
$$

Assuming $\chi_2$ and $\chi_1$ share the same order, then when $n \to \infty$,

$$
\begin{array}{l} \frac{f_2(\xi) - f_1(\xi)}{\chi_2} \rightarrow V^\top \Delta W_2 U_1 \xi / \chi_2 + V^\top W_1 \Delta U_2 \xi / \chi_2 \\ = -\eta_W V^\top V (U_1 \xi_2)^\top U_1 \xi - \eta_U V^\top W_1 W_1^\top V \xi_2^\top \xi \\ \rightarrow -\eta_W V^\top V (U_0 \xi_2)^\top U_0 \xi - \eta_U V^\top W_0 W_0^\top V \xi_2^\top \xi \\ = -\mathcal{K}(\xi, \xi_2). \\ \end{array}
$$

**$t$th step** Same as the second step, noting that $\Delta U_t$ and $\Delta W_t$ always have smaller order than $\Delta U_0$ and $\Delta W_0$.

# C.6. LoRA FT Exhibits Kernel Behavior

Note that Theorem C.5 works for any architecture, including LoRA. In order to apply the theorem to LoRA FT, we need to set the initialization and learning rates of the matrices $A$ and $B$ in LoRA correctly so that they satisfy Assumption C.2.
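A minimal sketch of the LoRA parametrization analyzed in this section, $h = Wx + BAx$ with $B$ initialized to zero and $A$ at scale $\Theta(1/\sqrt{n})$. The widths, learning rate, and the plain SGD step are illustrative assumptions, not the paper's training code.

```python
import numpy as np

class LoRALinear:
    """h = W x + B A x with W frozen; only A and B are trained.
    B starts at zero and A at scale Theta(1/sqrt(n)); widths are illustrative."""

    def __init__(self, n, k, seed=0):
        rng = np.random.default_rng(seed)
        self.W = rng.normal(0.0, 1.0 / np.sqrt(n), size=(n, n))  # frozen
        self.A = rng.normal(0.0, 1.0 / np.sqrt(n), size=(k, n))  # trainable
        self.B = np.zeros((n, k))                                # trainable, zero init

    def __call__(self, x):
        return self.W @ x + self.B @ (self.A @ x)

    def sgd_step(self, x, dh, lr):
        # Chain rule for h = Wx + BAx with upstream gradient dh = grad_h f:
        #   grad_B = dh (Ax)^T,  grad_A = (B^T dh) x^T.
        # grad_A vanishes at initialization because B = 0.
        grad_B = np.outer(dh, self.A @ x)
        grad_A = np.outer(self.B.T @ dh, x)
        self.B -= lr * grad_B
        self.A -= lr * grad_A
```

At initialization the layer computes exactly $Wx$, and the first SGD step moves only $B$, matching the observation in Appendix D.2 that the gradient of $A$ is zero while $B = 0$.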
Here we provide a relatively straightforward way to accomplish this (assuming only intermediate layers use LoRA):

- Let $k = \alpha n$, where $\alpha$ is a small constant independent of $n$.
- Let the initialization scale of $A$ be $\Theta(1/\sqrt{n})$.
- Let the learning rate of $A$ and $B$ be $\Theta(1)$ for SGD and $\Theta(n^{-1-d})$ for SignGD / Adam.

In short, the initialization and learning rate follow $\mu \mathrm{P}$ as in Table 11, treating $A$ and $B$ as one of the $W^j$. This setup easily generalizes to the case where $U$ and $V$ also use LoRA.

# D. Subspace-Based Fine-Tuning Methods

Experimental results related to LoRA FT are presented in Table 12. These results show that SGD-FT and SGD-LoRA FT perform similarly in the few-shot setting for many tasks, although the original experiments in Hu et al. (2021) focused on Adam. The closeness of $\mathcal{K}^{(\mathrm{SGD})}$ and $\mathcal{K}_{\mathrm{LoRA}}^{(\mathrm{SGD})}$ to their respective fine-tuning methods suggests that FT and LoRA FT can be described by kernel dynamics. Moreover, we show that $\mathcal{K}^{(\mathrm{SGD})}$ and $\mathcal{K}_{\mathrm{LoRA}}^{(\mathrm{SGD})}$ achieve similar performance to each other, providing empirical evidence for the claim in Theorem 7.2 that LoRA preserves the kernel.
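The kernel analogs of fine-tuning replace gradient descent with a kernel method built on the fine-tuning kernel's Gram matrix. As a rough sketch of the idea (the ridge solver, regularizer, and toy data below are stand-in assumptions, not the paper's experimental protocol), one can fit kernel ridge regression on a precomputed kernel and predict with the test-train block; for a target that lies in the span of the underlying features, the kernel solution recovers it almost exactly:

```python
import numpy as np

def kernel_ridge_predict(K_train, y_train, K_test_train, reg=1e-3):
    # Solve (K + reg * I) alpha = y on the training block of the kernel,
    # then predict test outputs with the test-train block of the kernel.
    alpha = np.linalg.solve(K_train + reg * np.eye(len(y_train)), y_train)
    return K_test_train @ alpha

# Toy stand-in for the fine-tuning kernel: a Gram matrix of random features.
rng = np.random.default_rng(0)
F = rng.normal(size=(40, 5))
w = rng.normal(size=5)
y = F @ w                 # a target the kernel can represent exactly
K = F @ F.T
pred = kernel_ridge_predict(K[:30, :30], y[:30], K[30:, :30])
```

Here `pred` matches `y[30:]` up to the small bias introduced by the ridge term, illustrating how predictions come from the kernel alone, without updating any network weights.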
| $k$-shot | Method | SST-2 | MR | CR | QNLI | RTE | QQP |
| --- | --- | --- | --- | --- | --- | --- | --- |
| 16 | SGD-FT | 89.0 (1.5) | 83.2 (2.4) | 93.3 (0.2) | 62.1 (3.1) | 60.0 (5.5) | 62.1 (2.3) |
|  | SGD-LoRA FT | 89.1 (0.6) | 82.7 (2.0) | 92.6 (0.8) | 57.1 (3.3) | 58.2 (2.9) | 59.8 (3.0) |
|  | $\mathcal{K}^{(\mathrm{SGD})}$ | 88.3 (0.3) | 84.7 (1.5) | 93.2 (0.9) | 60.1 (3.3) | 60.0 (4.7) | 58.2 (0.9) |
|  | $\mathcal{K}_{\mathrm{LoRA}}^{(\mathrm{SGD})}$ | 88.1 (0.4) | 84.9 (1.4) | 93.1 (1.0) | 59.4 (3.7) | 56.2 (5.8) | 58.2 (3.2) |
| 64 | SGD-FT | 89.7 (0.4) | 85.6 (1.1) | 94.3 (0.5) | 72.8 (2.2) | 68.9 (2.5) | 69.2 (1.3) |
|  | SGD-LoRA FT | 90.0 (0.2) | 85.7 (1.2) | 93.9 (0.7) | 73.8 (2.7) | 69.1 (1.8) | 68.3 (2.4) |
|  | $\mathcal{K}^{(\mathrm{SGD})}$ | 89.2 (1.0) | 86.4 (0.6) | 93.7 (0.4) | 67.3 (1.6) | 66.5 (2.5) | 66.4 (1.7) |
|  | $\mathcal{K}_{\mathrm{LoRA}}^{(\mathrm{SGD})}$ | 89.2 (0.7) | 85.7 (1.5) | 93.6 (0.4) | 66.0 (1.6) | 63.5 (3.5) | 63.9 (4.5) |
Table 12. Performance of prompt-based SGD FT and prompt-based SGD-LoRA FT, along with their kernel analogs $\mathcal{K}^{(\mathrm{SGD})}$ and $\mathcal{K}_{\mathrm{LoRA}}^{(\mathrm{SGD})}$, on a subset of tasks. SGD FT and SGD-LoRA FT achieve comparable performance, and $\mathcal{K}^{(\mathrm{SGD})}$ and $\mathcal{K}_{\mathrm{LoRA}}^{(\mathrm{SGD})}$ also achieve comparable performance to each other. We report F1 for QQP and accuracy otherwise, and average the metrics over 5 seeds. These experiments support Theorem 7.2.

# D.1. IntrinsicDimension FT

We discuss IntrinsicDimension FT (Li et al., 2018; Aghajanyan et al., 2021) here. When analyzed through the kernel, IntrinsicDimension FT and LoRA FT induce similar transformations in the optimization dynamics, but the former was originally proposed as a way to measure the difficulty of downstream tasks, whereas the latter was proposed as an alternative fine-tuning method.

Definition D.1 ($\mathcal{A}$-IntrinsicDimension FT (Li et al., 2018; Aghajanyan et al., 2021)). Let $\theta \in \mathbb{R}^M$ be the model parameters and fix a random projection $\Pi \in \mathbb{R}^{M \times k}$. Set $\theta$ to $\theta + \Pi \hat{\theta}$, where $\hat{\theta} \in \mathbb{R}^k$. To fine-tune, fix $\theta$ at its pre-trained value and only train $\hat{\theta}$.

We show a similar result for IntrinsicDimension FT as for LoRA FT: using a sufficiently large $k \geq \Theta(\log N / \epsilon^2)$ ensures that each element of the kernel is relatively unchanged.

Theorem D.2 (IntrinsicDimension FT preserves $\mathcal{K}^{(\mathrm{SGD})}$). Let $\Pi$ be a random matrix with each entry drawn i.i.d. from $\mathcal{N}(0, 1/k)$. Let $\mathcal{K}_{ID}^{(SGD)} \in \mathbb{R}^{N \times N}$ be the kernel analog to SGD-IntrinsicDimension FT (Definition D.1) on a downstream task $\Xi$. Additionally, assume $\mathcal{K}^{(SGD)}(i, j) \leq c$ for any $i, j \in [N]$.
Then,

$$
\operatorname{Pr}\left[ \exists i, j \in [N], \; | \mathcal{K}_{ID}^{(SGD)}(i, j) - \mathcal{K}^{(SGD)}(i, j) | \geq c \epsilon \right] \leq 4 N^2 \exp(-(\epsilon^2 - \epsilon^3) k / 4).
$$

# D.2. Proofs

A key step of the proof is to show that if $\mathcal{A}$ FT exhibits kernel behavior, then so does $\mathcal{A}$-LoRA FT. We show this step in Appendix C.6, since it invokes the Tensor Programs framework again. Now that we know FT follows kernel dynamics, we can move on to showing how LoRA and IntrinsicDimension FT modify the kernel.

We restate a corollary of the Johnson-Lindenstrauss lemma, which shows that inner products are preserved under random projection.

Lemma D.3 (Corollary of Johnson-Lindenstrauss (Johnson, 1984)). Let $u, v \in \mathbb{R}^d$ be such that $\| u \|^2 \leq c$ and $\| v \|^2 \leq c$. Let $h(x) = \frac{1}{\sqrt{k}} A x$, where $A \in \mathbb{R}^{k \times d}$ with each entry sampled i.i.d. from $\mathcal{N}(0, 1)$ or $\mathcal{U}(-1, 1)$. Then,

$$
\operatorname{Pr}[ | u \cdot v - h(u) \cdot h(v) | \geq c \epsilon ] \leq 4 \exp(-(\epsilon^2 - \epsilon^3) k / 4).
$$

Proof for Theorem D.2. Note $\nabla_{\hat{\theta}} f = \Pi^\top \nabla_\theta f$, and

$$
\mathcal{K}_{\mathrm{ID}}^{(\mathrm{SGD})}(i, j) - \mathcal{K}^{(\mathrm{SGD})}(i, j) = \langle \nabla_{\hat{\theta}} f(\xi_i; \theta), \nabla_{\hat{\theta}} f(\xi_j; \theta) \rangle - \langle \nabla_\theta f(\xi_i; \theta), \nabla_\theta f(\xi_j; \theta) \rangle.
$$

The rest follows from Lemma D.3 by setting $u = \nabla_\theta f(\xi_j; \theta)$, $v = \nabla_\theta f(\xi_i; \theta)$, and union bounding over all $i, j$ pairs.

We can now look at LoRA (Hu et al., 2021) for a simple fully connected layer.
The construction modifies each layer independently and only acts on fully connected layers, so this is the only part of the kernel that can change when parametrizing updates as in LoRA. For ease of notation, for any parameter or hidden vector $w$, we use $dw$ to denote $\nabla_w f(\xi; \theta)$, $dw(i)$ to denote $\nabla_w f(\xi_i; \theta)$, and $w_i$ to denote the resulting $w$ when the input is $\xi_i$.

Lemma D.4 (LoRA SGD Kernel). Let $h = Wx + BAx$ as defined in the paper, where $x \in \mathbb{R}^n$, $W \in \mathbb{R}^{m \times n}$, $B \in \mathbb{R}^{m \times k}$, and $A \in \mathbb{R}^{k \times n}$ with $k \ll n$. $B$ is initialized to 0 and $A$ is initialized with i.i.d. zero-mean Gaussian samples. SGD training with LoRA (i.e., fixing $W$ and allowing $A$ and $B$ to be updated) yields the kernel $\mathcal{K}_{\text{LoRA}}^{(\text{SGD})}$, whereas full FT with SGD yields the kernel $\mathcal{K}^{(\text{SGD})}$:

$$
\mathcal{K}_{LoRA}^{(SGD)} = dH dH^\top \odot (X A^\top A X^\top) \qquad \mathcal{K}^{(SGD)} = dH dH^\top \odot (X X^\top)
$$

where $dH \in \mathbb{R}^{N \times m}$ has $dh(i)$ in the $i$th row and $X \in \mathbb{R}^{N \times n}$ has $x_i$ in the $i$th row.

Proof. We start by noting the well-known fact that $dW = dh \otimes x$, where $dh$ is the gradient w.r.t. $h$ and $\otimes$ is the outer product. Thus, $\mathcal{K}^{(\mathrm{SGD})} = dH dH^\top \odot (XX^\top)$. In the LoRA setting, $dA = B^\top dh \otimes x$ and $dB = dh \otimes Ax$. Because we are in the kernel setting, $B = 0$, and thus $dA = 0$, throughout training. So,

$$
\mathcal{K}_{\mathrm{LoRA}}^{(\mathrm{SGD})}(i, j) = \langle dB(i), dB(j) \rangle = \langle dh(i), dh(j) \rangle \langle A x_i, A x_j \rangle.
$$

Analogous reasoning yields

$$
\mathcal{K}^{(\mathrm{SGD})}(i, j) = \langle dh(i), dh(j) \rangle \langle x_i, x_j \rangle.
$$

Theorem D.5 ($\mathcal{K}_{\mathrm{LoRA}}^{(\mathrm{SGD})}$ is likely not far from $\mathcal{K}^{(\mathrm{SGD})}$). Let $\mathcal{K}_{\mathrm{LoRA}}^{(\mathrm{SGD})} \in \mathbb{R}^{N \times N}$ and $\mathcal{K}^{(\mathrm{SGD})} \in \mathbb{R}^{N \times N}$ be defined as in Lemma D.4. Additionally, assume that $\| dh \|^2 \leq c$ and $\| x \|^2 \leq c$ for any $\xi$ in the downstream dataset. Then,

$$
\operatorname{Pr}\left[ \exists i, j \in [N], \; | \mathcal{K}_{LoRA}^{(SGD)}(i, j) - \mathcal{K}^{(SGD)}(i, j) | \geq c^2 \epsilon \right] \leq 4 N^2 \exp(-(\epsilon^2 - \epsilon^3) k / 4).
$$

Proof. By Lemma D.4,

$$
\begin{array}{l} \left| \mathcal{K}_{\text{LoRA}}^{(\text{SGD})}(i, j) - \mathcal{K}^{(\text{SGD})}(i, j) \right| = | \langle dh(i), dh(j) \rangle (\langle A x_i, A x_j \rangle - \langle x_i, x_j \rangle) | \\ \leq c | \langle A x_i, A x_j \rangle - \langle x_i, x_j \rangle |. \\ \end{array}
$$

The rest of the proof follows from Lemma D.3 and a union bound.

Remark D.6. Theorem D.5 shows that when $k \geq 20 c^4 \log N / \epsilon^2$, with high probability, the difference between the two kernels is smaller than $\epsilon$. Although Theorem D.5 focuses on a simple fully connected layer, the conclusion easily extends to the case where LoRA is applied $L$ times in the model, because the LoRA components are independent of each other:

$$
\operatorname{Pr}\left[ \exists i, j \in [N], \; | \mathcal{K}_{\mathrm{LoRA}}^{(\mathrm{SGD})}(i, j) - \mathcal{K}^{(\mathrm{SGD})}(i, j) | \geq L c^2 \epsilon \right] \leq 4 N^2 \exp(-L (\epsilon^2 - \epsilon^3) k / 4).
$$

The requirement of $k$ becomes $k \geq \Theta(L c^4 \log N / \epsilon^2)$.
+1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:d43c79791647ba54a9ad302c486a185ee11d90be9e46a76865d61a1e947ea8c9 +size 448527 diff --git a/akernelizedsteindiscrepancyforbiologicalsequences/412a2418-bcd2-47cb-bdb3-d03b20be00b2_origin.pdf b/akernelizedsteindiscrepancyforbiologicalsequences/412a2418-bcd2-47cb-bdb3-d03b20be00b2_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..0d2c3069b00306ec1c8b1425bccc0b4888689659 --- /dev/null +++ b/akernelizedsteindiscrepancyforbiologicalsequences/412a2418-bcd2-47cb-bdb3-d03b20be00b2_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:ddf9c9cb8f39f8414892a740772f2bd10fd39d7264ff211e50a2a072d87f3290 +size 2721882 diff --git a/akernelizedsteindiscrepancyforbiologicalsequences/full.md b/akernelizedsteindiscrepancyforbiologicalsequences/full.md new file mode 100644 index 0000000000000000000000000000000000000000..e747f68623a87256d151ee4d382acaef51558a47 --- /dev/null +++ b/akernelizedsteindiscrepancyforbiologicalsequences/full.md @@ -0,0 +1,1773 @@ +# A Kernelized Stein Discrepancy for Biological Sequences + +Alan N. Amin1 Eli N. Weinstein*2 Debora S. Marks*13 + +# Abstract + +Generative models of biological sequences are a powerful tool for learning from complex sequence data, predicting the effects of mutations, and designing novel biomolecules with desired properties. To evaluate generative models it is important to accurately measure differences between high-dimensional distributions. In this paper we propose the "KSD-B", a novel divergence measure for distributions over biological sequences that is based on the kernelized Stein discrepancy (KSD). The KSD-B can be evaluated even when the normalizing constant of the model is unknown; it allows for variable length sequences and can take into account biological notions of sequence distance. 
Unlike previous KSDs over discrete spaces, the KSD-B (a) is theoretically guaranteed to detect convergence and non-convergence of distributions over sequence space and (b) can be efficiently estimated in practice. We demonstrate the advantages of the KSD-B on problems with synthetic and real data, and apply it to measure the fit of state-of-the-art machine learning models. Overall, the KSD-B enables rigorous evaluation of generative biological sequence models, allowing the accuracy of models, sampling procedures, and library designs to be checked reliably.

# 1. Introduction

Generative models of biological sequences have wide and growing application in protein design, phylogenetic analysis, clinical human genetics, epidemiology and beyond (Hopf et al., 2017; Riesselman et al., 2018; Russ et al., 2020; Shin et al., 2021; Frazer et al., 2021; Davidsen et al., 2019; Weinstein et al., 2022b; Thadani et al., 2022; Meier et al., 2021; Madani et al., 2020; Alley et al., 2019; Marcou et al., 2018). A central challenge in working with generative biological sequence models, as for all generative models, is model evaluation. How accurate are density estimates and how realistic are sequences sampled from the model?

*Equal contribution. Eli N. Weinstein, Debora S. Marks.

Proceedings of the $40^{th}$ International Conference on Machine Learning, Honolulu, Hawaii, USA. PMLR 202, 2023. Copyright 2023 by the author(s).
Another evaluation strategy is to draw sequences from the model and test whether they match the data based on expertly chosen statistics or predictions of sequence properties (hydrophobicity, secondary structure, etc.). However, a generative model that passes such a test may still poorly fit the data outside of the chosen statistics. + +In this article, we address the model evaluation problem by developing a divergence to compare distributions over sequences. We call this divergence the "KSD-B", as it is based on the kernelized Stein discrepancy (KSD) but applies to biological sequences (B) (Chwialkowski et al., 2016; Liu et al., 2016). The KSD-B allows us to answer whether or not the distribution over sequences produced by a generative model, $p$ , equals the distribution of the data, $q$ . That is, it enables a goodness of fit test for biological sequences. The KSD-B provides an absolute rather than a relative measure of model quality, and, given enough data, it is able to detect any difference between $p$ and $q$ . + +KSDs compare distributions by (1) building a stochastic process that is stationary for $p$ , (2) applying it to $q$ , and (3) using a kernel to evaluate how much $q$ changes. Much emphasis has been placed previously on the study of stochastic processes and kernels over Euclidean space, i.e. $\mathbb{R}^d$ , such as diffusion processes, and Gaussian, inverse multiquadric or Matérn kernels. The KSD-B, by contrast, is defined over sequence space $S = \cup_{L=1}^{\infty} \mathcal{B}^L$ , the set of all finite length strings of an alphabet $\mathcal{B}$ , where $\mathcal{B}$ is nucleotides for DNA or amino acids for proteins (Amin et al., 2021; Weinstein, 2022). The KSD-B uses a stochastic process based on substitutions, insertions and deletions (the sorts of mutations often seen in biological sequences) and uses kernels such as alignment kernels and kmer spectrum kernels (which capture biological notions of sequence similarity). 
Despite the shift in setting, we build the stochastic process and kernel carefully so that the KSD-B shares with the original Euclidean KSD a number of desirable properties and theoretical guarantees. First, it faithfully measures whether $p$ and $q$ are equal for any $p$ and $q$ , i.e. $\mathrm{KSD - B}_p(q) = 0$ if and only if $p = q$ . This means the KSD-B can detect any difference between $p$ and $q$ . Second, the KSD-B detects convergence and non-convergence: $\mathrm{KSD - B}_p(q_n)$ converges to 0 as $n \to \infty$ if and only if $q_{n}$ converges to $p$ (Gorham & Mackey, 2017). This protects against big mistakes by the KSD-B in practice: if $\mathrm{KSD - B}_p(q)$ is very close to zero (say, within the noise of estimation), $q$ must at least be very similar to $p$ , not completely different. Third, $\mathrm{KSD - B}_p(q)$ can be computed using only unnormalized probabilities from $p$ and samples from $q$ . This means the KSD-B can be used even when the normalizing constant of the model is intractable, as in energy-based generative models. Finally, the KSD-B can be efficiently estimated in realistic settings.

As a computable divergence, the KSD-B is a broadly useful tool for more than evaluating model fit. For instance, it can be used to evaluate model samples. Often it is only possible to draw approximate samples from a posterior distribution, for instance in the context of semi-supervised protein design or ancestral sequence reconstruction. Since the KSD-B requires only unnormalized probabilities, and can detect convergence and non-convergence, it can be used to determine whether the distribution of approximate samples matches the model (Gorham & Mackey, 2017). The KSD-B can also be used to design libraries based on generative models.
Since the KSD-B can detect convergence and non-convergence, we can find a set of samples that are representative of $p$ by minimizing $\mathrm{KSD - B}_p(q)$ with respect to an empirical distribution of samples $q$ (Chen et al., 2018; 2010).

Sec. 2 provides background on kernelized Stein discrepancies. Sec. 3 introduces a novel class of discrete Stein discrepancies. Sec. 4 defines the KSD-B. Sec. 5 proves that it is faithful and detects both convergence and non-convergence. Sec. 6 describes how the KSD-B can be estimated efficiently. Sec. 7 develops kernels for the KSD-B. Sec. 8 demonstrates the KSD-B empirically. Sec. 9 concludes.

Related Work Computable Stein discrepancies were first developed for Euclidean space (Gorham & Mackey, 2015; Liu et al., 2016; Chwialkowski et al., 2016; Gorham & Mackey, 2017). There have been a number of generalizations to discrete space (Shi et al., 2022; Yang et al., 2018a; Han et al., 2020; Hodgkinson et al., 2020). However, these methods only apply to finite discrete spaces, and so, biologically, they are only appropriate in the special case where all sequences have the same length. Our method differs in that it applies to the infinite discrete setting of $S$ , and thus allows for arbitrary length variation.

During preparation of this manuscript, Baum et al. (2022) proposed a method similar to the KSD-B that is applicable to infinite discrete spaces such as $S$ . We go further by proving strong theoretical guarantees for the KSD-B, and
(2021) develop a Bayesian goodness of fit test (BEAR), but it depends on access to normalized likelihoods from the model and does not provide guarantees for convergence detection. Amin et al. (2023) develop a maximum mean discrepancy (MMD) two-sample test with guarantees for convergence detection, but it requires samples from the model. Even when these alternatives are applicable, we find empirically that the KSD-B can outperform both. + +# 2. Background + +The KSD-B builds on and extends the notion of a Stein discrepancy, which is a type of integral probability metric. + +Integral Probability Metrics (IPMs) IPMs are a general method for measuring the difference between two distributions $p$ and $q$ . They compute the maximum difference in expectation between $p$ and $q$ over a set of test functions $\mathcal{F}$ , i.e. $\sup_{f\in \mathcal{F}}|E_qf - E_pf|$ . Here, $E_{q}f = \sum_{X\in S}f(X)q(X)$ is the expectation with respect to $q$ of a function $f$ on sequences. Many popular divergences can be written as IPMs, e.g. choosing $\mathcal{F}$ to be the set of bounded functions gives the total variation distance. In general, the success of an IPM at detecting if $p\neq q$ depends on the size of $\mathcal{F}$ , with larger function families offering better discrimination. + +IPMs are an especially useful choice of discrepancy for biological sequence models, where the ultimate goal is often to synthesize and test samples from a model $p$ in the laboratory. In particular, let $f^{*}$ denote the mapping from sequence to phenotype of interest (protein stability, binding, etc.). In general, $f^{*}$ is not known before performing extensive experiments; however, if $\mathcal{F}$ is large, it can be reasonable to assume $f^{*} \in \mathcal{F}$ . Then, a small IPM guarantees that samples from $p$ will have similar phenotypes to samples from $q$ , as $|E_qf^* - E_pf^*| \leq \sup_{f \in \mathcal{F}} |E_qf - E_pf|$ (Weinstein et al., 2022b;a). 
This guarantees, for instance, that samples from a generative model have similar phenotypes to the natural sequences it was trained on.

Stein Discrepancies To evaluate an expectation such as $E_{p}f$ , one would typically require samples or normalized probabilities from $p$ . However, these are not always available, for instance if $p$ is an energy-based model, or the posterior of a semi-supervised model. The Stein discrepancy solves this problem by constructing a family of test functions $\mathcal{F}$ for which $E_{p}f$ is exactly zero, but which is still sufficiently large to detect any difference between $p$ and $q$ .

The basic idea is to use a continuous time Markov process with $p$ as its stationary distribution. In particular, consider a generator $\mathcal{L}_p$ , defined as $(\mathcal{L}_p f)(X) = \lim_{t \to 0} \frac{1}{t} (E[f(X^t)] - f(X))$ , where $X^t$ is the position of the Markov process initialized at $X$ after evolving for time $t$ (Barbour, 1990; Gorham et al., 2019; Shi et al., 2022). Now, $E_q \mathcal{L}_p f$ describes the amount that the expectation of $f$ changes as $q$ evolves under the Markov process. If $q \neq p$ , then evolving $q$ will change it to be closer to $p$ , so intuitively, if $\mathcal{F}$ is large enough, there must exist some function $f$ whose expectation changes, i.e. $\sup_{f \in \mathcal{F}} |E_q \mathcal{L}_p f| > 0$ . However, if $q = p$ , the expectation of $f$ will not change at all since $p$ is stationary, i.e. $\sup_f |E_p \mathcal{L}_p f| = 0$ . Thus if we set $\mathcal{F} = \mathcal{L}_p (\mathcal{F}')$ for a set of functions $\mathcal{F}'$ , the IPM becomes $\sup_{f \in \mathcal{F}} |E_q f - E_p f| = \sup_{f \in \mathcal{F}'} |E_q \mathcal{L}_p f|$ , so it no longer depends on an expectation with respect to $p$ , but can still detect if $q \neq p$ for $\mathcal{F}'$ sufficiently large. This IPM is a "Stein divergence", and $\mathcal{L}_p$ a "Stein operator".
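The Stein identity $E_p \mathcal{L}_p f = 0$ can be checked numerically on a toy finite state space (a minimal sketch; the four-state space, Metropolis-style rates, and all numbers below are illustrative assumptions, not from the paper):

```python
# A toy finite state space with target distribution p (illustrative numbers).
p = [0.1, 0.2, 0.3, 0.4]
n = len(p)

# Metropolis-style rates satisfy detailed balance w.r.t. p, so p is the
# stationary distribution of the continuous-time process with generator Q.
# Note Q only needs probability *ratios* p[y]/p[x]: no normalizing constant.
Q = [[min(1.0, p[y] / p[x]) if x != y else 0.0 for y in range(n)]
     for x in range(n)]
for x in range(n):
    Q[x][x] = -sum(Q[x])          # rows sum to zero

def L(f, x):                      # generator: (L_p f)(x) = sum_y Q[x][y] f(y)
    return sum(Q[x][y] * f[y] for y in range(n))

f = [1.0, 2.0, 3.0, 4.0]          # an arbitrary test function

# Stein identity: E_p[L_p f] = 0 exactly, for any f.
assert abs(sum(p[x] * L(f, x) for x in range(n))) < 1e-12

# For q != p the expectation is generically nonzero.
q = [0.4, 0.3, 0.2, 0.1]
assert abs(sum(q[x] * L(f, x) for x in range(n))) > 1e-3
```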
Markov chain Monte Carlo (MCMC) methods are a useful tool for building Stein operators, as they allow construction of a Markov process with stationary distribution $p$ even when the normalizing constant of $p$ is unknown.

Diffusion Stein Discrepancies On Euclidean space, a classic choice of operator is the Langevin Stein operator, which is derived from a Langevin sampler (Liu et al., 2016; Chwialkowski et al., 2016; Gorham & Mackey, 2017). The Langevin Stein operator is an instance of the diffusion Stein discrepancies, which are derived from Itô diffusions and have strong theoretical properties; in particular, they detect non-convergence (Gorham et al., 2019; Barp et al., 2019).

The properties of diffusion Stein discrepancies stem in part from a subtle but important extension of the discrepancy $\sup_{f\in \mathcal{F}}|E_q\mathcal{L}_pf|$ to a larger set of test functions. When the generator $\mathcal{L}_p$ comes from a diffusion, it can be written in the form $\mathcal{L}_p = \mathcal{T}_p\nabla$ , where $\mathcal{T}_p$ is another operator and $\nabla$ is the gradient operator. This gives a Stein discrepancy $\sup_{f\in \mathcal{F}}|E_q\mathcal{T}_p\nabla f| = \sup_{g\in \nabla \mathcal{F}}|E_q\mathcal{T}_pg|$ , i.e. we can think of the discrepancy as maximizing over the set of gradients of functions in $\mathcal{F}$ . Now, $\mathcal{F}$ is a set of functions from $\mathbb{R}^d$ to $\mathbb{R}$ , so $\nabla \mathcal{F}$ is a set of functions from $\mathbb{R}^d$ to $\mathbb{R}^d$ , i.e. each $g\in \nabla \mathcal{F}$ is a vector field rather than a scalar field. However, not all vector fields can be written as the gradient of a scalar field. One can therefore consider expanding the class of test functions to the larger set of all vector fields, $\mathcal{G}\supset \nabla \mathcal{F}$ , making the Stein discrepancy $\sup_{g\in \mathcal{G}}|E_q\mathcal{T}_pg|$ .
Since the set of test functions is larger, diffusion Stein discrepancies can more easily detect whether or not $q$ matches $p$ . Critically, the properties of Itô diffusions guarantee that we still have $\sup_{g\in \mathcal{G}}|E_p\mathcal{T}_pg| = 0$ (Gorham et al., 2019).

# 3. Discrete Vector Field Stein Discrepancies

In this section we describe Stein discrepancies for distributions on discrete spaces. (Our discussion in this section is not specific to sequence space $S$ ; it applies to any infinite discrete space.) On discrete spaces, diffusion Stein discrepancies are not applicable, since they depend on gradients. We introduce a novel approach to expanding the set of test functions on discrete spaces, which will be essential for guaranteeing that our new discrepancy, the KSD-B, detects non-convergence and convergence. The idea is analogous to diffusion Stein discrepancies, with finite differences replacing gradients, and the property of detailed balance replacing the condition that the Markov process is an Itô diffusion.

To construct a Stein operator for a discrete space $S$ , we first consider a generic continuous time Markov process (Shi et al., 2022). Let $T_{p,X\to Y}$ denote the transition rate of the process from sequence $X$ to sequence $Y$ . We assume this transition rate is zero except to a finite number of sequences $Y$ that are near $X$ , i.e. "mutants" of $X$ ; we write $YMX$ if $Y$ is a mutant of $X$ , where $M$ is a relation on $S$ . Then, the Stein operator is,

$$
(\mathcal{L}_p f)(X) = \sum_{Y \in S \,|\, YMX} T_{p, X \rightarrow Y}\left(f(Y) - f(X)\right),
$$

since in a continuous time Markov process, the transition rate out of $X$ must equal the total transition rate to other states, i.e. $T_{p,X\rightarrow X} = -\sum_{YMX}T_{p,X\rightarrow Y} = -\mathrm{flux}_p(X)$ .
We can think of the quantity $f(Y) - f(X)$ for $YMX$ as a discrete analogue of the gradient of $f$ , as it looks at the difference between the value of $f$ at adjacent points $Y$ and $X$ . We therefore define $\nabla f$ for any function $f$ on $S$ as $\nabla f(X,Y) = f(Y) - f(X)$ for $YMX$ , following Chow et al. (2018). Now, we can write $\mathcal{L}_p f$ in the form $\mathcal{T}_p\nabla f$ , so the discrete Stein discrepancy is $\sup_{f\in \mathcal{F}}|E_q\mathcal{T}_p\nabla f|$ .

Assume the Markov process also satisfies detailed balance, i.e. $T_{p,X\to Y}p(X) = T_{p,Y\to X}p(Y)$ . We can rearrange the Stein discrepancy (Proposition B.8), so $E_q\mathcal{T}_p\nabla f =$

$$
\frac{1}{2} \sum_{YMX} q(X)\, T_{p, Y \rightarrow X} \left(\frac{p(Y)}{p(X)} - \frac{q(Y)}{q(X)}\right) \nabla f(X, Y), \tag{1}
$$

where we have used the fact that $\nabla f(X,Y) = -\nabla f(Y,X)$ . This equation gives some intuition for the discrete Stein discrepancy: the term $p(Y) / p(X) - q(Y) / q(X)$ compares the likelihood ratios of $q$ and $p$ at nearby points $X$ and $Y$ rather than their likelihoods at a single point, and so does not depend on the normalizing constant of $p$ or $q$ . If $p = q$ , the difference in likelihood ratios is zero, so the entire equation is zero regardless of the value of $\nabla f$ .

We now expand the set of test functions. Instead of considering the Stein discrepancy $\sup_{f\in \mathcal{F}}|E_q\mathcal{T}_p\nabla f| = \sup_{g\in \nabla \mathcal{F}}|E_q\mathcal{T}_pg|$ , we consider $\sup_{g\in \mathcal{G}}|E_q\mathcal{T}_pg|$ , where $\mathcal{G}\supset \nabla \mathcal{F}$ is a set of functions from $S\times S$ to $\mathbb{R}$ which satisfy the anticommutative property $g(X,Y) = -g(Y,X)$ . We refer to such functions $g$ as "vector fields", following Chow et al. (2018). Thanks to the anticommutative property, Eqn. 1 still holds, with $\nabla f$ replaced by $g$ .
Using $\mathcal{G}$ instead of $\nabla \mathcal{F}$ , we can have, for example, test functions $g$ that are non-zero for just one edge $(X,Y)$ in $M$ ; this is impossible for gradients $\nabla f$ , and allows the Stein discrepancy to test for more subtle differences between $p$ and $q$ . Moreover, Eqn. 1 shows that despite expanding the class of functions, the Stein discrepancy is still zero at $p = q$ , i.e. $\sup_{g\in \mathcal{G}}|E_p\mathcal{T}_pg| = 0$ . + +We will refer to discrete Stein discrepancies that use vector fields, $\sup_{g\in \mathcal{G}}|E_q\mathcal{T}_p g|$ , as "vector field Stein discrepancies", in contrast to the "scalar field Stein discrepancies" $\sup_{f\in \mathcal{F}}|E_q\mathcal{T}_p\nabla f|$ that have been studied previously (Shi et al., 2022; Baum et al., 2022). We will find that vector field Stein discrepancies have distinct theoretical and practical advantages over scalar field Stein discrepancies in the context of biological sequence data. + +# 4. A Stein Discrepancy for Biological Sequences + +In this section we define the KSD-B, a kernelized vector field Stein discrepancy for biological sequences. + +Mutation We start by considering the relation $M$ , that is, what pairs of sequences should be considered nearby. Typically in biology, two sequences are considered similar if they differ by a small number of mutations. We therefore say $YMX$ if $Y$ differs from $X$ by a single mutation, either a substitution (changing a single letter of $X$ ), insertion (adding a single letter anywhere in $X$ ) or deletion (removing a single letter anywhere in $X$ ). This choice of $M$ ensures that there are only a relatively small number of neighbors of each sequence, but one can still reach any point in $S$ from any other by jumping from neighbor to neighbor. 
Zanella Stein Operator To construct the Markov process over sequences, we use the framework of locally informed proposals, which is a general strategy for building Markov chain Monte Carlo methods on discrete spaces that satisfy detailed balance (Zanella, 2020; Shi et al., 2022). Consider any continuous non-negative, non-decreasing, and non-zero function $\chi$ that satisfies $\chi(t) = t\chi(1/t)$ for all $t > 0$ and $\chi(0) = 0$ ; examples are $\chi(t) = \sqrt{t}$ and $\chi(t) = \min\{t, 1\}$ . For any $YMX$ , define the transition rate from $X$ to $Y$ , $T_{p,X \to Y}$ , as

$$
\chi\left(\frac{p(Y)}{p(X)}\right) \times \#\{\text{single mutations taking } X \text{ to } Y\}, \tag{2}
$$

with $T_{p,X\to Y} = \infty$ if $p(X) = 0$ . The first term depends on the difference in probability under $p$ of $X$ and $Y$ ; in the case $\chi (t) = \min \{t,1\}$ , we can recognize it as the Metropolis-Hastings-Rosenbluth correction. The second term accounts for variation in sequence length, which creates situations where different mutations to $X$ can lead to the same $Y$ ; for example, if $X = AA$ , we can reach $Y = A$ by deleting either the first or the second position of $X$ , so $\# \{\text{single mutations taking } X \text{ to } Y\} = 2$ . With this construction, the Markov process satisfies detailed balance, i.e. $T_{p,X\to Y}p(X) = T_{p,Y\to X}p(Y)$ where we define $\infty \times 0 = 0$ throughout. The resulting Stein operator $\mathcal{T}_p\nabla$ is called the "Zanella Stein operator" (Hodgkinson et al., 2020). With this operator, we can define not only a scalar field Stein discrepancy but also a vector field Stein discrepancy, $\sup_{g\in \mathcal{G}}|E_q\mathcal{T}_pg|$ .

Kernelization To make the discrepancy tractable and ensure $\mathcal{G}$ is sufficiently large to detect when $q \neq p$ , we turn to reproducing kernel Hilbert spaces (RKHSs).
As with the original kernelized Stein discrepancy, we let the set of test functions $\mathcal{G}$ be a unit ball in an RKHS $\mathcal{H}_k$ with kernel $k$ , i.e. $\mathcal{G} = \{g: \| g \|_k \leq 1\}$ where $\| \cdot \|_k$ is the norm on $\mathcal{H}_k$ (Gorham & Mackey, 2017; Liu et al., 2016). To ensure that $\mathcal{H}_k$ only contains vector fields, i.e. functions $g: S \times S \to \mathbb{R}$ that satisfy anticommutativity, it is sufficient to use a kernel that also satisfies anticommutativity,

$$
k\left((X, Y), (X', Y')\right) = -k\left((Y, X), (X', Y')\right). \tag{3}
$$

Note that, since kernels are symmetric, the analogous equation holds flipping $X', Y'$ . We will see in Sec. 7 and Appx. B.4.3 how to construct vector field kernels that capture biological notions of sequence similarity.

Starting from a kernel $k: S \times S \to \mathbb{R}$ describing scalar fields on $S$ , one can derive a kernel $k^{\nabla}$ satisfying Eqn. 3 such that the functions in $\mathcal{H}_{k^{\nabla}}$ are the gradients of the functions in $\mathcal{H}_k$ (Appx. B.3.2). We can thus obtain kernelized scalar field Stein discrepancies as a special case of kernelized vector field Stein discrepancies, using a "scalar field kernel" instead of a more general "vector field kernel".

KSD-B The KSD-B is defined as $\mathrm{KSD - B}_{p,k}(q) = \sup_{\| g\| _k\leq 1}|E_q\mathcal{T}_pg|$ . We can compute the supremum analytically (proof in Appx. B.3.1).

Proposition 4.1. Say $k$ is a vector field kernel and $q$ is a $p, k$ -integrable distribution on $S$ , meaning $E_{X \sim q} \sum_{YMX} T_{p, Y \rightarrow X} \sqrt{k((X, Y), (X, Y))} < \infty$ . Now, $\mathrm{KSD - B}_{p,k}(q)^2 = (\sup_{\| g \|_k \leq 1} |E_q \mathcal{T}_p g|)^2$ is equal to

$$
E_{X,X'\sim q}\sum_{YMX,\,Y'MX'}T_{p,X\to Y}\,T_{p,X'\to Y'}\,k((X,Y),(X',Y')).
\tag{4}
$$

If $p$ is $p, k$ -integrable, then for all $g \in \mathcal{H}_k$ , $E_p \mathcal{T}_p g = 0$ .

# 5. Detecting Convergence and Non-convergence

In this section we establish the key theoretical properties of the KSD-B that make it useful for goodness of fit tests, evaluating sample quality, and other applications. Our proofs are inspired by those for Euclidean space KSDs in Gorham & Mackey (2017); Gorham et al. (2019).

KSD-B is Faithful For the KSD-B to be useful as a measure of goodness of fit, it must be able to detect if a model distribution $p$ matches a data distribution $q$ . To do so, the divergence must be faithful: $\mathrm{KSD - B}_{p,k}(q) = 0 \iff p = q$ . Faithfulness holds if the set of test functions is sufficiently large. For KSDs on Euclidean spaces, faithfulness is usually guaranteed by using a kernel that is universal, meaning $\mathcal{H}_k$ is dense in some function space (such as $L^p$ space).

Over a discrete space such as $S$ , we can use a set of test functions that is, in some sense, even larger. More precisely, there are kernels on $S$ (but not $\mathbb{R}^d$ ) that have discrete masses, meaning their RKHS $\mathcal{H}_k$ includes delta functions (Def. B.11; Jorgensen & Tian, 2015). Kernels with discrete masses are always universal, but not the other way around (Amin et al., 2023). If we use a kernel with discrete masses, the KSD-B is faithful (proof in Appx. B.3.4).

Proposition 5.1. Assume the support of $p$ is connected, i.e. $\operatorname{supp}(p)$ is a connected set in the graph with vertices $S$ and edges $M$ . If either (a) $k$ is a vector field kernel with discrete masses or (b) $k$ is a scalar field kernel with discrete masses on $S$ and $E_{q} \sum_{YMX} T_{p,Y \to X} < \infty$ , then $\mathrm{KSD - B}_{p,k}(q) = 0$ if and only if $p = q$ .
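Props. 4.1 and 5.1 can be illustrated with an exact toy computation (a sketch under strong simplifying assumptions: substitutions only, fixed-length strings over a two-letter alphabet so the space is finite and connected, Zanella rates with $\chi(t)=\sqrt{t}$, and a simple antisymmetrized delta kernel; this is not the paper's full construction):

```python
import itertools, math

# Toy state space: length-2 strings over {A, C}, substitutions only.
STATES = ["".join(s) for s in itertools.product("AC", repeat=2)]

def neighbors(x):
    return [x[:i] + b + x[i + 1:] for i in range(len(x)) for b in "AC" if b != x[i]]

chi = math.sqrt  # chi(t) = sqrt(t) satisfies chi(t) = t * chi(1/t)

def ksd2(p, q):
    """Exact KSD-B^2 (Eqn. 4) with Zanella rates T_{p,X->Y} = chi(p(Y)/p(X))
    and the antisymmetrized delta kernel
    k((X,Y),(X',Y')) = 1[(X,Y)=(X',Y')] - 1[(X,Y)=(Y',X')].
    With w(X,Y) = q(X) T_{p,X->Y}, the double sum in Eqn. 4 collapses to
    (1/2) * sum over directed edges of (w(X,Y) - w(Y,X))^2."""
    w = {(x, y): q[x] * chi(p[y] / p[x]) for x in STATES for y in neighbors(x)}
    return 0.5 * sum((w[(x, y)] - w[(y, x)]) ** 2 for (x, y) in w)

p = {"AA": 0.4, "AC": 0.3, "CA": 0.2, "CC": 0.1}
q = {"AA": 0.1, "AC": 0.2, "CA": 0.3, "CC": 0.4}

# Detailed balance makes w(X,Y) = sqrt(q(X) p(Y) ...) symmetric at q = p,
# so the discrepancy vanishes; it is strictly positive when q != p.
assert ksd2(p, p) < 1e-12
assert ksd2(p, q) > 1e-3
```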
+ +KSD-B Detects Convergence and Non-convergence Now, rather than consider a fixed distribution $q$ , we consider a sequence of distributions $q_{1}, q_{2}, \ldots$ . We are interested in showing the KSD-B can detect convergence and non-convergence of $q_{n}$ to $p$ , meaning $\mathrm{KSD - B}_{p,k}(q_n) \to 0$ if and only if $q_{n}$ converges to $p$ in distribution, or in some closely related metric. This is useful for goodness of fit testing, because it says that if $\mathrm{KSD - B}_{p,k}(q)$ is close to but not exactly zero, the difference between $q$ and $p$ cannot be very large, suggesting that we are unlikely to make a big mistake if our estimate of $\mathrm{KSD - B}_{p,k}(q)$ is slightly off in practice. It is also useful for evaluating sample quality; in this case, $q_{n}$ corresponds to the empirical distribution of $n$ samples, and we are interested in whether the sample distribution converges to $p$ (Gorham & Mackey, 2017). Another setting in which detecting convergence and non-convergence is important is when we are optimizing $q$ to match $p$ using $\mathrm{KSD - B}_{p,k}(q)$ as the objective. In this case, we want lower values of $\mathrm{KSD - B}_{p,k}(q_{n})$ to correspond to distributions $q_{n}$ closer to the target $p$ . + +We first show that the KSD-B detects non-convergence, i.e. $\mathrm{KSD - B}_{p,k}(q_n)\to 0$ implies $q_{n}\rightarrow p$ in distribution. In the Euclidean case, KSDs based on diffusions can only detect non-convergence if the stochastic process converges quickly enough, which occurs if $p$ is not heavy tailed (Gorham et al., 2019; Gorham & Mackey, 2017). In sequence space, the "tail" of a distribution describes how it depends on sequence length (Amin et al., 2021). For the KSD-B to detect nonconvergence, we also need to control the tail of $p$ ; Prop. B.15 shows what can go wrong otherwise. We assume that $p$ has uniformly quickly decreasing tails (Asm. B.3). 
By constructing a Lyapunov function for stochastic processes on sequences, we show this implies sufficiently fast convergence of the stochastic process $\mathcal{L}_p$ (Thm. B.4). Our tail assumption is reasonable for many generative models of sequences trained on real data sets; for example, we prove that profile HMMs, a widely used model of protein domains, always satisfy the assumption, regardless of the data they are trained on (for $\chi (t) = t\wedge 1$ ; Sec. B.2.5). Note the second term in Eqn. 2 is crucial for ensuring our tail assumption implies fast convergence of $\mathcal{L}_p$ . + +Another important condition needed in the Euclidean case is that the kernel $k$ is heavy tailed (Gorham & Mackey, 2017). In Props. B.16 and B.17 we show that the KSD-B, too, may fail to detect non-convergence if we allow $k$ to have thin + +tails. The crucial issue is that to detect non-convergence of distributions $q_{n}$ that become more and more spread out as $n\to \infty$ , there has to exist a test function $\mathcal{T}_p g$ , for $g\in \mathcal{H}_k$ , that has thick tails. To guarantee this, we assume $k$ has heavy tails (Appx. B.4.3). + +Theorem 5.2. Say $p$ is a distribution on $S$ obeying Asm. B.3 and $k$ is either a vector field kernel with discrete masses obeying Asm. B.18A or a scalar field kernel with discrete masses obeying Asm. B.18B. Say $(q_n)_n$ is a sequence of distributions on $S$ . If $\mathrm{KSD - B}_{p,k}(q_n) \to 0$ then $q_n$ converges to $p$ in distribution. + +Proof in Appx. B.3.5. Next, we show that the KSD-B detects convergence, i.e. if $q_{n} \to p$ then $\mathrm{KSD - B}_{p,k}(q_n) \to 0$ (proof in Appx. B.3.5). In the Euclidean case, the KSD is guaranteed to detect convergence in a reweighted Wasserstein metric (Gorham & Mackey, 2017). We show that the KSD-B detects convergence in a reweighted total variation metric. + +Proposition 5.3. Say $k$ is a vector field kernel and $p, q_1, q_2, \ldots$ are $p, k$ -integrable distributions on $S$ . 
Call $A(X) = \sum_{YMX} T_{p,Y \to X} \sqrt{k((X,Y), (X,Y))}$ . If $\sum_X |p(X) - q_n(X)|A(X) \to 0$ then $\mathrm{KSD - B}_{p,k}(q_n) \to 0$ .

Scalar Field Case If we use a scalar field kernel, rather than a vector field kernel, we can still guarantee detection of non-convergence, but only if we assume that the kernel is unbounded (Asm. B.18B), implying that $k(X,X)$ grows arbitrarily large as the length of $X$ increases (Prop. B.16). This in turn implies that the discrepancy will consider longer sequences arbitrarily more important than shorter sequences when judging the similarity between $p$ and $q$ . Biologically, this judgment rarely makes sense. It is also contrary to the common practice of normalizing kernels so that $k(X,X) = 1$ for all $X\in S$ (Saigo et al., 2004).

Scalar field discrepancies can also detect convergence, but with an unbounded kernel, they cannot do so very well. In particular, Prop. 5.3 still holds (since scalar field kernels are a special case of vector field kernels), but $A(X)$ is now unbounded. It is thus harder to achieve $\sum_{X} |p(X) - q_{n}(X)|A(X) \to 0$ , and so there are fewer cases in which the discrepancy can detect convergence.

# 6. Approximating the KSD-B

In this section we develop an efficient stochastic approximation to the KSD-B, improving its ability to scale to long sequences. Our approach centers on reducing the cost of evaluating each of the terms,

$$
\sum_{YMX,\, Y'MX'} T_{p, X \rightarrow Y}\, T_{p, X' \rightarrow Y'}\, k((X, Y), (X', Y')), \tag{5}
$$

which appear inside the expectation in Eqn. 4. Evaluating these terms exactly is expensive for longer sequences, since the number of possible single mutations of $X$ , that is $|\{Y \in S \,|\, YMX\}|$ , scales linearly with sequence length $|X|$ .

Baum et al. (2022) study a discrepancy that is essentially, in our terminology, a KSD-B with a scalar field kernel.
They propose to reduce the computational cost of Eqn. 5 by reducing the size of the neighborhood around each sequence, shrinking $M$ to $M_{(\tau)}$ where $\tau$ is a parameter controlling the size of the graph. For example, they have $YM_{(\tau)}X$ only if $Y$ differs from $X$ by a single substitution, with only substitutions within a distance $\tau$ allowed; distance between letters in $\mathcal{B}$ is measured by assigning each letter to an integer between 1 and $|\mathcal{B}|$, then computing their difference modulo $|\mathcal{B}|$. This approach has weaknesses in that (a) removing connections between sequences of different lengths results in a discrepancy that is no longer faithful, and (b) biologically, there is no canonical ordering of amino acids or nucleotides. + +Instead of shrinking neighborhoods deterministically, we approximate Eqn. 5 stochastically, by sampling mutants of $X$. We do so by taking a single step of a discrete time Markov process initialized at $X$, with transition matrix $K$, where $K_{X\rightarrow Y} = T_{p,X\rightarrow Y} / \mathrm{flux}_p(X)$ if $X\neq Y$ and $K_{X\rightarrow Y} = 0$ if $X = Y$ (Appx. B.2). + +Proposition 6.1. Let $p$ be a distribution on $S$, and $(q_n)_n$ a sequence of distributions on $S$ with $\sup_n E_{q_n} \, \mathrm{flux}_p < \infty$. Say $k$ is a bounded vector field kernel. Let $(N_{n,X})_{n,X \in S}$ be a set of numbers. For each $X,n$, let $(Y_{X,m}^n)_{m=1}^{N_{n,X}}$ be a set of iid samples, each drawn by taking a single step of a Markov chain with the transition matrix $K_{X \to Y}$ initialized at $X$. Define the approximation $\widehat{\mathrm{KSD - B}}_{p,k}^n(q_n)^2$ as + +$$ +E_{X, X' \sim q_n}\Big[\mathrm{flux}_p(X)\, \mathrm{flux}_p(X') \times \frac{1}{N_{n,X} N_{n,X'}} \sum_{m, m'} k\big((X, Y_{X,m}^n), (X', Y_{X',m'}^n)\big)\Big]. +$$ + +If $N_{n,X} / (\log(n) + |X|) \to \infty$ then almost surely + +$$ +\left| \mathrm{KSD - B}_{p,k}(q_n) - \widehat{\mathrm{KSD - B}}_{p,k}^n(q_n) \right| \rightarrow 0. +$$ + +The proof is in Appx. B.3.7 and is roughly based on the use of a sub-Gaussian concentration inequality as in Thm. 4 of Gorham et al. (2020). The result shows that we can accurately approximate the KSD-B by sub-sampling mutants. Note it requires that the kernel is bounded, which is impossible for scalar field kernels that detect non-convergence. Thus, a further advantage of vector field over scalar field kernels is access to a good approximation of the KSD-B. + +In summary, there are two computationally intensive steps in approximating the KSD-B. Given a fixed $N_{n}$ and maximum sequence length $L$, we need to (1) calculate the likelihood under $p$ of all mutational neighbours of all $n$ sequences and (2) perform $(n \times N_{n})^{2}$ evaluations of $k$. So, in principle, the computational cost scales as $O(n \times L \times [p \text{ scaling with } L] + n^{2} \times N_{n}^{2} \times [k \text{ scaling with } L])$. The second term usually dominates. + +# 7. Kernels for the KSD-B + +In this section we describe kernels for the KSD-B. The challenge is to construct kernels that simultaneously satisfy the requirements of our theoretical guarantees and capture biological notions of sequence similarity. Common approaches to measuring biological sequence similarity include (1) comparing sequences position by position (e.g. Hamming kernels), (2) comparing sequences based on pairwise alignments (e.g. alignment kernels), (3) comparing sequences based on their kmer content (e.g. kmer spectrum kernels), and (4) comparing sequences using learned embeddings into Euclidean space. Amin et al. (2023) develop kernels on $S$ that use these biological notions of sequence similarity but are also highly flexible, having discrete masses.
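As a concrete reference point, the first three notions of similarity can be sketched in a few lines. These are illustrative helpers only, not the kernels of Amin et al. (2023):

```python
from collections import Counter

def hamming_distance(x: str, y: str) -> int:
    # Position-by-position mismatch count; defined here only for
    # equal-length sequences.
    if len(x) != len(y):
        raise ValueError("Hamming distance requires equal-length sequences")
    return sum(a != b for a, b in zip(x, y))

def kmer_counts(x: str, k: int) -> Counter:
    # Multiset of length-k substrings (kmers) of x.
    return Counter(x[i:i + k] for i in range(len(x) - k + 1))

def kmer_dot(x: str, y: str, k: int) -> int:
    # Unnormalized kmer-spectrum similarity: inner product of kmer count vectors.
    cx, cy = kmer_counts(x, k), kmer_counts(y, k)
    return sum(cx[v] * cy[v] for v in cx)

print(hamming_distance("ACGT", "AGGT"))  # 1
print(kmer_dot("ACGA", "CGTT", 2))       # only the 2-mer "CG" is shared -> 1
```

Pairwise-alignment similarity (approach 2) is omitted here since it requires a full dynamic program; the kernels below build on these primitives.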
+ +To build kernels for the KSD-B, we first extend these scalar field kernels to vector fields, while preserving the discrete mass property. General techniques for doing so are developed in Proposition B.34; here we introduce two concrete examples, one based on Hamming distance and the other based on pairwise alignment distance. To define the kernels, we give their value for just one ordering of each pair of sequences related by $M$. More precisely, let $\sigma$ be a function that takes every pair of sequences $(X,Y)$ satisfying $XMY$ to $\{-1,1\}$, with the restriction that $\sigma(X,Y) = -\sigma(Y,X)$ and $\sigma(X,Y) = 1$ if $|X| > |Y|$. Define $M^{\sigma} = \{(X,Y) \in M \mid \sigma(X,Y) = 1\}$. In Prop. B.33 we show that any kernel $k$ defined on $M^{\sigma}$ can be uniquely extended to a vector field kernel on $M$ by applying the anticommutativity property (Eqn. 3). Now, let $d_{H}(X,Y)$ be the Hamming distance between $X$ and $Y$. The exponential Hamming kernel is $\exp(-\lambda d_{H}(X,Y))$ with $\lambda > 0$; it has discrete masses by Thm. 21 of Amin et al. (2023). Now, the exponential Hamming vector field kernel (Exp-H) is, + +$$ +\left(\exp(-\lambda d_{H}(X, X')) + \exp(-\lambda d_{H}(Y, Y'))\right)^{2}, +$$ + +for $(X,Y),(X^{\prime},Y^{\prime})\in M^{\sigma}$. Similarly, if $k_{\mathrm{ali}}$ is an alignment kernel with discrete masses (Amin et al., 2023, Thm. 23), we can construct the vector field alignment kernel (Ali), + +$$ +\left(r^{|X|} k_{\mathrm{ali}}(X, X')\, r^{|X'|} + r^{|Y|} k_{\mathrm{ali}}(Y, Y')\, r^{|Y'|}\right)^{2} +$$ + +for $r > 0$ sufficiently small. The Ali and Exp-H kernels have discrete masses, guaranteeing the KSD-B will be faithful (Prop. 5.1).
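The construction above (define the kernel on the oriented edges $M^{\sigma}$, then extend to all of $M$ by anticommutativity) can be sketched as follows. The gap-padding in `d_h` and the lexicographic tie-breaking in `sigma` are illustrative assumptions, not the paper's definitions:

```python
import math

def d_h(x, y):
    # Hamming-style distance; unequal lengths handled by padding with a gap
    # symbol (an illustrative assumption, not the paper's definition of d_H).
    n = max(len(x), len(y))
    x, y = x.ljust(n, "-"), y.ljust(n, "-")
    return sum(a != b for a, b in zip(x, y))

def sigma(x, y):
    # Edge orientation: sigma(X, Y) = -sigma(Y, X), and sigma(X, Y) = 1 when
    # |X| > |Y|; equal-length ties broken lexicographically (a hypothetical
    # but valid choice).
    if len(x) != len(y):
        return 1 if len(x) > len(y) else -1
    return 1 if x > y else -1

def exp_h_oriented(e1, e2, lam=1.0):
    # Exp-H evaluated on oriented edges (X, Y), (X', Y') in M^sigma.
    (x, y), (x2, y2) = e1, e2
    return (math.exp(-lam * d_h(x, x2)) + math.exp(-lam * d_h(y, y2))) ** 2

def exp_h(e1, e2, lam=1.0):
    # Unique extension to all of M via anticommutativity in each argument.
    (x, y), (x2, y2) = e1, e2
    s = sigma(x, y) * sigma(x2, y2)
    if sigma(x, y) < 0:
        x, y = y, x
    if sigma(x2, y2) < 0:
        x2, y2 = y2, x2
    return s * exp_h_oriented((x, y), (x2, y2), lam)

# Anticommutativity: flipping one edge's orientation flips the sign.
e1, e2 = ("ACG", "ACT"), ("ACG", "CCG")
print(exp_h(e1, e2) == -exp_h((e1[1], e1[0]), e2))  # True
```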
+ +Discrete masses are not sufficient, however, to guarantee the kernel can be used to detect non-convergence; for this we need kernels that are, roughly speaking, heavy tailed. We consider kernels $k((X,Y),(X',Y'))$ of the form, + +$$ +\check{k}(Y, Y')\, \mathbb{1}(|X| \geq |Y|)\, \mathbb{1}(|X'| \geq |Y'|) +$$ + +for $(X,Y),(X',Y')\in M^{\sigma}$. Setting $\check{k}(Y,Y') = (C + d_H(Y,Y'))^{-\beta}$ gives an inverse multiquadric Hamming vector field kernel (IMQ-H). It is heavy tailed in the sense that it decays as a power law of Hamming distance, rather than exponentially. We also consider setting $\check{k}(X,X^{\prime})$ to + +$$ +|X|^{-3/2} \left(\sum_{V \in S} \#\{V \text{ in } X\}\, \#\{V \text{ in } X'\}\right) |X'|^{-3/2}, +$$ + +where $\#\{V$ in $X\}$ is the number of occurrences of the substring (kmer) $V$ in $X$. This gives an infinite kmer spectrum vector field kernel (ISK), which decays as a power law of sequence length. + +By adding together a kernel with discrete mass $k_{\delta}$ and one with heavy tails $k_{\mathrm{HT}}$ we can build a kernel $k_{\delta} + k_{\mathrm{HT}}$ that meets the requirements of Thm. 5.2. In particular, Prop. B.39 shows that if $p$ is a pHMM and $\chi(t) = \min\{t,1\}$, we can add either of the discrete mass kernels (Exp-H or Ali) to either of the heavy tail kernels (IMQ-H or ISK) and satisfy all the assumptions of Thm. 5.2. + +**Embedding Kernels** Another approach to constructing kernels for biological sequences is to leverage representations. Consider in general an embedding function $F: S \to \mathbb{R}^{D}$ that maps sequences to a low-dimensional Euclidean space. For example, $F$ may come from a deep generative model trained on a large data set of biological sequences; we use UniRep64 (Alley et al., 2019).
We can apply a Euclidean kernel $k_{E}$ to the embedding space to build a scalar field kernel: for $X, Y \in S$, $k_{F,\mathrm{Emb}}(X,Y) = k_{E}(F(X),F(Y))$ (Yang et al., 2018b; Amin et al., 2021). This approach allows for learned, rather than hand-crafted, notions of sequence similarity. It also allows for fast evaluation of the KSD-B. The cost of using the Hamming kernel scales as $O(n^{2} \times N_{n}^{2} \times L)$, and that of the alignment kernel as $O(n^{2} \times N_{n}^{2} \times L^{2})$. With an embedding kernel, using an $F$ defined by an autoregressive sequence model, we can (1) embed all sequences and their mutants in $O(n \times N_{n} \times L)$ time (since the cost of evaluating $F$ scales linearly with $L$), and then (2) calculate the kernel $k_{E}$ applied to the embeddings in $O(n^{2} \times N_{n}^{2})$ time, as the embedding space is independent of $L$. Though embedding kernels are less theoretically tractable than the other kernels we have considered, our theory can nonetheless help guide their design. First, we extend scalar field embedding kernels to vector fields. Then, we construct an embedding kernel that is likely to have discrete masses by rescaling $F$, following the recommendation of Amin et al. (2023). We then add to it an embedding kernel that uses a heavy tailed Euclidean kernel (Appx. C.10). + +**Scalar Field Kernels** We will compare our vector field kernels to scalar field alternatives. We develop unbounded versions of the inverse multiquadric Hamming kernel, IMQ-H (U), alignment kernel, Ali (U), and infinite kmer spectrum kernel, ISK (U), which have discrete masses and are guaranteed to detect non-convergence (Prop. B.38). We also consider scalar field embedding kernels that are likely to have discrete masses. + +![](images/85fac38daa985d16856cbcb920ad1b5ad466d93ad98f83b037bc381bb6c9fa50.jpg) +(a) $q_{m,n}\not\to p$ +Figure 1.
Detecting Convergence and Non-convergence Vector field kernels are shown in turquoise, unbounded scalar field kernels in blue, and bounded scalar field kernels in red. The y-axis gives the KSD-B, normalized to its value at $q_{1,6}$ or $q_{3}$. + +![](images/7d2ae9a83a89bafdfb1f0cc3ef236bc0d8b47b9c7fea9dad9c81336e9eeeb052.jpg) +(b) $q_{n}\rightarrow p$ + +# 8. Empirical Results + +In this section we examine the empirical performance of the KSD-B on simulated and real data. Details are in Appx. C. + +**Detecting Convergence and Non-convergence** We first illustrate our theoretical results on detecting convergence and non-convergence. To start, we consider a simple example model with a single letter in the alphabet, $|\mathcal{B}| = 1$ and $p(X) \propto e^{-|X|}$. We consider a sequence of distributions defined by $q_{m,n}(X) \propto |X|^{-1} \mathbb{1}(m \leq |X| < n)$. As $m, n \to \infty$, $q_{m,n}$ does not converge to $p$. In line with our theoretical results (Thm. 5.2), the KSD-B with vector field kernels does not converge to zero, nor does the KSD-B with unbounded scalar field kernels (Fig. 1(a)). However, if we use bounded versions of the scalar field kernels, normalized to have $k(X, X) = 1$ for all $X$ (IMQ-H (N), Ali (N), and ISK (N)), we find that the KSD-B fails to detect non-convergence (Prop. B.16). + +Next, consider the heavy tailed distribution $p(X) \propto |X|^{-1.4}$, and $q_{n}(X) \propto |X|^{-1.4}\mathbb{1}(|X| \leq n)$. Now, $q_{n}$ converges to $p$ as $n \to \infty$. In line with our theoretical results (Prop. 5.3), the KSD-B with vector field kernels converges to zero, as does the KSD-B with bounded scalar field kernels (Fig. 1(b)). However, with an unbounded scalar field kernel, it does not.
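One way to see numerically why $q_{m,n}$ fails to approach $p$ in the first toy example: its mass moves onto ever-longer sequences, so its total variation distance to $p$ approaches the maximum value of 1. A minimal sketch, truncating sequence length at 500 (an arbitrary choice):

```python
import numpy as np

L_MAX = 500  # truncate the single-letter sequence space (arbitrary cutoff)
lengths = np.arange(1, L_MAX + 1)

normalize = lambda w: w / w.sum()
p = normalize(np.exp(-lengths.astype(float)))  # p(X) proportional to e^{-|X|}

def q(m, n):
    # q_{m,n}(X) proportional to |X|^{-1} on m <= |X| < n, zero elsewhere.
    w = np.where((lengths >= m) & (lengths < n), 1.0 / lengths, 0.0)
    return normalize(w)

tv = lambda a, b: 0.5 * np.abs(a - b).sum()  # total variation distance

# As m, n grow, the overlap with p vanishes and the TV distance tends to 1.
for m, n in [(1, 6), (10, 60), (100, 600)]:
    print((m, n), round(tv(p, q(m, n)), 4))
```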
In short, vector field Stein discrepancies enable detection of both convergence and non-convergence, while bounded scalar field discrepancies cannot reliably detect non-convergence, and unbounded scalar field discrepancies cannot reliably detect convergence. + +**Goodness of Fit Testing** In this section we evaluate the ability of the KSD-B to detect mismatches between models and data. We start with a generative biological sequence model $p$, then perturb its parameters to form $q$, and draw samples; we then evaluate the goodness of fit of $p$ on the samples from $q$, for differing perturbation strengths. We are interested in how well the KSD-B can detect small perturbations. To construct a hypothesis test, we bootstrap the KSD-B as in Liu et al. (2016), and set a significance threshold of 0.1. In the following examples, we use the DNA alphabet and $N_{n} = 20$ samples in the KSD-B approximation. We draw 100 samples from $q$ to form each data set, and report the mean and standard error of the test's rejection rate (power) across independent samples of the entire data set. We compare tests based on a vector field kernel, IMQ-H+Exp-H (labelled vf KSD-B in Fig. 2), unbounded scalar field kernel, IMQ-H (U) (sf (U) KSD-B), and normalized scalar field kernel, IMQ-H (N) (sf (N) KSD-B). Where possible, we also compare to an MMD two-sample test with the IMQ-H (N) kernel (MMD), which requires samples from $p$ (Gretton et al., 2012; Amin et al., 2023), and a nonparametric Bayesian goodness of fit test (BEAR), which requires normalized likelihoods for $p$ (Amin et al., 2021). In principle, the KSD-B test is most powerful when the "slopes" of $p$ and $q$ differ (Eqn. 1), while the MMD and BEAR tests focus on differences in the likelihood of $p$ and $q$.
We expect the KSD-B and MMD tests to be powerful when the differences between $p$ and $q$ lead to large changes in the chosen kernel, while we expect the BEAR test to be powerful when the differences between $p$ and $q$ lead to large changes in the parameters of an autoregressive model fit to each distribution. $^{1}$ + +First, we consider testing profile hidden Markov models (pHMMs). We let $p$ be a pHMM with latent sequence length 20, and with a high probability of generating the letter $C$ at position 5. We then perturb the model by decreasing the probability of the $C$ at position 5. The KSD-B test should easily detect the presence of sequences which are one mutation away from a much more likely sequence. We see the KSD-B indeed has high power to detect this perturbation, as compared to the MMD and BEAR tests (Fig. 2(a)). + +Next we consider a Potts model with a mutational emission (MuE) distribution (Marks et al., 2011; Weinstein & Marks, 2021). Potts models are often used for evolutionary protein families, and have been applied to design novel proteins and predict 3D structure and mutational effects. The MuE adds insertions and deletions to samples from the Potts model. We set the Potts model length to 15. We start with no interaction energies between sites in the Potts model, and perturb by adding stronger and stronger interactions. Here, we find the MMD test outperforms the KSD-B test, which in turn outperforms the BEAR test (Fig. 2(b)). + +Next we examine an autoregressive model; such models have been used, for example, to design novel proteins (Shin et al., 2021; Amin et al., 2021). We set $p$ to be a linear autoregressive model of lag 2 which generates sequences that range in length from 1 to 60. We perturb $p$ by adding a nonlinear term. Here, we find that the vector field KSD-B test and the MMD test are the most powerful (Fig. 2(c)). 
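The bootstrap hypothesis test used throughout these experiments can be sketched as follows: a minimal version of the multinomial bootstrap of Liu et al. (2016), assuming a precomputed matrix `U` of Stein kernel evaluations $u_p(x_i, x_j)$ (a hypothetical input; computing `U` itself requires the KSD-B machinery above):

```python
import numpy as np

def bootstrap_ksd_test(U, n_boot=1000, alpha=0.1, rng=None):
    # Goodness-of-fit test from a precomputed Stein kernel matrix
    # U[i, j] = u_p(x_i, x_j), via the multinomial bootstrap of
    # Liu et al. (2016). Returns (reject, statistic, threshold).
    rng = np.random.default_rng(rng)
    n = U.shape[0]
    off = U - np.diag(np.diag(U))        # U-statistic: drop diagonal terms
    stat = off.sum() / (n * (n - 1))
    boots = np.empty(n_boot)
    for b in range(n_boot):
        # Centered multinomial weights: w_i = counts_i / n - 1 / n.
        w = rng.multinomial(n, np.full(n, 1.0 / n)) / n - 1.0 / n
        boots[b] = w @ off @ w
    threshold = np.quantile(boots, 1.0 - alpha)
    return bool(stat > threshold), stat, threshold
```

The test rejects the null $p = q$ when the statistic exceeds the bootstrapped $(1 - \alpha)$ quantile, mirroring the 0.1 significance threshold used in the experiments.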
+ +Finally, we consider an ancestral sequence reconstruction model; such models have been used, for example, to resurrect ancient proteins (Pillai et al., 2020). We consider a star-shaped phylogeny, in which an ancestral sequence $X$ is drawn from a pHMM prior, $\pi(X)$, and descendants $Y_{i}$ are drawn by evolving $X$ for time $t$ according to a stochastic mutational process $\kappa(Y \mid X, t)$, parameterized by a MuE distribution. We are interested in the posterior over ancestors, $p(X \mid Y_{1}, \ldots, Y_{5}) \propto \pi(X) \prod_{i=1}^{5} \kappa(Y_{i} \mid X, t)$. We set $p$ to the posterior with $t = 1$, then perturb $p$ by modifying $t$. In this example, the normalizing constant of $p$ is unavailable, and sampling from $p$ requires approximation methods. To have a point of comparison, we applied the MMD to samples from $p$ drawn by a long run of an efficient discrete MCMC method (Sun et al., 2022). We find the KSD-B can accurately detect model-data mismatches, without samples or normalized likelihoods (Fig. 2(d)). + +![](images/152c6377d4e8802fb4912fdf5c6b7023029696d4e36cf783fa81997249eaef27.jpg) +(a) pHMM + +![](images/b45131273b2d7984e9eb88c340c5803f44bc9e69c9067042535d4f1bddb98a3b.jpg) +(b) Potts + MuE + +![](images/52363a5bdcd6e22b6469b303a03ade34e7d8c4c70774de56ca46cd83b2774853.jpg) +(c) autoregressive + +![](images/c985e098ee7b7dbcab9034400c36ee8a30ff1877f7ec1d1ea2749358df7eb895.jpg) +(d) ancestral reconstruction +Figure 2. Goodness of Fit Testing We evaluate the power of goodness of fit tests, based on the KSD-B and alternatives, to reject the null hypothesis that $p = q$. In each plot, the x-axis corresponds to different values of $q$, which come from perturbing $p$ by different amounts. We also plot the $10\%$ significance threshold. + +![](images/14b79f0d6888ac10fc540a99e506f7e431784a80343146cd9925a174d1171883.jpg) +Figure 3.
Approximating the KSD-B We compare the power of a goodness of fit test using our stochastic approximation (green) to one using neighborhood reduction (black). We set $\tau \in \{1,2,3\}$ for the neighborhood size and $N_{n} \in \{2,4,10,20\}$ for the number of mutation samples. The solid lines are for tests performed on samples from a perturbed distribution and the dotted lines are for tests performed on samples from $p$. In the latter case, the test matches the $10\%$ significance threshold, showing good calibration. + +![](images/873adc524c2389571d882342fb9adfc65487439355efba6c4cdf18118ce6f8a4.jpg) +Figure 4. Evaluating Variational Synthesis Models Goodness of fit tests comparing a target model $p$ to synthesis models $q$. Each subplot is a different DNA synthesis technology. The x-axis shows the number of templates, which corresponds roughly to the complexity and cost of the synthesis procedure, with larger numbers of templates allowing better matches to $p$ (Weinstein et al., 2022b). Legend is identical to Fig. 2. + +**Approximating the KSD-B** Next we evaluate our stochastic KSD-B approximation strategy, and compare its efficiency to the reduced neighborhood approach of Baum et al. (2022) (Sec. 6). We set $p$ to a pHMM, with $\mathcal{B}$ the amino acids. The approach of Baum et al. (2022) requires assigning an ordering to the amino acids; we do so on the basis of hydrophobicity. We then consider perturbations of $p$ that increase the probability of hydrophobic residues. Since the reduced graph $M_{(\tau)}$ does not have connections between strongly hydrophilic and strongly hydrophobic residues (even when those amino acids are similar in another respect, such as size), tests based on $M_{(\tau)}$ can struggle to detect this perturbation. + +We compare the power of tests using reduced neighborhoods to those using stochastic sub-sampling, as a function of computational cost (solid lines, Fig. 3).
Cost is measured by the total number of kernel evaluations required for each pair of sequences in the data set. Our approach yields an order-of-magnitude decrease in kernel evaluations while achieving the same power. It also maintains good calibration, as we confirm by evaluating the test on samples from the unperturbed model $p$ (dashed lines). + +**Evaluating Synthesis Strategies** Next we consider an application of the KSD-B to a specific library design problem where goodness of fit tests have been used previously, variational synthesis (Weinstein et al., 2022b). Here, the aim is to design a stochastic synthesis procedure which produces approximate samples from a target generative model in the laboratory at very large scale. As a target $p$, we consider a pHMM trained on a data set of human T cell receptor CDR3 sequences, which range in length from 10 to 27 amino acids (10x Genomics, 2022). We optimize synthesis models $q$ based on different synthesis technologies: finite nucleotide mixtures, enzymatic mutagenesis, finite codon mixtures, and arbitrary codon mixtures. Previously, BEAR tests were used to evaluate the match between the synthesis model $q$ and target $p$; here, we apply the KSD-B. We find that the KSD-B test can still detect mismatches even when the BEAR test cannot, and that vector field kernels outperform scalar field kernels and the MMD test. + +![](images/99a7da0651d8f0bae133353665ecd445e46348f9af41809c3dd2594638c86189.jpg) +Figure 5. Evaluating Large Models Fit to Protein Families We perform a goodness of fit test for two deep generative models, Wavenet (top row) and Tranception (bottom row), using scalar field (turquoise) and vector field (red) embedding kernels. Each column is a different protein family dataset. We perform the test for 5 independent samples of the $N_{n} = 10$ mutants for four protein families and plot how often the null hypothesis was rejected at level 0.05 for increasing data $n$.
+ +**Evaluating Large Models Fit to Protein Families** Finally, we use the KSD-B to evaluate the fit of state-of-the-art deep generative models of proteins. We considered data sets consisting of evolutionarily related protein families (Shin et al., 2021). We trained a deep autoregressive model on each data set (Wavenet; Shin et al. (2021)), and tested its goodness of fit on held-out sequences using the KSD-B. We also tested the goodness of fit of a transformer model trained on a data set of all known proteins (Tranception; Notin et al. (2022)); in Appx. C.10.1 we explain why the KSD-B is particularly suitable for evaluating such "protein universe" models. To scale the KSD-B to $n = 1000$ proteins of length roughly $L = 250$, we sample $N_{n} = 10$ mutants and use an embedding kernel. Despite the small $N_{n}$, we see little variation in our KSD-B estimates when we resample mutants, particularly as compared to the differences in the KSD-B between different models (Fig. 7). + +The KSD-B is capable of detecting model-data mismatch for both Wavenet and Tranception, even when given fewer than 100 sequences for evaluation (Fig. 5). Moreover, the test's power does not fall on data sets with longer sequences (though this is not always true in other scenarios; see Fig. 6). In almost all cases, our vector-field KSD-B test is more powerful than a scalar-field KSD-B test (this holds even for different kernels; see Fig. 8). + +# 9. Conclusion + +In this paper we have developed the KSD-B, a novel discrepancy for distributions over biological sequences, with strong theoretical guarantees. One possible direction for future work is to further scale the KSD-B using methods for KSDs in Euclidean space (Jitkrittum et al., 2017; Huggins & Mackey, 2018; Gorham et al., 2020). Another is to apply the KSD-B to develop better samplers for biological sequences (Gorham & Mackey, 2017; Grathwohl et al., 2021).
Overall, we hope the KSD-B can help ensure the accuracy, reliability, and trustworthiness of methods based on generative sequence models as they see growing use across biology, biotechnology and biomedicine. + +# References + +10x Genomics. A new way of exploring immunity: linking highly multiplexed antigen recognition to immune repertoire and phenotype. 10x Genomics, 2022. +Alley, E. C., Khimulya, G., Biswas, S., AlQuraishi, M., and Church, G. M. Unified rational protein engineering with sequence-based deep representation learning. Nat. Methods, 16(12):1315-1322, December 2019. +Amin, A. N., Weinstein, E. N., and Marks, D. S. A generative nonparametric Bayesian model for whole genomes. Advances in Neural Information Processing Systems (NeurIPS), 2021. +Amin, A. N., Weinstein, E. N., and Marks, D. S. Biological sequence kernels with guaranteed flexibility. April 2023. +Barbour, A. D. Stein's method for diffusion approximations. Probab. Theory Related Fields, 84(3):297-322, 1990. +Barp, A., Briol, F. X., Duncan, A. B., Girolami, M., and Mackey, L. Minimum Stein discrepancy estimators. Advances in Neural Information Processing Systems (NeurIPS), 2019. +Baum, J., Kanagawa, H., and Gretton, A. A kernel Stein test of goodness of fit for sequential models. October 2022. +Chen, W. Y., Mackey, L., Gorham, J., Briol, F.-X., and Oates, C. Stein points. In International Conference on Machine Learning (ICML), 2018. +Chen, Y., Welling, M., and Smola, A. Super-samples from kernel herding. In Conference on Uncertainty in Artificial Intelligence (UAI), 2010. +Chow, S.-N., Li, W., and Zhou, H. Entropy dissipation of Fokker-Planck equations on graphs. Discrete Contin. Dyn. Syst. Ser. A, 38(10):4929-4950, 2018. +Chwialkowski, K., Strathmann, H., and Gretton, A. A kernel test of goodness of fit. In International Conference on Machine Learning (ICML), pp. 2606-2615, June 2016. +Davidsen, K., Olson, B. J., DeWitt, 3rd, W. S., Feng, J., Harkins, E., Bradley, P., and Matsen, 4th, F. A.
Deep generative models for T cell receptor protein sequences. *Elife*, 8, September 2019. +Douc, R., Fort, G., and Guillin, A. Subgeometric rates of convergence of f-ergodic strong Markov processes. Stochastic Process. Appl., 119(3):897-923, March 2009. +Frazer, J., Notin, P., Dias, M., Gomez, A., Min, J. K., Brock, K., Gal, Y., and Marks, D. S. Disease variant prediction with deep generative models of evolutionary data. Nature, 599(7883):91-95, 2021. + +Gorham, J. and Mackey, L. Measuring sample quality with Stein's method. Advances in Neural Information Processing Systems (NeurIPS), 2015. +Gorham, J. and Mackey, L. Measuring sample quality with kernels. In International Conference on Machine Learning (ICML), 2017. +Gorham, J., Duncan, A. B., Vollmer, S. J., and Mackey, L. Measuring sample quality with diffusions. Ann. Appl. Probab., 29(5):2884-2928, October 2019. +Gorham, J., Raj, A., and Mackey, L. Stochastic Stein discrepancies. In Advances in Neural Information Processing Systems (NeurIPS), 2020. +Grathwohl, W., Swersky, K., Hashemi, M., Duvenaud, D., and Maddison, C. J. Oops I took a gradient: Scalable sampling for discrete distributions. 2021. +Gretton, A., Borgwardt, K. M., Rasch, M. J., Schölkopf, B., and Smola, A. A kernel two-sample test. J. Mach. Learn. Res., 13:723-773, 2012. +Hairer, M. Convergence of Markov processes, 2021. +Han, J., Ding, F., Liu, X., Torresani, L., Peng, J., and Liu, Q. Stein variational inference for discrete distributions. In International Conference on Artificial Intelligence and Statistics (AISTATS), 2020. +Haussler, D. Convolution kernels on discrete structures. Technical report UCSC-CRL-99-10, University of California, Santa Cruz, 1999. +Hodgkinson, L., Salomone, R., and Roosta, F. The reproducing Stein kernel approach for post-hoc corrected sampling. January 2020. +Hopf, T. A., Ingraham, J. B., Poelwijk, F. J., Scharfe, C. P. I., Springer, M., Sander, C., and Marks, D. S. Mutation effects predicted from sequence co-variation. Nat. Biotechnol., 35(2):128-135, 2017. +Huggins, J. H.
and Mackey, L. Random feature Stein discrepancies. In Advances in Neural Information Processing Systems (NeurIPS), 2018. +Jitkrittum, W., Xu, W., Szabo, Z., Fukumizu, K., and Gretton, A. A linear-time kernel goodness-of-fit test. In Proceedings of the 31st International Conference on Neural Information Processing Systems, NIPS'17, pp. 261-270, Red Hook, NY, USA, December 2017. Curran Associates Inc. +Jorgensen, P. and Tian, F. Discrete reproducing kernel Hilbert spaces: Sampling and distribution of Dirac masses. Journal of Machine Learning Research, 2015. + +Leslie, C. S., Eskin, E., Cohen, A., Weston, J., and Noble, W. S. Mismatch string kernels for discriminative protein classification. Bioinformatics, 20(4):467-476, 2004. +Liggett, T. M. Continuous time Markov processes: An introduction. Graduate studies in mathematics. American Mathematical Society, Providence, RI, March 2010. +Liu, Q., Lee, J. D., and Jordan, M. A kernelized Stein discrepancy for goodness-of-fit tests. International Conference on Machine Learning (ICML), 2016. +Madani, A., McCann, B., Naik, N., Keskar, N. S., Anand, N., Eguchi, R. R., Huang, P.-S., and Socher, R. ProGen: Language modeling for protein generation. March 2020. +Marcou, Q., Mora, T., and Walczak, A. M. High-throughput immune repertoire analysis with IGoR. Nat. Commun., 9(1):561, February 2018. +Marks, D. S., Colwell, L. J., Sheridan, R., Hopf, T. A., Pagnani, A., Zecchina, R., and Sander, C. Protein 3D structure computed from evolutionary sequence variation. *PLoS One*, 6(12):e28766, December 2011. +Meier, J., Rao, R., Verkuil, R., Liu, J., Sercu, T., and Rives, A. Language models enable zero-shot prediction of the effects of mutations on protein function. July 2021. +Nijkamp, E., Ruffolo, J., Weinstein, E. N., Naik, N., and Madani, A. ProGen2: Exploring the boundaries of protein language models. June 2022. +Notin, P., Dias, M., Frazer, J., Marchena-Hurtado, J., Gomez, A., Marks, D. S., and Gal, Y.
Tranception: Protein fitness prediction with autoregressive transformers and inference-time retrieval. May 2022. +Pillai, A. S., Chandler, S. A., Liu, Y., Signore, A. V., Cortez-Romero, C. R., Benesch, J. L. P., Laganowsky, A., Storz, J. F., Hochberg, G. K. A., and Thornton, J. W. Origin of complexity in haemoglobin evolution. Nature, 2020. +Riesselman, A. J., Ingraham, J. B., and Marks, D. S. Deep generative models of genetic variation capture the effects of mutations. Nat. Methods, 15(10):816-822, 2018. +Russ, W. P., Figliuzzi, M., Stocker, C., Barrat-Charlaix, P., Socolich, M., Kast, P., Hilvert, D., Monasson, R., Cocco, S., Weigt, M., and Ranganathan, R. An evolution-based model for designing chorismate mutase enzymes. Science, 369(6502):440-445, 2020. +Saigo, H., Vert, J.-P., Ueda, N., and Akutsu, T. Protein homology detection using string alignment kernels. Bioinformatics, 20(11):1682-1689, July 2004. + +Shi, J., Zhou, Y., Hwang, J., Titsias, M. K., and Mackey, L. Gradient estimation with discrete Stein operators. Advances in Neural Information Processing Systems (NeurIPS), February 2022. +Shin, J.-E., Riesselman, A. J., Kollasch, A. W., McMahon, C., Simon, E., Sander, C., Manglik, A., Kruse, A. C., and Marks, D. S. Protein design and variant prediction using autoregressive generative models. Nat. Commun., 12(1):2403, April 2021. +Sperling, A. K. and Li, R. W. Repetitive sequences. In Maloy, S. and Hughes, K. (eds.), Brenner's Encyclopedia of Genetics (Second Edition), pp. 150-154. Academic Press, San Diego, January 2013. +Sriperumbudur, B. K., Gretton, A., Fukumizu, K., Schölkopf, B., and Lanckriet, G. R. G. Hilbert space embeddings and metrics on probability measures. J. Mach. Learn. Res., 11(50):1517-1561, 2010. +Sriperumbudur, B. K., Fukumizu, K., and Lanckriet, G. R. G. Universality, characteristic kernels and RKHS embedding of measures. J. Mach. Learn. Res., 12(70):2389-2410, 2011. +Sun, H., Dai, H., Xia, W., and Ramamurthy, A.
Path auxiliary proposal for MCMC in discrete space. International Conference on Learning Representations (ICLR), 2022. +Thadani, N. N., Gurev, S., Notin, P., Youssef, N., Rollins, N. J., Sander, C., Gal, Y., and Marks, D. S. Learning from pre-pandemic data to forecast viral antibody escape. July 2022. +Vershynin, R. High-Dimensional Probability: An Introduction with Applications in Data Science. 2020. +Weinstein, E. N. Generative Statistical Methods for Biological Sequences. PhD thesis, Harvard University, Ann Arbor, United States, 2022. +Weinstein, E. N. and Marks, D. A structured observation distribution for generative biological sequence prediction and forecasting. In Meila, M. and Zhang, T. (eds.), Proceedings of the 38th International Conference on Machine Learning, volume 139 of Proceedings of Machine Learning Research, pp. 11068-11079. PMLR, 2021. +Weinstein, E. N., Amin, A. N., Frazer, J., and Marks, D. S. Non-identifiability and the blessings of misspecification in models of molecular fitness and phylogeny. Advances in Neural Information Processing Systems (NeurIPS), 2022a. +Weinstein, E. N., Amin, A. N., Grathwohl, W., Kassler, D., Disset, J., and Marks, D. S. Optimal design of stochastic DNA synthesis protocols based on generative sequence models. In International Conference on Artificial Intelligence and Statistics (AISTATS), 2022b. +Yang, J., Liu, Q., Rao, V., and Neville, J. Goodness-of-fit testing for discrete distributions via Stein discrepancy. In International Conference on Machine Learning (ICML), 2018a. +Yang, K. K., Wu, Z., Bedbrook, C. N., and Arnold, F. H. Learned protein embeddings for machine learning. Bioinformatics, 34(15):2642-2648, August 2018b. +Zanella, G. Informed proposals for local MCMC in discrete spaces. J. Am. Stat. Assoc., 115(530):852-865, 2020. + +# A. Broader Impact Statement + +The present work has the potential to impact a variety of procedures in biotechnology and health.
Through its use in model criticism and sequence design, the KSD-B could facilitate the design of novel therapeutics. Biological sequence design may, however, also be used in applications with negative societal impact. The KSD-B may also be used to critique generative sequence models used for diagnosis and disease discovery. This could lead to more reliable models, which could lead to more accurate patient diagnoses and a deeper understanding of the genetic underpinnings of disease. Research in this direction, however, also has the potential to exacerbate health outcome disparities that affect marginalized groups, and to do so on the basis of the genetics of such groups.

# B. Proofs

In this appendix we prove the assertions in the main text. First, in Section B.1, we lay out our notation. Next, in Section B.2, we study stochastic processes on sequence space and perform a Lyapunov function analysis of their convergence rates. In Section B.3 we show that the KSD-B is faithful, can detect convergence and non-convergence, and can be efficiently approximated. Finally, in Section B.4, we develop kernels that satisfy our theoretical requirements for detecting convergence and non-convergence.

# B.1. Notation

Let our alphabet, $\mathcal{B}$, be a finite set, and let the set of all sequences be defined as $S = \cup_{L=0}^{\infty} \mathcal{B}^{L}$, where $\mathcal{B}^{0}$ is defined to contain only the empty string $\emptyset$. If $p$ is a distribution on $S$, let $\operatorname{supp}(p) = \{X \mid p(X) > 0\}$ and $M_{p,p} = \{(X,Y) \in M \mid X,Y \in \operatorname{supp}(p)\}$. We will say $p$ has connected support if $\operatorname{supp}(p)$ is a connected set in the graph with vertices $S$ and edges $M$. Finally, for $X \in S$, define $\operatorname{flux}_p(X) = \sum_{YMX} T_{p,X \to Y}$.
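As a concrete illustration of connected support, the following sketch checks connectivity of a candidate support by breadth-first search. It assumes, as an illustration, that the edge set $M$ links sequences differing by a single substitution, insertion, or deletion; the helper names `neighbors` and `has_connected_support` are hypothetical, introduced here only for this sketch.

```python
from collections import deque
from itertools import product

# Tiny alphabet; M is taken to be the single-edit neighbor relation
# (substitution, insertion, or deletion) — an assumption for this sketch.
ALPHABET = ("A", "B")

def neighbors(x):
    """All sequences one edit away from x."""
    out = set()
    for i in range(len(x)):
        out.add(x[:i] + x[i+1:])                 # deletion
        for b in ALPHABET:
            if b != x[i]:
                out.add(x[:i] + b + x[i+1:])     # substitution
    for i in range(len(x) + 1):
        for b in ALPHABET:
            out.add(x[:i] + b + x[i:])           # insertion
    return out

def has_connected_support(support):
    """BFS over the edges of M restricted to supp(p)."""
    support = set(support)
    if not support:
        return True
    start = next(iter(support))
    seen, queue = {start}, deque([start])
    while queue:
        x = queue.popleft()
        for y in neighbors(x) & support:
            if y not in seen:
                seen.add(y)
                queue.append(y)
    return seen == support

# All sequences of length <= 2 form a connected support...
all_short = {"".join(t) for L in range(3) for t in product(ALPHABET, repeat=L)}
print(has_connected_support(all_short))   # True
# ...but {"", "AA"} is not connected: "AA" is two edits away from "".
print(has_connected_support({"", "AA"}))  # False
```

A distribution supported on `all_short` would have connected support in the sense above, while one supported only on `{"", "AA"}` would not.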
Let $C_b(S)$ be the set of bounded functions on $S$, let $C_0(S)$ be the set of functions on $S$ vanishing at infinity, and let $C_C(S)$ be the set of functions on $S$ that are non-zero at only finitely many points. We also define the set of all vector fields that are non-zero at only finitely many points in $M$ as $C_{C,vf}(M)$. We define $\| \cdot \|_{\infty}$ as the infinity norm on $C_b(S)$. For two distributions $\mu, \nu$ on $S$, call $\| \nu - \mu \|_{\mathrm{TV}}$ their distance in total variation.

For two sequences of real numbers $(a_{n})_{n\in \mathbb{N}},(b_{n})_{n\in \mathbb{N}}$, both possibly undefined for small $n$, we write $a_{n}\lesssim b_{n}$ to mean that there is a positive constant $C$ such that eventually $a_{n}\leq C b_{n}$. We write $a_{n}\sim b_{n}$ when $a_{n}\lesssim b_{n}$ and $a_{n}\gtrsim b_{n}$. We write $a_{n} = O(b_{n})$ if $a_{n}\lesssim b_{n}$ and $a_{n} = o(b_{n})$ if $\frac{|a_n|}{|b_n|}\to 0$. We define $a\wedge b$ as the minimum of $a$ and $b$, and $a\vee b$ as the maximum. Define $\mathbb{1}(P)$ to be the indicator function that is 1 if $P$ is true and 0 otherwise.

A kernel on a set $H$ is a symmetric function $k:H\times H\to \mathbb{R}$ that is non-negative definite, i.e. for all $X_{1},\ldots ,X_{N}\in H$ and $\alpha_{1},\ldots ,\alpha_{N}\in \mathbb{R}$, $\sum_{n = 1}^{N}\sum_{n^{\prime} = 1}^{N}\alpha_{n}\alpha_{n^{\prime}}k(X_{n},X_{n^{\prime}})\geq 0$. We also require that $k(X,X) > 0$ for all $X\in H$. For every $X\in H$ define the function $k_{X} = k(X,\cdot)$. Define the dot product $(\cdot |\cdot)_k$ on linear combinations of these functions by $(k_{X}|k_{Y})_k = k(X,Y)$ and call the associated norm $\| \cdot \| _k$. Let $\mathcal{H}_k$ be the Hilbert space completion of the span of $\{k_X\}_{X\in H}$ under $(\cdot |\cdot)_k$, and call this the reproducing kernel Hilbert space (RKHS) of $k$. Elements of the RKHS can be understood as functions on $H$ by $(f|k_{X})_k = f(X)$.
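To make the kernel requirements concrete, here is a minimal numerical check using a 2-mer spectrum kernel on sequences as a stand-in example (this is not the kernel developed in this paper): $k(X,Y) = \langle \phi(X), \phi(Y)\rangle + \mathbb{1}(X = Y)$, where $\phi$ counts 2-mers. It verifies symmetry, non-negative definiteness of a Gram matrix, and the requirement $k(X,X) > 0$.

```python
import numpy as np

# A stand-in kernel on sequence space: inner product of 2-mer counts plus a
# diagonal term, so k(X, X) > 0 even for sequences with no 2-mers.
def spectrum_features(x, alphabet="AB"):
    kmers = [a + b for a in alphabet for b in alphabet]
    return np.array([sum(x[i:i+2] == km for i in range(len(x) - 1))
                     for km in kmers], dtype=float)

def k(x, y):
    return float(spectrum_features(x) @ spectrum_features(y)) + (x == y)

seqs = ["", "A", "AB", "ABA", "BABA", "AABB"]
gram = np.array([[k(x, y) for y in seqs] for x in seqs])

# Symmetry and non-negative definiteness of the Gram matrix.
assert np.allclose(gram, gram.T)
assert np.linalg.eigvalsh(gram).min() >= -1e-9
# k(X, X) > 0 for every X, as required of kernels in this appendix.
assert all(k(x, x) > 0 for x in seqs)
print("Gram matrix is symmetric PSD with positive diagonal")
```

Because $k$ is a sum of a feature inner product and an identity term, every Gram matrix it produces is symmetric positive semi-definite, matching the definition above.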
Say $k$ is a kernel on a space $H$ and $A: H \to (0, \infty)$. We call $k^A(X, Y) = A(X)k(X, Y)A(Y)$ the kernel $k$ "tilted" by $A$. $k^A$ is a kernel on $H$, and it is a well-known fact that the transformation taking $g \in \mathcal{H}_k$ to $X \mapsto g(X)A(X)$ is a unitary isomorphism onto $\mathcal{H}_{k^A}$ (see for example Proposition 35 of Amin et al. (2023)).

We let $\chi$ be some non-zero, non-negative, and non-decreasing function on the non-negative real numbers $[0,\infty)$ such that $\chi (t) = t\chi (1 / t)$ for all $t > 0$.

# B.2. Stochastic Processes on Sequences

In this section we study stochastic processes on sequences. Our aim is to understand the convergence rate of continuous-time Markov processes. These results will be essential in proving that the KSD-B detects non-convergence. They are also of much wider relevance, as Markov processes over sequences appear in many other contexts, including (a) mathematical models of evolution and (b) Markov chain Monte Carlo (MCMC) methods for sampling sequences, such as in the context of ancestral sequence reconstruction or conditional generation. We leave detailed exploration of these applications, including extensions to discrete-time Markov processes, to future work.

# B.2.1. CONTINUOUS-TIME MARKOV PROCESSES

We study the continuous-time Markov process on sequence space $S$ defined by the transition rates $T_{p,X\to Y}$. Here, $T_{p,X\to Y}$ can be the transition rate of any continuous-time Markov process that satisfies detailed balance, and $p$ is a distribution on sequence space $S$. The operator $\mathcal{L}_p$ of the stochastic process is $\mathcal{L}_p = \mathcal{T}_p\nabla$. It acts on functions; if $\delta_Y$ is a delta function at $Y$, we have $\mathcal{L}_p(\delta_Y)(X) = T_{p,X \to Y}$. Notationally, to avoid switching back and forth between $\mathcal{L}_p$ and $T$, we will define $\mathcal{L}_{p,X,Y} = T_{p,X \to Y}$.
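The rate construction can be sanity-checked numerically. The sketch below is an illustration under stated assumptions: it uses Zanella-type rates $T_{p,X\to Y} = \chi(p(Y)/p(X))$ on single-edit neighbors, with $\chi(t) = t\wedge 1$ (which satisfies $\chi(t) = t\chi(1/t)$), on a finite truncation of sequence space, and verifies detailed balance and stationarity of $p$ under the generator.

```python
import numpy as np
from itertools import product

# Finite truncation of sequence space for illustration: sequences of length
# <= 3 over {A, B}, with single-edit neighbors (an assumption of this sketch).
ALPHABET = ("A", "B")
SEQS = ["".join(t) for L in range(4) for t in product(ALPHABET, repeat=L)]
IDX = {x: i for i, x in enumerate(SEQS)}

def neighbors(x, max_len=3):
    out = set()
    for i in range(len(x)):
        out.add(x[:i] + x[i+1:])                 # deletion
        for b in ALPHABET:
            out.add(x[:i] + b + x[i+1:])         # substitution
    if len(x) < max_len:                         # keep the relation symmetric
        for i in range(len(x) + 1):
            for b in ALPHABET:
                out.add(x[:i] + b + x[i:])       # insertion
    out.discard(x)
    return out

# A target p falling off with length, normalized over the truncated space.
raw = np.array([0.5 ** len(x) for x in SEQS])
p = raw / raw.sum()

chi = lambda t: min(t, 1.0)                      # chi(t) = t * chi(1/t) holds
n = len(SEQS)
L = np.zeros((n, n))                             # generator of the process
for x in SEQS:
    for y in neighbors(x):
        L[IDX[x], IDX[y]] = chi(p[IDX[y]] / p[IDX[x]])
np.fill_diagonal(L, -L.sum(axis=1))              # rows of a generator sum to 0

# Detailed balance p(X) T_{X->Y} = p(Y) T_{Y->X} ...
assert np.allclose(p[:, None] * L, (p[:, None] * L).T)
# ... which implies p is stationary: p^T L = 0.
assert np.allclose(p @ L, 0.0)
print("detailed balance and stationarity hold")
```

With $\chi(t) = t\wedge 1$, the off-diagonal entries satisfy $p(X)T_{p,X\to Y} = \min(p(X), p(Y))$, which is symmetric in $X$ and $Y$, so detailed balance (and hence stationarity of $p$) holds by construction.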
To simulate from a continuous-time Markov process, one can first sample the sequence of distinct states the Markov process visits, and then sample how long the process stays at each state. In particular, let $K_{X \to Y} = \mathcal{L}_{p,X,Y} / \mathrm{flux}_p(X)$ if $X \neq Y$ and 0 if $X = Y$. The entries of $K$ are non-negative and its rows sum to 1, so it defines a discrete-time stochastic process $(Z_0,Z_1,\ldots)$, known as the "underlying stochastic process". The continuous-time process stays at each state for time $\tau_n \sim \mathrm{Exp}(\mathrm{flux}_p(Z_n))$. Thus, the continuous-time process $(X_t)_t$ with operator $\mathcal{L}_p$ is defined as $X_{t} = Z_{n}$ for $\sum_{m < n}\tau_{m} \leq t < \sum_{m \leq n}\tau_{m}$ and all $n$.

If we start at $X_0$ and run the continuous-time Markov process forward for time $t$, we obtain a distribution over $S$, denoted $P_t(X_0)$ (note $P_t(X_0)$ is a distribution; its value at $x \in S$ is denoted $P_t(X_0)(x)$). For any $f \in C_C(S)$, we define $P_t f(X)$ to be its expectation under $P_t(X)$, that is, $P_t f(X) = E_{P_t(X)} f$. Note $P_t f(X)$ is continuously differentiable in $t$ and satisfies the backwards Kolmogorov equation, i.e. $\frac{d}{dt} P_t f(X) = \mathcal{L}_p P_t f(X)$ (see Section 2.5 of Liggett (2010)). If we sample the starting position $X_0$ from a distribution $p$ on $S$, and this distribution does not change under the stochastic process – in the sense that $E_p P_t f = E_p f$ for all $f \in C_C(S)$ – then we call $p$ stationary. We use the notation $T_p$ and $\mathcal{L}_p$ to emphasize that the stationary distribution of the stochastic process they define is, by construction, $p$.

# B.2.2. EXISTENCE AND OTHER USEFUL PROPERTIES

Sequence space $S$ is infinite. Whenever the state space of a continuous-time Markov process is infinite, the process defined by a given $\mathcal{L}_p$ may fail to exist.
Fundamentally, this is because $(X_{t})_{t}$ can "explode" by transitioning infinitely many times over some finite time period. This can result in a situation where $P_{t}$ is not an actual distribution (that is, $\sum_{Y}P_{t}(X)(Y) < 1$) or the forward Kolmogorov equation no longer holds (that is, $\frac{d}{dt} P_t f(X)\neq P_t\mathcal{L}_pf(X)$). To avoid these pathologies, we add an integrability condition on $p$, namely that $E_{p}\mathrm{flux}_{p} < \infty$. The lemma below shows that in this case the $(P_{t})_{t}$ are valid probability distributions and the forward Kolmogorov equation holds. We also list some additional consequences that will help prove future results.

Lemma B.1. Say $p$ has connected support and $E_{p}\mathrm{flux}_{p} < \infty$.

(A) There is a Markov process $(X_{t})_{t}$ on $\operatorname{supp}(p)$ such that for all $f\in C_C(S)$, $P_{t}f(X)$ is continuously differentiable in $t$ and $\frac{d}{dt} P_t f(X) = \mathcal{L}_pP_t f(X) = P_t\mathcal{L}_p f(X)$.
(B) $p$ is stationary under $P_{t}$ for all $t$. If $q$ is another distribution with $E_{q}\mathrm{flux}_{p} < \infty$, then $q = p$ if and only if $E_{q}\mathcal{L}_{p}f = 0$ for all $f\in C_C(S)$.
(C) If $f \in C_C(S)$, $f(X_t) - \int_0^t \mathcal{L}_p f(X_s) ds$ is a martingale in $t$ conditional on $X_0 = X$ for every $X \in \operatorname{supp}(p)$.

Proof. Take $K$, $(Z_{n})_{n}$, $(\tau_{n})_{n}$, $(X_{t})_{t}$ and $P_{t}$ defined as above. $(Z_{n})_{n}$ is an irreducible Markov chain by definition, as $\operatorname{supp}(p)$ is connected. To show that the $P_{t}$ indeed define probability distributions, note that $\nu = \mathrm{flux}_{p}\, p$ is a finite measure on $S$ that is stationary with respect to $K$, since $\mathrm{flux}_{p}(X) p(X) K_{X \to Y} = \mathrm{flux}_{p}(Y) p(Y) K_{Y \to X}$. This implies that $(Z_{n})_{n}$ will visit each $X \in \operatorname{supp}(p)$ infinitely many times almost surely.
To see this, assume $(Z_{n})_{n}$, starting at some point, visits an $X \in \operatorname{supp}(p)$ only finitely many times with positive probability. Since $(Z_{n})_{n}$ is irreducible, every time $Z_{n}$ hits $X$ there is a fixed chance that it never hits $X$ again, so, almost surely, $Z_{n}$ hits $X$ only finitely many times. Let $\hat{\nu} = \nu / \nu(S)$ so, since $\hat{\nu}$ is stationary for $K$,

$$
\hat{\nu}(X) = \int d\hat{\nu}(Y)\, (K^{m})_{Y \rightarrow X} = E_{Z_{0} \sim \hat{\nu}}\left[\mathbb{1}(Z_{m} = X)\right] \rightarrow 0
$$

as $m\to \infty$ by dominated convergence, a contradiction. Thus, by Corollary 2.34 (b) of Liggett (2010), the $P_{t}$ are distributions on $S$ and $\sum_{n}\tau_{n} = \infty$ almost surely, that is, $(X_{t})_{t}$ is a well-defined Markov process. We also have that $P_{t}f(X) = E[f(X_{t})|X_{0} = X]$ for all $X,t$.

For the second claim, first note $E_q\mathrm{flux}_p < \infty$ implies $\mathrm{supp}(q) \subseteq \mathrm{supp}(p)$, since if $X \notin \mathrm{supp}(p)$, $T_{p,X \to Y}$ is defined to be $\infty$. By equation 2.40 of Liggett (2010), if $q$ is a distribution on $S$ such that $E_q\mathrm{flux}_p < \infty$, $q$ is stationary for all $P_{t}$ if and only if $E_q\mathcal{L}_p\delta_X = 0$ for all $X \in \operatorname{supp}(p)$. In particular, $p$ is stationary for all $P_{t}$. On the other hand, by our construction of $(X_{t})_{t}$, since $\operatorname{supp}(p)$ is connected, by Proposition 2.6 of Hairer (2021), each $P_{t}$ has at most one stationary distribution for $t > 0$. Thus, $p = q$ if and only if $E_q\mathcal{L}_pf = 0$ for all $f \in C_C(S)$.

To show that we also have the forward Kolmogorov equation, it suffices by Theorem 2.39 of Liggett (2010) to show that $P_{t}\mathrm{flux}_{p}(X) < \infty$ for all $t, X$.
To see this, note by the fact that $p$ is stationary for $P_{t}$,

$$
E_{p}\mathrm{flux}_{p} \geq E_{p}\left(\mathrm{flux}_{p}\, \mathbb{1}(|X| < N)\right) = E_{p}P_{t}\left(\mathrm{flux}_{p}\, \mathbb{1}(|X| < N)\right) \rightarrow E_{p}P_{t}\mathrm{flux}_{p}
$$

as $N\to \infty$ by monotone convergence, so that $P_{t}\mathrm{flux}_{p}(X) < \infty$ for all $t > 0, X\in \operatorname{supp}(p)$.

The statement about martingales holds by the backwards and forwards Kolmogorov equations and Theorem 3.32 of Liggett (2010).

# B.2.3. ESTABLISHING CONVERGENCE RATES

We are interested in studying the convergence of continuous-time Markov processes on sequences. In other words, we would like to know whether the process defined by $\mathcal{L}_p$ approaches the stationary distribution $p$, and if so, how quickly. We can quantify closeness in terms of total variation distance, $\| P_t(X) - p\|_{\mathrm{TV}}$. We are interested in how the total variation distance shrinks as a function of $t$. To investigate this question, we will use the following general theorem on Markov process convergence rates, which specializes Theorem 4.1 of Hairer (2021) to the infinite discrete space $S$. This theorem defines the convergence rate using a Lyapunov function, $V$.

Theorem B.2 (Theorem 4.1 of Hairer (2021)). Say $p$ has connected support and $E_{p}\mathrm{flux}_{p} < \infty$. Say $V: S \to [1, \infty)$ is a function such that $V(X) \to \infty$ as $|X| \to \infty$. Assume $\mathcal{L}_pV \leq R - \varphi \circ V$ on $\operatorname{supp}(p)$ for some number $R$ and strictly concave $\varphi: [0, \infty) \to [0, \infty)$ with $\varphi(0) = 0$ and increasing to infinity. Now define $H(u) = \int_1^u \varphi(s)^{-1} ds$.
Then there is a $C > 0$ such that for all $X \in \operatorname{supp}(p)$,

$$
\| P_{t}(X) - p \|_{\mathrm{TV}} \leq \frac{C V(X)}{H^{-1}(t)} + \frac{C}{\varphi \circ H^{-1}(t)}.
$$

Proof. All conditions of Theorem 4.1 of Hairer (2021) are obviously satisfied except for the fact that $V(X_{t}) - \int_{0}^{t}\left(R - \varphi \circ V(X_{s})\right) ds$ is a local super-martingale conditioned on $X_0 = X$ for every $X \in \operatorname{supp}(p)$. This follows from Theorem 3.4 of Douc et al. (2009) if $Q_{t} = V(X_{t}) - \int_{0}^{t} \mathcal{L}_pV(X_{s}) ds$ defines a local martingale when $X_0 = X$ for all $X \in \operatorname{supp}(p)$.

To show this, for every number $N$ and $X \in S$, let $V^{N}(X) = V(X)\mathbb{1}(V(X) < N)$, so that $V^{N} \in C_{C}(S)$. Also define $T_{N} = \inf \{t \mid \exists Y \text{ s.t. } YM X_{t}\text{ and } V(Y) \geq N\}$. $T_{N}$ is a stopping time and $T_{N} \to \infty$ almost surely. By Lemma B.1, $Q_{t}^{N} = V^{N}(X_{t}) - \int_{0}^{t}\mathcal{L}_pV^{N}(X_{s})ds$ is a martingale conditioned on $X_{0} = X$ for any $X \in \operatorname{supp}(p)$ and, by the definition of $T_{N}$, $Q_{t} = Q_{t}^{N}$ for all $t \leq T_{N}$. Thus, $(Q_{t})_{t}$ is a local martingale.

# B.2.4. CONVERGENCE RATES FOR SEQUENCE DISTRIBUTIONS

We now establish convergence rates for continuous-time Markov processes on sequences. The fundamental challenge is to construct Lyapunov functions that are appropriate for biological sequences. In general, Lyapunov functions are constructed based on the tails of the stationary distribution $p$; the thinner the tails, the faster the convergence of the stochastic process. In Euclidean space, the tail of a probability distribution $p$ refers to its value at large $X$. In sequence space, the tail refers to its value at long $X$ (Amin et al., 2021).
We will find that if $p$ falls off quickly with sequence length, the stochastic process will be able to explore $p$ rapidly, and so converge quickly; if $p$ falls off slowly, convergence is slowed.

To describe the tails of probability distributions over sequences, we introduce the quantities

$$
\operatorname{del}_{p}(X) = \sum_{|Y| = L - 1,\, XMY}T_{p,X\to Y},
$$

$$
\operatorname{ins}_{p}(X) = \sum_{|Y| = L + 1,\, XMY}T_{p,X\to Y},
$$

$$
\operatorname{gap}_{p}(L) = \inf_{X \in S,\, |X| = L}\operatorname{del}_{p}(X) - \operatorname{ins}_{p}(X),
$$

where $L = |X|$. (If $X \notin \operatorname{supp}(p)$, take $\operatorname{del}_p(X) = \infty$ and $\operatorname{ins}_p(X) = 0$.) Now, $\operatorname{del}_p(X)$ describes the propensity to gain a deletion, $\operatorname{ins}_p(X)$ the propensity to gain an insertion, and $\operatorname{gap}_p(L)$ describes the difference between the two. The intuition is that $\operatorname{gap}_p(L)$ characterizes how much probability mass will move towards shorter sequences under the stochastic process. If $p$ falls off quickly with $L$, then at long sequence lengths $L$, $\operatorname{gap}_p(L)$ will be large (for $p$ to be the stationary distribution, the stochastic process must be very likely to head back towards shorter sequences). Conversely, if $p$ falls off slowly with $L$, then at long sequence lengths $L$, $\operatorname{gap}_p(L)$ will be small. We can thus use $\operatorname{gap}_p(L)$ to describe the tail of $p$.

We now translate our description of the tail of $p$ into a Lyapunov function $V_{p}$, and from there into a convergence rate.

Assumption B.3.
We assume $p$ has connected support, $E_{p}\mathrm{flux}_{p} < \infty$, and there is some concave function $V_{p}:[0,\infty)\to [0,\infty)$ such that $\lim_{L\to \infty}V_p(L) = \infty$ and

$$
\operatorname{gap}_{p}(L) \gtrsim \frac{V_{p}(L)^{\frac{1 + \epsilon_{V}}{2 + \epsilon_{V}}}}{V_{p}(L) - V_{p}(L - 1)} \tag{6}
$$

for some $\epsilon_V > 0$.

If a function $V_{p}$ exists that satisfies this assumption, we can guarantee convergence. If $V_{p}$ is small, we can guarantee fast convergence.

Theorem B.4. Recall $P_{t}(X)$ is the distribution of a stochastic process with operator $\mathcal{L}_p = \mathcal{T}_p\nabla$, after being initialized at $X$ and evolving for time $t$. Say the stationary distribution $p$ obeys Assumption B.3. Then, the stochastic process converges to the stationary distribution in total variation. It does so with rate

$$
\| P_{t}(X) - p \|_{\mathrm{TV}} \lesssim t^{-(1 + \epsilon_V)} + V_{p}(|X|)\, t^{-(2 + \epsilon_V)}.
$$

Proof. Define $\Delta V_{p,L} = V_p(L) - V_p(L - 1)$ and define $V_{p}(X)$ as $V_{p}(|X|)$. If $X\in \operatorname{supp}(p)$ with $|X| = L$,

$$
\begin{array}{l} \mathcal{L}_{p} V_{p}(X) = \sum_{YMX,\, |Y| = |X| + 1} T_{p, X \rightarrow Y}\, \Delta V_{p, L + 1} - \sum_{YMX,\, |Y| = |X| - 1} T_{p, X \rightarrow Y}\, \Delta V_{p, L} \\ = \operatorname{ins}_{p}(X)\, \Delta V_{p, L + 1} - \operatorname{del}_{p}(X)\, \Delta V_{p, L} \\ \leq \operatorname{ins}_{p}(X)\, (\Delta V_{p, L + 1} - \Delta V_{p, L}) - \operatorname{gap}_{p}(L)\, \Delta V_{p, L}. \\ \end{array}
$$

Since $V_{p}$ is concave, the first term is negative. As well, by Assumption B.3, $\operatorname{gap}_p(L)\Delta V_{p,L}\gtrsim \varphi (V_p(L - 1))$ where $\varphi (x) = x^{(1 + \epsilon_V) / (2 + \epsilon_V)}$.
Thus there are constants $C_1,C_2$ such that for all $X\in \operatorname{supp}(p)$,

$$
\mathcal{L}_{p} V_{p}(X) \leq C_{1} - C_{2}\, \varphi \circ V_{p}(X).
$$

By Theorem B.2, with $H(u) = \int_{1}^{u}\varphi(s)^{-1}\, ds = C_3\left(u^{\frac{1}{2 + \epsilon_V}} - 1\right)$, we have

$$
\| P_{t}(X) - p \|_{\mathrm{TV}} \lesssim V_{p}(X)\, t^{-(2 + \epsilon_V)} + t^{-(1 + \epsilon_V)}.
$$

Theorem B.4 tells us that the rate of convergence of the stochastic process depends on $V_{p}(|X|)$, the value of the Lyapunov function at the initialization point $X$. Larger values of $V_{p}(|X|)$ translate into a looser bound on the total variation, and thus slower convergence rates.

To understand the connection between the tail of $p$ (as quantified by $\operatorname{gap}_p$) and the convergence rate of the stochastic process (as quantified by $V_{p}$) in greater depth, we investigate Equation 6 further. We are interested in the smallest value of $V_{p}$ that satisfies Equation 6 for a given value of $\operatorname{gap}_p$; this tells us how fast a convergence rate we can guarantee. Note first that since $V_{p}$ is concave and goes to $\infty$, the right hand side of Equation 6 is eventually less than

$$
\frac{V_{p}(L)^{\frac{1 + \epsilon_{V}}{2 + \epsilon_{V}}}}{V_{p}^{\prime}(L)} = \frac{V_{p}(L)}{V_{p}^{\prime}(L)} V_{p}(L)^{-\frac{1}{2 + \epsilon_{V}}} = \left(\left(\log V_{p}\right)^{\prime}(L)\, V_{p}^{\frac{1}{2 + \epsilon_{V}}}(L)\right)^{-1}, \tag{7}
$$

where $V_{p}^{\prime}$ is the derivative of $V_{p}$. This quantity is larger when $V_{p}$ grows more slowly with $L$. Let's now consider three example choices of $\operatorname{gap}_p$ and $V_{p}$. First, consider $V_{p}$ of the form $V_{p}(L) = L^{\alpha}$. Now, Equation 7 is proportional to $L^{1 - \frac{\alpha}{2 + \epsilon_V}}$.
So, we can satisfy Equation 6 if $\operatorname{gap}_p(L) \gtrsim L^\beta$ for some $\beta > 1/2$, if we choose $V_{p}(L) = L^{\alpha}$ with $0 < \alpha \leq 1$. Alternatively, we can consider $V_{p}$ of the form $V_{p}(L) = (\log (L))^{\alpha}$, which is slower growing. In this case Equation 7 is proportional to $L\log (L)^{1 - \frac{\alpha}{2 + \epsilon_V}}$. We can thus satisfy Equation 6 for $\operatorname{gap}_p(L) \gtrsim L$ if we choose $V_{p}(L) = (\log (L))^{\alpha}$ with $\alpha > 2$. Finally, define $\log^{\circ N}$ as $\log$ composed with itself $N$ times, and consider the very slow growing $V_{p}(L) = (\log^{\circ N}(L))^{\alpha}$, which corresponds to $\operatorname{gap}_p(L) \gtrsim L\log (L)\log \log (L)\dots \log^{\circ (N - 1)}(L)(\log^{\circ N}(L))^{1 - \frac{\alpha}{2 + \epsilon_V}}$. Now, if $\operatorname{gap}_p(L) \gtrsim L^\beta$ for some $\beta > 1$, then we can pick a $V_{p}$ that grows as slowly as desired, ensuring very fast convergence. Note also that this condition holds trivially when $\operatorname{supp}(p)$ is finite. Thus, in general, the faster $\operatorname{gap}_p$ increases, the slower we can make $V_{p}$ increase, and the faster the convergence rate we can guarantee.

In summary, in this section, we have studied the implications of a basic biological fact: sequences come in different lengths. We have found that length variation has a major impact on the convergence rate of stochastic processes over sequences. If the tail of the probability distribution falls off quickly with sequence length, convergence is rapid; if it falls off more gradually, convergence slows; if it falls off very gradually, convergence is no longer guaranteed at all. We will later find that in order for the KSD-B to detect non-convergence to a distribution $p$, the stochastic process used in the KSD-B (as defined by the Stein operator) must in fact converge to $p$.

# B.2.5.
TAILS OF COMMON SEQUENCE DISTRIBUTIONS

We now consider some examples of distributions $p$ on sequence space and study their tail behavior. This gives us the rate of convergence to $p$ of the stochastic process defined by the Zanella Stein operator. It will also tell us whether or not the KSD-B can detect non-convergence to $p$.

We start with some relatively simple examples. We then study profile hidden Markov models (pHMMs), a type of model which is ubiquitous in biological sequence analysis. We show that we can guarantee convergence to any pHMM, and quantify the rate. pHMMs are often successful models of real biological sequence distributions, especially distributions over evolutionarily related proteins or protein domains. Our results are thus informative not just about pHMMs specifically, but also about biological sequence distributions found in nature.

**Examples with Convergence** We start with some simple examples of distributions $p$ for which we can prove convergence and quantify the rate. Consider $p(X) \propto |\mathcal{B}|^{-L}e^{-\mu L}$, which falls off exponentially with sequence length. We have

$$
\operatorname{gap}_{p}(L) = L\chi (|\mathcal{B}|e^{\mu}) - (L + 1)|\mathcal{B}|\chi (|\mathcal{B}|^{-1}e^{-\mu}) = L\chi (|\mathcal{B}|e^{\mu})\left(1 - \frac{L + 1}{L}e^{-\mu}\right) \sim L.
$$

Thus, by the discussion above, we can satisfy Equation 6 with $V_{p}(L) = (\log (L))^{2 + \epsilon}$ for any $\epsilon >0$. This translates into a convergence rate of $t^{-(1 + \epsilon)} + L^{\epsilon}t^{-(2 + \epsilon)}$ by Theorem B.4.

Alternatively, consider $p(X) \propto |\mathcal{B}|^{-L} L!^{-1}$ where $L!$ is $L$ factorial; this distribution falls off even faster with sequence length. Choose $\chi$ such that $\chi(t) = t^{\alpha}$ when $t \leq 1$ and $\chi(t) = t^{1 - \alpha}$ when $t \geq 1$ for some $0 < \alpha < 1$. Then, whenever $|X| = |Y| - 1$, we have $p(X) / p(Y) = L|\mathcal{B}|$.
As a result, $\operatorname{gap}_p(L) \sim L^{2 - \alpha}$. We can now satisfy Equation 6 with $V_p(L) = \log^{\circ N}(L)$ for any $N$, ensuring very fast convergence of the stochastic process.

**Example without Convergence** We next consider an example of a distribution $p$ for which convergence is not guaranteed. The distribution $p$ is defined by an autoregressive model with lag 2. The idea is that, for letters $A, B \in \mathcal{B}$, the motif $ABA$ is high probability while $AAA$ is low probability. Thus a sequence such as $X = AAAA$ may increase in probability by gaining an insertion of $B$, and so $\operatorname{del}_p(X) < \operatorname{ins}_p(X)$. This results in a situation where Assumption B.3 cannot be satisfied, and so we cannot guarantee convergence by Theorem B.4.

Proposition B.5. Let $A \neq B \in \mathcal{B}$. Let $p$ be a distribution on $S$ such that $X \sim p$ all start with two $A$'s, i.e. $X_{(0:2)} = AA$, and the rest of the sequence is sampled autoregressively with lag 2 as $X_{(l)} \sim p(b|X_{(l-2:l)})$. We set $p(A|AA) = 0.1$, $p(B|AA) = 0.8$, $p(\$|AA) = 0.1$, $p(A|AB) = 0.8$, $p(B|AB) = 0.1$, $p(\$|AB) = 0.1$, $p(A|BA) = 0.8$, $p(B|BA) = 0.1$, $p(\$|BA) = 0.1$, and $p(\cdot|BB)$ to be anything, where $\$$ represents the end of the sequence. Then $\operatorname{gap}_p(L) < 0$ for large enough $L$, and so $p$ does not satisfy Assumption B.3.

Proof. Call $X = L \times A$. $p(X) = 0.1^{L - 1}$, so $\operatorname{del}_p(X) = L\chi(0.1^{-1})$. However, for $L_1, L_2 \geq 2$, $p(L_1 \times A + B + L_2 \times A) = 0.1^{L_1 + L_2 - 3}\, 0.8^3$, so, if $L_1 + L_2 = L$, $p(L_1 \times A + B + L_2 \times A) / p(L \times A) = 0.8^3\, 0.1^{-2} > 0.1^{-1}$.
Thus,

$$
\begin{array}{l} \operatorname{del}_{p}(L\times A) - \operatorname{ins}_{p}(L\times A)\leq L\chi (0.1^{-1}) - \left(\sum_{L_{1},L_{2}}\chi \left(\frac{p(L_{1}\times A + B + L_{2}\times A)}{p(L\times A)}\right) + (L + 1)\chi \left(\frac{p((L + 1)\times A)}{p(L\times A)}\right)\right) \\ \leq L\chi (0.1^{-1}) - \left((L - 3)\chi (0.1^{-1}) + (L + 1)\chi (0.1)\right) \\ = 3\chi (0.1^{-1}) - (L + 1)\chi (0.1), \\ \end{array}
$$

where we have used our assumption that $\chi$ is non-decreasing. Thus, $\operatorname{gap}_p(L) < 0$ for large enough $L$.

**Profile Hidden Markov models** We now study the tails of pHMMs, a widely used probabilistic model of biological sequences. In this section, for a sequence $X \in S$ we define $X_{(l)}$ as its $l$-th letter, starting counting at $l = 0$, and $X_{(l:l^{\prime})}$ as the sequence of $l^{\prime} - l$ letters $X_{(l)}, X_{(l + 1)}, \ldots, X_{(l^{\prime} - 1)}$. For an $X \in S$ and a number $L$, define $L \times X$ as $X$ concatenated to itself $L$ times. Let $X + Y$ for $X, Y \in S$ be their concatenation.

To define a pHMM, we start with a Markov model with "match" states $\mathcal{J}_s = \{s_1,s_2,\dots ,s_{\tilde{L}}\}$, "insertion" states $\mathcal{J}_i = \{i_0,i_1,\ldots ,i_{\tilde{L}}\}$, a start state $s_0$, and a termination state $\Delta$. $s_l$ and $i_l$ may only transition to $s_{l^{\prime}}$ for $l^{\prime} > l$ or $i_{l^{\prime}}$ for $l^{\prime}\geq l$. Each of these hidden states, except $s_0$ and $\Delta$, emits a $b\in \mathcal{B}$ with probability $p(b|Z)$ for a state $Z$.
Thus the probability of a sequence $X$ with $|X| = L$ can be written as

$$
p(X) = \sum_{Z \in \mathcal{I}_{L}} p(Z) p(X | Z) = \sum_{Z \in \mathcal{I}_{L}} p(Z_{L} | Z_{L - 1}) \prod_{l = 0}^{L - 1} p(Z_{l} | Z_{l - 1})\, p(X_{(l)} | Z_{l}),
$$

where we define $\mathcal{I}_L = \{(Z_{-1}, Z_0, Z_1, \ldots, Z_L) \mid Z_l \in \mathcal{J}_s \cup \mathcal{J}_i$ for $0 \leq l \leq L - 1$, $Z_L = \Delta$, $Z_{-1} = s_0\}$. We add a few mild conditions to our pHMM. The first is that infinite length insertions are not allowed, i.e. $\sup_l p(i_l | i_l) \leq e^{-\mu}$ for some $\mu > 0$. We also require that emission probabilities are non-zero, i.e. $p(b | Z) > 0$ for all states $Z$ and $b \in \mathcal{B}$. Call $\eta = \min_{b, Z} p(b | Z)$. Finally, we require that if $p(Z | i_l) > 0$ for some state $Z$ and $p(i_l | s_{l'}) > 0$, then $p(Z | s_{l'}) > 0$; that is, if a state can be reached from $s_{l'}$ by first adding an insertion, then it can be reached from $s_{l'}$ directly as well. This last condition guarantees that removing an insertion from any sequence of states $Z$ does not make the sequence probability 0.

Before our proof let us build some intuition. For long sequences $X$ we will see that the latent alignment $p(Z|X)$ has almost all its mass on $Z$ for which almost all states have $Z_{l} = i_{l^*}$, where $i_{l^*}$ is the insertion state that maximizes $p(i_l|i_l)$. In this case, $p(X)\approx p(X\mid |X|\times i_{l^*})\, p(|X|\times i_{l^*}) = \left(\prod_{l = 0}^{|X| - 1}p(X_{(l)}|i_{l^*})\right)e^{-\mu |X|}$. Let us thus consider the toy situation where $p(X) = e^{-\mu |X|}\prod_{l = 0}^{|X| - 1}q(X_{(l)})$ for some distribution $q$ over $\mathcal{B}$ (note this is also technically a pHMM).
For every sequence $X$ in this case, writing $L = |X|$,

$$
\operatorname{ins}_{p}(X) = (L + 1) \sum_{b \in \mathcal{B}} \chi (e^{-\mu} q(b)) = (L + 1) e^{-\mu} \sum_{b} q(b)\, \chi (e^{\mu} q(b)^{-1}) = (L + 1) e^{-\mu} E_{b \sim q}\, \chi (e^{\mu} q(b)^{-1}),
$$

$$
\operatorname{del}_{p}(X) = \sum_{l = 0}^{|X| - 1} \chi \left(e^{\mu} q\left(X_{(l)}\right)^{-1}\right).
$$

Thus, if $b^{*}$ maximizes $q(b)$,

$$
\operatorname{gap}_{p}(L) = L\chi (e^{\mu} q(b^{*})^{-1}) - (L + 1) e^{-\mu} E_{b \sim q}\, \chi (e^{\mu} q(b)^{-1}) = L\left(\chi (e^{\mu} q(b^{*})^{-1}) - \frac{L + 1}{L} e^{-\mu} E_{b \sim q}\, \chi (e^{\mu} q(b)^{-1})\right).
$$

If $q(b) = |\mathcal{B}|^{-1}$ for all $b$, then $E_{b\sim q}\chi(e^{\mu}q(b)^{-1}) = \chi(e^{\mu}q(b^{*})^{-1})$ and we recover the situation of our example with $p(X)\propto |\mathcal{B}|^{-L}e^{-\mu L}$. However, if $q(b)$ is not uniform and $\chi$ is strictly increasing, we can have $E_{b\sim q}\chi(e^{\mu}q(b)^{-1}) > \chi(e^{\mu}q(b^{*})^{-1})$. In this case, if $\mu$ is sufficiently small, then $\operatorname{gap}_p(L) < 0$ for large enough $L$. Thus for general pHMMs $p$ and $\chi$, whether or not $p$ has uniformly decreasing tails can depend on $\mu$ and the emission probabilities at the most likely insertion. Note however, by selecting $\chi(t) = t\wedge 1$, $E_{b\sim q}\chi(e^{\mu}q(b)^{-1}) = \chi(e^{\mu}q(b^{*})^{-1}) = 1$, so $\operatorname{gap}_p(L)\sim L$, and thus $p$ satisfies Assumption B.3 regardless of $q$ and $\mu$. We will therefore take $\chi(t) = t\wedge 1$, and show that in this case pHMMs always satisfy Assumption B.3.

We now characterize the tails of pHMMs by lower bounding $\operatorname{gap}_p(L)$. Note, our analysis also allows us to prove that pHMMs are subexponential, i.e.
if $p$ is a pHMM then $E_{p}e^{t|X|} < \infty$ for any $t$ small enough (Amin et al., 2021); this will be useful later for proving the pHMM is $p$, $k$-integrable.

Proposition B.6. If $p$ is a pHMM and $\chi(t) = t \wedge 1$ then $\operatorname{gap}_p(L) \gtrsim L$. Also, $\operatorname{ins}_p(X) \lesssim \operatorname{del}_p(X) \sim \operatorname{flux}_p(X) \sim |X|$ and $E_p e^{t|X|} < \infty$ for any $t < \mu$.

Proof. Let $|X| = L$. Applying our choice of $\chi$, we have

$$
\operatorname{ins}_{p}(X) = \sum_{|Y| = L + 1,\, XMY} T_{p, X \to Y} \leq \frac{1}{p(X)} \sum_{l = 0}^{L} \sum_{b \in \mathcal{B}} p(X_{b, +l}),
$$

where $X_{b, +l}$ is the sequence $X$ with an inserted letter $b$ at position $l$. Now we use the sum over $\mathcal{B}$ to marginalize out the emission at position $l$:

$$
\begin{array}{l} \sum_{b \in \mathcal{B}} p(X_{b, +l}) = \sum_{b \in \mathcal{B}} \sum_{Z \in \mathcal{I}_{L + 1}} p(Z) p(X_{b, +l} | Z) \\ = \sum_{b \in \mathcal{B}} \sum_{Z \in \mathcal{I}_{L + 1}} p(Z) \left(\prod_{l^{\prime} = 0}^{l - 1} p\left(X_{b, +l, (l^{\prime})} \mid Z_{l^{\prime}}\right) \prod_{l^{\prime} = l}^{L - 1} p\left(X_{b, +l, (l^{\prime} + 1)} \mid Z_{l^{\prime} + 1}\right)\right) p\left(X_{b, +l, (l)} \mid Z_{l}\right) \\ = \sum_{Z \in \mathcal{I}_{L + 1}} p(Z) \prod_{l^{\prime} = 0}^{l - 1} p\left(X_{(l^{\prime})} \mid Z_{l^{\prime}}\right) \prod_{l^{\prime} = l}^{L - 1} p\left(X_{(l^{\prime})} \mid Z_{l^{\prime} + 1}\right) \left(\sum_{b \in \mathcal{B}} p(b \mid Z_{l})\right) \\ = \sum_{Z \in \mathcal{I}_{L + 1}} p(Z) p(X | \tilde{Z}), \\ \end{array}
$$

where, for $Z \in \mathcal{I}_{L+1}$, $\tilde{Z} \in \mathcal{I}_L$ is defined to be $Z$ but with $Z_l$ removed.
The idea of the proof is to show that the leading terms of the last sum are those in which $Z_l$ is in the middle of a multiple insertion. For these $Z$, $Z \mapsto \tilde{Z}$ is an injection and we can replace $p(Z)$ with its upper bound $e^{-\mu}p(\tilde{Z})$. Then summing over $\tilde{Z}$ will give us $e^{-\mu}p(X)$, and finally summing over $l$ and dividing by $p(X)$ will give our bound $|X|e^{-\mu}$ on $\operatorname{ins}_p(X)$.

We take $L$ sufficiently large that $L > 3\tilde{L}$, where recall $\tilde{L}$ is the number of match states in the pHMM (there are $\tilde{L} + 1$ insertion states). For every $\hat{L}, l$, define $\mathcal{I}_{\hat{L},s}(l) = \{Z \in \mathcal{I}_{\hat{L}} \mid Z_l \in \mathcal{J}_s\}$ and $\mathcal{I}_{\hat{L},i}(l) = \{Z \in \mathcal{I}_{\hat{L}} \mid Z_l \in \mathcal{J}_i\}$. First we consider $Z$ with a match state at position $l$, i.e. $Z \in \mathcal{I}_{L+1,s}(l)$. For each $Z \in \mathcal{I}_{L+1,s}(l)$ pick a position $l_Z$ such that $Z_{l_Z - 1} = Z_{l_Z} = Z_{l_Z + 1} \in \mathcal{J}_i$, i.e. $l_Z$ is in the middle of a multiple insertion. Define $\hat{Z}$ to be $Z$ with position $l_Z$ removed; note that $\hat{Z} \in \mathcal{I}_{L,s}(l-1) \cup \mathcal{I}_{L,s}(l)$. First note, by our choice of $l_Z$, $p(Z)/p(\hat{Z}) < 1$. Next, note $\hat{Z}$ differs from $\tilde{Z}$ in at most $2\tilde{L}$ positions, since there are $2\tilde{L}$ states (excluding the start and termination states). Using the fact that the emission probability is lower bounded by $\eta$, we have $p(X \mid \tilde{Z}) \leq p(X \mid \hat{Z})\eta^{-2\tilde{L}}$. Finally, note that at most $\tilde{L} + 1$ different $Z$ map to the same $\hat{Z}$, i.e. $\hat{Z}$ has at most $\tilde{L} + 1$ multiple insertions (since there are $\tilde{L} + 1$ insertion states).
Now write

$$
\begin{array}{l} \sum_{Z \in \mathcal{I}_{L+1,s}(l)} p(Z) p(X \mid \tilde{Z}) = \sum_{Z \in \mathcal{I}_{L+1,s}(l)} \frac{p(Z)}{p(\hat{Z})} \frac{p(X \mid \tilde{Z})}{p(X \mid \hat{Z})} p(\hat{Z}) p(X \mid \hat{Z}) \\ \leq \eta^{-2\tilde{L}} \sum_{Z \in \mathcal{I}_{L+1,s}(l)} p(\hat{Z}) p(X \mid \hat{Z}) \\ \leq \eta^{-2\tilde{L}} (\tilde{L}+1) \sum_{Z' \in \mathcal{I}_{L,s}(l-1) \cup \mathcal{I}_{L,s}(l)} p(Z') p(X \mid Z') \\ = \eta^{-2\tilde{L}} (\tilde{L}+1)\, p(X, Z' \in \mathcal{I}_{L,s}(l-1) \cup \mathcal{I}_{L,s}(l)) \\ \leq \eta^{-2\tilde{L}} (\tilde{L}+1) (p(X, Z_l \in \mathcal{J}_s) + p(X, Z_{l-1} \in \mathcal{J}_s)). \end{array}
$$

For the first term in the sum write

$$
\begin{array}{l} \frac{1}{p(X)} \sum_{l=0}^{L} \eta^{-2\tilde{L}} (\tilde{L}+1)\, p(X, Z_l \in \mathcal{J}_s) = \eta^{-2\tilde{L}} (\tilde{L}+1) \sum_{l=0}^{L} p(Z_l \in \mathcal{J}_s \mid X) \\ = \eta^{-2\tilde{L}} (\tilde{L}+1) \sum_{l=0}^{L} E[\mathbb{1}(Z_l \in \mathcal{J}_s) \mid X] \\ = \eta^{-2\tilde{L}} (\tilde{L}+1) E\left[\sum_{l=0}^{L} \mathbb{1}(Z_l \in \mathcal{J}_s) \,\Bigg|\, X\right] \\ \leq \eta^{-2\tilde{L}} (\tilde{L}+1) \tilde{L} = O(1), \end{array}
$$

where the last inequality follows from the fact that if $p(Z) > 0$, then at most $\tilde{L}$ of the states $Z_l$ are match states. The second term is bounded similarly.

Next we consider $Z \in \mathcal{I}_{L+1,i}(l)$. Note that at most $\tilde{L} + 1$ elements $Z$ of $\mathcal{I}_{L+1,i}(l)$ map to the same $\tilde{Z}$. (The bound $\tilde{L} + 1$ comes from considering the $\tilde{L} + 1$ possible values of the deleted state. For example, consider a $Z$ with entries $Z_{l'} = i_{\tilde{L}}$ for all $l' \geq 1$.
If we delete the zeroth entry of $Z$ to obtain $\tilde{Z}$, i.e. we set $l = 0$, then we always obtain the same value of $\tilde{Z}$, regardless of $Z_0$. Since $Z_0$ can take any value $i_{l'}$ for $l' \in \{0, 1, \ldots, \tilde{L}\}$, we have $\tilde{L} + 1$ possibilities.) Note also that, by the fact that $p(Z) = p(Z_0 \mid Z_{-1}) \times \ldots \times p(Z_{L+1} \mid Z_L)$ and our assumption that removing an insertion does not make the sequence probability 0, there is a $\gamma < \infty$ such that $p(Z)/p(\tilde{Z}) \leq \gamma$ for all $Z \in \mathcal{I}_{L+1,i}(l)$. We will split $\mathcal{I}_{L+1,i}(l)$ into two parts: define $A_1 = \{Z \in \mathcal{I}_{L+1,i}(l) \mid Z_{l-1} \neq Z_{l+1}\}$ and $A_2 = \{Z \in \mathcal{I}_{L+1,i}(l) \mid Z_{l-1} = Z_{l+1}\}$. If $Z \in A_2$ then $Z_{l-1} = Z_{l+1}$, so position $l$ is in a multiple insertion and $Z_l = Z_{l-1}$. Thus, if $Z \in A_2$, then $p(Z)/p(\tilde{Z}) \leq e^{-\mu}$, and $Z \mapsto \tilde{Z}$ is injective on $A_2$. On the other hand, if $Z \in A_1$ then $\tilde{Z}_{l-1} \neq \tilde{Z}_l$. Thus,

$$
\sum_{Z \in A_1} p(Z) p(X \mid \tilde{Z}) \leq \gamma \sum_{Z \in A_1} p(\tilde{Z}) p(X \mid \tilde{Z}) \leq (\tilde{L}+1) \gamma\, p(X, Z_{l-1} \neq Z_l),
$$

$$
\sum_{Z \in A_2} p(Z) p(X \mid \tilde{Z}) \leq e^{-\mu} \sum_{Z \in A_2} p(\tilde{Z}) p(X \mid \tilde{Z}) \leq e^{-\mu} p(X)
$$

and, since there are $2\tilde{L} + 3$ total states in the Markov chain,

$$
\begin{array}{l} \frac{1}{p(X)} \sum_{l=0}^{L} \sum_{Z \in A_1} p(Z) p(X \mid \tilde{Z}) \leq (\tilde{L}+1) \gamma E\left[\sum_{l=0}^{L} \mathbb{1}(Z_{l-1} \neq Z_l) \,\Bigg|\, X\right] \leq (\tilde{L}+1) \gamma (2\tilde{L}+3) = O(1) \\ \frac{1}{p(X)} \sum_{l=0}^{L} \sum_{Z \in A_2} p(Z) p(X \mid \tilde{Z}) \leq e^{-\mu} L.
\\ \end{array}
$$

Combining the above results we finally have

$$
\operatorname{ins}_p(X) \leq L e^{-\mu} + O(1).
$$

Considering deletions now, we have

$$
\operatorname{del}_p(X) = \sum_{|Y| = L-1,\, XMY} T_{p, X \to Y} = \sum_{l=0}^{L-1} \frac{p(X_{-l})}{p(X)} \wedge 1
$$

with $X_{-l}$ defined to be $X$ with position $l$ deleted. In this case,

$$
p(X_{-l}) = \sum_{Z \in \mathcal{I}_{L-1}} p(Z) \prod_{l'=0}^{l-1} p(X_{(l')} \mid Z_{l'}) \prod_{l'=l+1}^{L-1} p(X_{(l')} \mid Z_{l'-1}).
$$

For $Z \in \mathcal{I}_{L-1,i}(l-1)$, let $\tilde{Z}$ be $Z$ but with an extra $i_k$ in position $l$ if $Z_{l-1} = i_k$. For $Z \in \mathcal{I}_{L-1,i}(l-1)$, $p(Z)/p(\tilde{Z}) \geq e^{\mu} \geq e^{\mu} p(X_{(l)} \mid \tilde{Z}_l)$, and $Z \mapsto \tilde{Z}$ is a bijection onto the elements $Z'$ of $\mathcal{I}_L$ such that $Z'_{l-1} = Z'_l \in \mathcal{J}_i$. Thus we have

$$
\frac{1}{p(X)} p(X_{-l}) \geq e^{\mu} \frac{1}{p(X)} \sum_{Z \in \mathcal{I}_{L-1,i}(l-1)} p(\tilde{Z}) p(X \mid \tilde{Z}) = e^{\mu} p(Z_{l-1} = Z_l \in \mathcal{J}_i \mid X).
$$

Now let $R = \sum_{l=0}^{L} \left(e^{\mu} \mathbb{1}\big(Z_{l-1} = Z_l \in \mathcal{J}_i\big)\right) \wedge 1 = \sum_{l=0}^{L} \mathbb{1}\big(Z_{l-1} = Z_l \in \mathcal{J}_i\big)$, which is lower bounded by $L - 3\tilde{L}$. Thus

$$
\operatorname{del}_p(X) = \sum_{|Y| = L-1,\, XMY} T_{p, X \to Y} \geq E_p\left[R \mid X\right] \geq \left(L - 3\tilde{L}\right) = L - O(1).
$$

On the other hand we clearly have $\operatorname{del}_p(X) \leq L$. Similarly, $\operatorname{flux}_p(X)$ is at most the number of neighbours of $X$, which is at most $L + (|\mathcal{B}| - 1)L + |\mathcal{B}|(L+1)$. Thus we have $\operatorname{ins}_p(X) \lesssim \operatorname{del}_p(X) \sim \operatorname{flux}_p(X) \sim |X|$ and $\operatorname{gap}_p(L) \geq (L - O(1)) - (L e^{-\mu} + O(1)) \gtrsim L$.
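The neighbour count used here is easy to confirm by direct enumeration. A minimal sketch (the alphabet and test strings are arbitrary choices); the formula is an upper bound on the number of distinct neighbours, since different edits can produce the same sequence:

```python
B = "ACGT"  # alphabet (arbitrary choice)

def neighbors(x):
    """All distinct sequences one substitution, deletion, or insertion away."""
    out = set()
    for i in range(len(x)):
        out.add(x[:i] + x[i + 1:])                 # deletions: L of them
        for b in B:
            if b != x[i]:
                out.add(x[:i] + b + x[i + 1:])     # substitutions: (|B| - 1) L
    for i in range(len(x) + 1):
        for b in B:
            out.add(x[:i] + b + x[i:])             # insertions: |B| (L + 1)
    return out

for x in ["", "A", "AAAA", "GATTACA"]:
    L = len(x)
    bound = L + (len(B) - 1) * L + len(B) * (L + 1)
    assert len(neighbors(x)) <= bound
```

Strings with repeated letters (such as "AAAA") fall strictly below the bound, because several deletions or insertions coincide.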
Finally, recall our bound for any sequence $X$,

$$
\operatorname{ins}_p(X) \leq \frac{1}{p(X)} \sum_{l=0}^{L} \sum_{b \in \mathcal{B}} p(X_{b,+l}) \leq L e^{-\mu} + O(1) \leq (L+1)(e^{-\mu} + o(1)).
$$

Thus,

$$
\sum_{|X| = L} p(X) \geq \sum_{|X| = L} p(X) \frac{1}{(L+1)(e^{-\mu} + o(1))} \left(\frac{1}{p(X)} \sum_{l=0}^{L} \sum_{b \in \mathcal{B}} p(X_{b,+l})\right) = (e^{-\mu} + o(1))^{-1} \sum_{|X| = L+1} p(X),
$$

so $p(|X| = L) \lesssim e^{-tL}$ if $e^{-t} > e^{-\mu}$. In particular, if $t < \mu$, then $E_p e^{t|X|} = \sum_L p(|X| = L) e^{tL} < \infty$.

Now, for any pHMM, we can guarantee convergence of the stochastic process and characterize its convergence rate.

Corollary B.7. If $p$ is a pHMM and $\chi(t) = t \wedge 1$ then $E_p \operatorname{flux}_p(X) < \infty$ and Assumption B.3 is satisfied with $V_p(L) = (\log L)^{2+\epsilon}$ for any $\epsilon > 0$.

This convergence speed is faster than the one we obtained in the scenario where $p$ decayed exponentially with $L$, but slower than when $p$ decayed as $L!$ or when $p$ had zero probability past a certain length. It suggests that for many real biological sequence distributions – those for which the pHMM is a good model – we can expect reasonably fast convergence of the stochastic process.

# B.3. Proofs of the KSD-B's Properties

In this section we prove the results described in the main text for KSD-Bs. Section B.3.1 shows how to tractably compute the KSD-B. Section B.3.2 shows that scalar field KSD-Bs can be written as a special case of vector field KSD-Bs. Section B.3.3 introduces the property of discrete masses and describes its generalization to vector field kernels. Section B.3.4 establishes conditions under which the KSD-B is faithful and detects tight non-convergence.
Section B.3.5 establishes conditions under which the KSD-B can detect non-convergence more generally, and also proves that the KSD-B detects convergence. Section B.3.6 details examples of scenarios where the KSD-B can fail, if the kernel is not chosen appropriately or $p$ is pathological. Section B.3.7 shows how to accurately approximate the KSD-B.

# B.3.1. CALCULATING THE KSD-B

In this section we develop a tractable expression for computing the KSD-B. First note that, by our definition that $T_{p,X \to Y} = \infty$ if $p(X) = 0$, any $p, k$-integrable distribution $q$ must have $\operatorname{supp}(q) \subseteq \operatorname{supp}(p)$.

Proposition B.8. (Proof of Eq. 1 and Proposition 4.1) Say $k$ is a vector field kernel and $q$ is a $p, k$-integrable distribution on $S$. Then for all $f \in \mathcal{H}_k$,

$$
E_q \mathcal{T}_p f = \frac{1}{2} \sum_{(X,Y) \in M_{p,p}} p(Y) T_{p, Y \to X} \left(\frac{q(X)}{p(X)} - \frac{q(Y)}{p(Y)}\right) f(X, Y). \tag{8}
$$

Note when $q(X) > 0$, for $(X,Y) \in M_{p,p}$,

$$
p(Y) \left(\frac{q(X)}{p(X)} - \frac{q(Y)}{p(Y)}\right) = q(X) \left(\frac{p(Y)}{p(X)} - \frac{q(Y)}{q(X)}\right)
$$

and we recover Eq. 1. As well,

$$
\mathrm{KSD - B}_{p,k}(q)^2 = E_{X, X' \sim q} \sum_{YMX,\, Y'MX'} T_{p, X \to Y} T_{p, X' \to Y'} k((X,Y), (X',Y')).
$$

If $p$ is $p, k$-integrable, then for all $f \in \mathcal{H}_k$, $E_p \mathcal{T}_p f = 0$.

Proof. Say $q$ is $p, k$-integrable. Define $\phi_q(f) = E_q \mathcal{T}_p f$. For $f \in \mathcal{H}_k$,

$$
\begin{array}{l} \phi_q(f) = E_{X \sim q} \sum_{YMX} T_{p, X \to Y} \left(f \mid k_{(X,Y)}\right)_k \\ \leq \|f\|_k E_{X \sim q} \sum_{YMX} T_{p, X \to Y} \sqrt{k((X,Y), (X,Y))} \tag{9} \end{array}
$$

by the Cauchy-Schwarz inequality.
Thus $\phi_q$ is a bounded linear operator on $\mathcal{H}_k$ and is thus a member of $\mathcal{H}_k$, by the Riesz representation theorem. As well, $\mathrm{KSD - B}_{p,k}(q) = \|\phi_q\|_k$. We now have,

$$
\begin{array}{l} (\phi_q \mid \phi_q)_k = \phi_q(\phi_q) \\ = E_{X \sim q} \sum_{YMX} T_{p, X \to Y}\, \phi_q(k_{(X,Y)}) \\ = E_{X \sim q} \sum_{YMX} T_{p, X \to Y} E_{X' \sim q} \sum_{Y'MX'} T_{p, X' \to Y'}\, k_{(X,Y)}(X', Y') \\ = E_{X, X' \sim q} \sum_{YMX} \sum_{Y'MX'} T_{p, X \to Y} T_{p, X' \to Y'}\, k((X,Y), (X',Y')). \end{array}
$$

Note that since all quantities in the expectation and sum are nonnegative, Eqn. 9 shows the absolute integrability of the expectation and sum. Thus we can rearrange terms to get

$$
\begin{array}{l} \phi_q(f) = E_{X \sim q} \sum_{YMX} T_{p, X \to Y} f(X, Y) \\ = \sum_{(X,Y) \in M_{p,p}} q(X) T_{p, X \to Y} f(X, Y) \\ = \frac{1}{2} \sum_{(X,Y) \in M_{p,p}} (q(X) T_{p, X \to Y} f(X, Y) + q(Y) T_{p, Y \to X} f(Y, X)) \\ = \frac{1}{2} \sum_{(X,Y) \in M_{p,p}} p(Y) T_{p, Y \to X} \left(\frac{q(X)}{p(X)} - \frac{q(Y)}{p(Y)}\right) f(X, Y), \end{array}
$$

where the last line follows from detailed balance, $T_{p,X \to Y}\, p(X) = T_{p,Y \to X}\, p(Y)$, together with the antisymmetry $f(Y, X) = -f(X, Y)$. If we set $p = q$ we have $\frac{q(X)}{p(X)} = \frac{q(Y)}{p(Y)}$ for all $(X,Y) \in M_{p,p}$, so $E_p \mathcal{T}_p f = 0$.

# B.3.2. SCALAR FIELD KSD-BS AS AN INSTANCE OF VECTOR FIELD KSD-BS

Here we demonstrate that every scalar field KSD-B can be written as a special case of a vector field KSD-B.
Recall that scalar field Stein discrepancies take the form $\sup_{g \in \nabla\mathcal{F}} |E_q \mathcal{T} g|$, whereas vector field Stein discrepancies take the more general form $\sup_{g \in \mathcal{G}} |E_q \mathcal{T} g|$. We will show that if the family of functions $\mathcal{F}$ is an RKHS $\mathcal{H}_k$ with kernel $k$, then the set of functions $\nabla\mathcal{F}$ – that is, the set of gradients of functions in $\mathcal{H}_k$ – is itself an RKHS with kernel $k^{\nabla}$. This implies that the scalar field Stein discrepancy with kernel $k$ is equivalent to a vector field Stein discrepancy with kernel $k^{\nabla}$. To show this, we start by defining $k^{\nabla}$.

Proposition B.9. For any given scalar field kernel $k$ on $S$,

$$
k^{\nabla}\left((X, Y), (X', Y')\right) = \left(k_Y - k_X \mid k_{Y'} - k_{X'}\right)_k = k(Y, Y') - k(X, Y') - k(Y, X') + k(X, X')
$$

for $(X,Y), (X',Y') \in M$ defines a vector field kernel. For every $f \in \mathcal{H}_{k^{\nabla}}$ there is a $g \in \mathcal{H}_k$ with $f = \nabla g$ and $\|f\|_{k^{\nabla}} = \|g\|_k$.

Proof. $k^{\nabla}$ is non-negative definite since, if $(X_1, Y_1), \ldots, (X_N, Y_N) \in M$ and $\alpha_1, \ldots, \alpha_N \in \mathbb{R}$, then, calling $f = \sum_n \alpha_n k_{X_n}$ and $g = \sum_n \alpha_n k_{Y_n}$,

$$
\sum_{n,m} \alpha_n \alpha_m k^{\nabla}((X_n, Y_n), (X_m, Y_m)) = (g \mid g)_k - (f \mid g)_k - (g \mid f)_k + (f \mid f)_k = \|f - g\|_k^2 \geq 0.
$$

One can also verify that $k^{\nabla}_{(X,Y)} = -k^{\nabla}_{(Y,X)}$ for all $(X,Y) \in M$, so for every $f \in \mathcal{H}_{k^{\nabla}}$,

$$
f(X, Y) = \left(f \mid k^{\nabla}_{(X,Y)}\right)_{k^{\nabla}} = -\left(f \mid k^{\nabla}_{(Y,X)}\right)_{k^{\nabla}} = -f(Y, X).
$$

Thus $k^{\nabla}$ is a vector field kernel.

Let $f = \sum_{i=1}^n \alpha_i k^{\nabla}_{(X_i, Y_i)}$ and $g = \sum_{i=1}^n \alpha_i (k_{Y_i} - k_{X_i})$. Then

$$
f(X, Y) = \sum_{i=1}^n \alpha_i \left(k_{Y_i} - k_{X_i} \mid k_Y - k_X\right)_k = g(Y) - g(X) = \nabla g(X, Y).
$$

As well,

$$
\|f\|_{k^{\nabla}}^2 = \sum_{i=1}^n \sum_{j=1}^n \alpha_i \alpha_j \left(k_{Y_i} - k_{X_i} \mid k_{Y_j} - k_{X_j}\right)_k = (g \mid g)_k = \|g\|_k^2.
$$

Now say $f \in \mathcal{H}_{k^{\nabla}}$ and $(f_n)_n$ is a sequence of finite linear combinations of $(k^{\nabla}_{(X,Y)})_{(X,Y) \in M}$ such that $f_n \to f$. Say $g_n \in \mathcal{H}_k$ is such that $\nabla g_n = f_n$. Since $\|f_n - f_m\|_{k^{\nabla}} = \|g_n - g_m\|_k$, $(g_n)_n$ is a Cauchy sequence and thus converges to a $g \in \mathcal{H}_k$. Then $\|g\|_k = \lim_n \|g_n\|_k = \lim_n \|f_n\|_{k^{\nabla}} = \|f\|_{k^{\nabla}}$ and, finally,

$$
f(X, Y) = (f \mid k^{\nabla}_{(X,Y)})_{k^{\nabla}} = \lim_n (f_n \mid k^{\nabla}_{(X,Y)})_{k^{\nabla}} = \lim_n (g_n \mid k_Y - k_X)_k = (g \mid k_Y - k_X)_k = \nabla g(X, Y).
$$

Now we show that $k^{\nabla}$ defines a vector field KSD-B that is identical to a scalar field KSD-B with $k$.

Proposition B.10. Say $k$ is a scalar field kernel on $S$. If $q$ is a $p, k^{\nabla}$-integrable distribution on $S$, then,

$$
\sup_{\|f\|_{k^{\nabla}} \leq 1} E_q \mathcal{T}_p f = \sup_{\|f\|_k \leq 1} E_q \mathcal{T}_p \nabla f.
$$

Proof. Define, similarly to Proposition B.8, $\tilde{\phi}_q : \mathcal{H}_k \to \mathbb{R},\ f \mapsto E_q \mathcal{T}_p \nabla f$.
For $f \in \mathcal{H}_k$,

$$
\begin{array}{l} \tilde{\phi}_q(f) = E_{X \sim q} \sum_{YMX} T_{p, X \to Y} \left(f \mid k_Y - k_X\right)_k \\ \leq \|f\|_k E_{X \sim q} \sum_{YMX} T_{p, X \to Y} \|k_Y - k_X\|_k \\ \leq \|f\|_k E_{X \sim q} \sum_{YMX} T_{p, X \to Y} \sqrt{k^{\nabla}((X,Y), (X,Y))}. \end{array}
$$

Thus $\tilde{\phi}_q$ is a bounded linear operator on $\mathcal{H}_k$ and, by the Riesz representation theorem, is a member of $\mathcal{H}_k$. As well, $\left(\sup_{\|f\|_k \leq 1} E_q \mathcal{T}_p \nabla f\right)^2 = \|\tilde{\phi}_q\|_k^2$. Now,

$$
\begin{array}{l} (\tilde{\phi}_q \mid \tilde{\phi}_q)_k = \tilde{\phi}_q(\tilde{\phi}_q) \\ = E_{X \sim q} \sum_{YMX} T_{p, X \to Y} \left(\tilde{\phi}_q(k_Y - k_X)\right) \\ = E_{X \sim q} \sum_{YMX} T_{p, X \to Y} \\ \quad \times \left(E_{X' \sim q} \sum_{Y'MX'} T_{p, X' \to Y'} \left(\left(k_Y(Y') - k_X(Y')\right) - \left(k_Y(X') - k_X(X')\right)\right)\right) \\ = E_{X, X' \sim q} \sum_{YMX,\, Y'MX'} T_{p, X \to Y} T_{p, X' \to Y'} k^{\nabla}((X,Y), (X',Y')) \\ = \mathrm{KSD - B}_{p, k^{\nabla}}(q)^2. \end{array}
$$

# B.3.3. KERNELS WITH DISCRETE MASSES

We want the KSD-B to be faithful, i.e. $\mathrm{KSD - B}_{p,k}(q) = 0$ if and only if $p = q$. For this to hold, the set of test functions the KSD-B uses for comparing $q_n$ and $p$ – that is, $\mathcal{H}_k$ – must be sufficiently large.
In Euclidean space, one can make sure $\mathcal{H}_k$ is sufficiently large by using a kernel that is universal (Gorham & Mackey, 2017). In our infinite discrete case, we will use kernels with discrete masses, i.e. kernels whose RKHS contains delta functions (Jorgensen & Tian, 2015). In this section we introduce the property of discrete masses, generalize it to vector fields, and discuss its implications. Later we will show how the discrete mass property, together with additional conditions on the kernel's tail, ensures the KSD-B can detect non-convergence (Section B.3.4-B.3.5); we then construct practical kernels with discrete masses (Section B.4). + +A standard, scalar field kernel has discrete masses if for every point $X \in S$ , the RKHS contains a delta function at $X$ . For vector field kernels, we generalize this idea from single points to pairs of neighboring points. + +Definition B.11 (Kernel with discrete masses). (A) If $k$ is a scalar field kernel on $S$ , then we say $k$ has discrete masses if $\delta_X \in \mathcal{H}_k$ for all $X \in S$ , where $\delta_X$ is the function that is 1 at $X$ and 0 elsewhere. + +(B) If $k$ is a vector field kernel, then we say $k$ has discrete masses if $\delta_{(X,Y)} \in \mathcal{H}_k$ for all $(X,Y) \in M$ , where $\delta_{(X,Y)}$ is the vector field on $M$ that is 1 at $(X,Y)$ , -1 at $(Y,X)$ , and 0 elsewhere. + +To see that having discrete masses implies that $\mathcal{H}_k$ is large in some absolute sense, note that a scalar field kernel has discrete masses if and only if $C_C(S) \subset \mathcal{H}_k$ , and a vector field kernel $k$ has discrete masses if and only if $C_{C,vf}(M) \subset \mathcal{H}_k$ . Moreover, $\mathcal{H}_k$ is dense in any space for which $C_{C,vf}(M)$ or $C_C(S)$ are dense. 
Thus, a kernel with discrete masses is $C_0$- and $L^p$-universal, meaning that any function in $C_0$ or in $L^p$-space can be approximated arbitrarily well by a function in $\mathcal{H}_k$ (Sriperumbudur et al., 2011; Amin et al., 2023). Note also that kernels on Euclidean space cannot have discrete masses (Amin et al., 2023).

Kernels with discrete masses are guaranteed to detect non-convergence when used in a maximum mean discrepancy (MMD) (Sriperumbudur et al., 2010; Amin et al., 2023). Although the KSD-B is closely related to the MMD, the same results do not transfer directly. In the MMD, the set of test functions is the RKHS $\mathcal{H}_k$ itself, i.e. we have $\sup_{\|f\|_k \leq 1 : f \in \mathcal{H}_k} \left|E_q f - E_p f\right|$. In the KSD-B, however, the set of test functions is the Stein operator applied to the RKHS, $\sup_{\tilde{f} \in \mathcal{T}_p(\{\|f\|_k \leq 1 : f \in \mathcal{H}_k\})} \left|E_q \tilde{f} - E_p \tilde{f}\right|$.

We introduced vector field KSD-Bs as a generalization of scalar field KSD-Bs with a larger set of test functions. One way in which this notion of a larger set of test functions manifests itself is in terms of discrete masses. Consider a scalar field kernel $k$; even if this kernel has discrete masses, its corresponding vector field kernel $k^{\nabla}$ (Proposition B.10) cannot have discrete masses. In other words, even if $k$ can describe a very large set of functions on $S$, $k^{\nabla}$ cannot describe a very large set of vector field functions on $M \subset S \times S$. This is one important way in which scalar field KSD-Bs are limited.

Proposition B.12. Say $k$ is a kernel on $S$. Then $k^{\nabla}$ does not have discrete masses.

Proof. Let $X_1, X_2, X_3$ be three distinct sequences in $S$ such that $X_1 M X_2 M X_3 M X_1$.
For any $(X, Y) \in M$, calling $f = k^{\nabla}_{(X,Y)}$, we have

$$
\begin{array}{l} f(X_1, X_2) + f(X_2, X_3) + f(X_3, X_1) \\ = k^{\nabla}((X, Y), (X_1, X_2)) + k^{\nabla}((X, Y), (X_2, X_3)) + k^{\nabla}((X, Y), (X_3, X_1)) \\ = \left(k_Y - k_X \,\middle|\, (k_{X_2} - k_{X_1}) + (k_{X_3} - k_{X_2}) + (k_{X_1} - k_{X_3})\right)_k \\ = 0. \end{array}
$$

Thus, for all $f \in \mathcal{H}_{k^{\nabla}}$, $f(X_1, X_2) + f(X_2, X_3) + f(X_3, X_1) = 0$. However,

$$
\delta_{(X_1, X_2)}(X_1, X_2) + \delta_{(X_1, X_2)}(X_2, X_3) + \delta_{(X_1, X_2)}(X_3, X_1) = 1.
$$

# B.3.4. FAITHFULNESS AND TIGHT NON-CONVERGENCE

In this section we show that the KSD-B is faithful, meaning that $\mathrm{KSD - B}_{p,k}(q) = 0$ if and only if $p = q$. We also show that the KSD-B can detect tight non-convergence, meaning $\mathrm{KSD - B}_{p,k}(q_n) \nrightarrow 0$ if $q_n \nrightarrow p$ and $(q_n)_n$ is uniformly tight. Intuitively, uniform tightness says that the distributions $q_n$ do not become too diffuse or spread out as $n \to \infty$ – a scenario that may occur, for instance, if $q_n$ is the empirical distribution of samples drawn by a biased sampler that "spins out of control" (Gorham & Mackey, 2017). In the next section, we will relax the tightness assumption, guarding against such a possibility.

The basic idea behind our proof is that, if we use a kernel with discrete masses, $\mathrm{KSD - B}_{p,k}(q) = 0$ implies $E_q f = 0$ for all $f$ in $\mathcal{T}_p(C_{C,vf}(M))$ (or, for a scalar field kernel, for all $f$ in $\mathcal{T}_p \nabla(C_C(S))$).
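Before stating the lemma, faithfulness can be illustrated numerically. The sketch below uses a toy $p$ of our own making with finite connected support (all strings of length at most 3 over a two-letter alphabet), rates $T_{p,X\to Y} = \chi(p(Y)/p(X))$ with $\chi(t) = t \wedge 1$, and a simple delta-type vector field kernel (an illustrative choice, not one of the kernels constructed later). For this kernel the squared KSD-B collapses to a sum of squared net flows over unordered neighbour pairs, which vanishes at $q = p$ by detailed balance:

```python
from itertools import product

B = "ab"
MAXLEN = 3

def neighbors(x):
    """Distinct sequences one substitution, deletion, or insertion away."""
    out = set()
    for i in range(len(x)):
        out.add(x[:i] + x[i + 1:])
        for b in B:
            if b != x[i]:
                out.add(x[:i] + b + x[i + 1:])
    for i in range(len(x) + 1):
        for b in B:
            out.add(x[:i] + b + x[i:])
    return out

# toy p with finite, connected support: all strings of length <= MAXLEN
space = ["".join(t) for L in range(MAXLEN + 1) for t in product(B, repeat=L)]
w = {x: 0.3 ** len(x) for x in space}
Z = sum(w.values())
p = {x: wx / Z for x, wx in w.items()}

def T(x, y):
    # Zanella rate chi(p(y)/p(x)) with chi(t) = min(t, 1); moves off supp(p) get rate 0
    return min(p.get(y, 0.0) / p[x], 1.0)

def ksd_b_sq(q):
    # delta-type vector field kernel: +1 on an identical ordered pair, -1 on the
    # flipped pair, 0 otherwise; the quadratic form then reduces to a sum of
    # squared net flows q(X) T_{X->Y} - q(Y) T_{Y->X} over unordered edges
    total = 0.0
    for x in space:
        for y in neighbors(x):
            if y in p and x < y:   # count each unordered edge once
                total += (q.get(x, 0.0) * T(x, y) - q.get(y, 0.0) * T(y, x)) ** 2
    return total

assert ksd_b_sq(p) < 1e-20                      # KSD-B(p) = 0 by detailed balance
uniform = {x: 1.0 / len(space) for x in space}
assert ksd_b_sq(uniform) > 1e-4                 # positive for q != p
```

Swapping in any other $q$ supported on $\operatorname{supp}(p)$ gives a strictly positive value, in line with faithfulness.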
This in turn implies $q = p$ , as we show in the following lemma. A brief technical point: when $k$ is a scalar field kernel, we will rely on the fact that the only distribution stationary for the stochastic process induced by $\mathcal{L}_p$ is $p$ (Lemma B.1 (B)). Thus we must add additional integrability assumptions to ensure that this process exists, namely that $E_{p}\mathrm{flux}_{p}$ and $E_{q}\mathrm{flux}_{p}$ are finite. In the vector field case, such extra assumptions are unnecessary as we can appeal directly to Equation 1. + +Lemma B.13. Say $p$ has connected support and $q$ is a distribution on $S$ . If $E_q\mathcal{T}_p f \neq \infty$ for all $f \in C_{C,vf}(M)$ , or if $E_q\mathcal{T}_p\nabla f \neq \infty$ for all $f \in C_C(S)$ , then $\mathrm{supp}(q) \subseteq \mathrm{supp}(p)$ . If $E_q\mathcal{T}_p f = 0$ for all $f \in C_{C,vf}(M)$ , then $q = p$ . Or, if $E_q\mathrm{flux}_p < \infty$ , $E_p\mathrm{flux}_p < \infty$ and $E_q\mathcal{T}_p\nabla f = 0$ for all $f \in C_C(S)$ , then $q = p$ . + +Proof. Assume $E_q \mathcal{T}_p f$ is well defined and finite for all $f \in C_{C,vf}(M)$ . If $\operatorname{supp}(q) \not\subseteq \operatorname{supp}(p)$ then there is a $X \in \operatorname{supp}(q) \setminus \operatorname{supp}(p)$ such that there is a $YMX$ where either $q(Y) = 0$ or $Y \in \operatorname{supp}(p)$ . In either case, we have $q(Y) T_{p,Y \to X} = 0$ (recall we define $0 \times \infty = 0$ ). Thus, from the definition of $\mathcal{T}_p$ , + +$$ +E _ {q} \mathcal {T} _ {p} \delta_ {(X, Y)} = q (X) T _ {p, X \rightarrow Y} - q (Y) T _ {p, Y \rightarrow X} = \infty , +$$ + +since $T_{p,X\to Y}$ is defined to be $\infty$ when $X\notin \operatorname{supp}(p)$ . This contradicts the assumption that $E_q\mathcal{T}_pf$ is finite, so $\operatorname{supp}(q)\subseteq \operatorname{supp}(p)$ . + +The proof for the scalar field kernel proceeds analogously. 
Assume $E_q \mathcal{T}_p \nabla f \neq \infty$ for all $f \in C_C(S)$, and say $\operatorname{supp}(q) \not\subseteq \operatorname{supp}(p)$. Again pick $X \in \operatorname{supp}(q) \setminus \operatorname{supp}(p)$ such that there is a $YMX$ with $q(Y) = 0$ or $Y \in \operatorname{supp}(p)$. We have

$$
E_q \mathcal{T}_p \nabla \delta_Y = q(Y) \mathcal{T}_p \nabla \delta_Y(Y) + \sum_{ZMY} q(Z) \mathcal{T}_p \nabla \delta_Y(Z) = -q(Y) \operatorname{flux}_p(Y) + \sum_{ZMY} q(Z) T_{p, Z \to Y},
$$

and in either case the first term is finite and the second is $\infty$, a contradiction.

Now say $E_q \mathcal{T}_p f = 0$ for all $f \in C_{C,vf}(M)$. If $X \in \operatorname{supp}(q)$, $Y \in \operatorname{supp}(p)$ and $YMX$, we have from Equation 1,

$$
0 = E_q \mathcal{T}_p \delta_{(X,Y)} = q(X) T_{p, Y \to X} \left(\frac{p(Y)}{p(X)} - \frac{q(Y)}{q(X)}\right).
$$

Thus $q(Y)/q(X) = p(Y)/p(X)$. It follows that $\operatorname{supp}(q) = \operatorname{supp}(p)$ and $q(Y)/q(X) = p(Y)/p(X)$ for all $(X,Y) \in M_{p,p}$. Since the support of $p$ is connected, this implies that $q = p$.

Now if $E_p \operatorname{flux}_p < \infty$ and $E_q \mathcal{T}_p \nabla f = 0$ for all $f \in C_C(S)$, then $q = p$ by Lemma B.1 (B).

We now show the KSD-B is faithful and detects tight non-convergence, proving Proposition 5.1 in the main text.

Proposition B.14. Say $\operatorname{supp}(p)$ is connected. Assume either (a) $k$ is a vector field kernel with discrete masses or (b) $k$ is a scalar field kernel on $S$ with discrete masses and $E_p \operatorname{flux}_p < \infty$.

(A) The KSD-B is faithful. If $k$ is a scalar field kernel on $S$, assume further that $E_q \operatorname{flux}_p < \infty$. Then, $\mathrm{KSD - B}_{p,k}(q) = 0$ only if $p = q$.
(B) The KSD-B detects tight non-convergence.
Say $(q_n)_n$ is a tight sequence of distributions on $S$ satisfying $\mathrm{KSD - B}_{p,k}(q_n) \to 0$ as $n \to \infty$. If $k$ is a scalar field kernel, assume further that $\sup_n E_{q_n} \operatorname{flux}_p < \infty$. Then, $q_n \to p$ in distribution.

Proof. Assume $k$ is a vector field kernel with discrete masses. Say $\mathrm{KSD - B}_{p,k}(q_n) \to 0$ as $n \to \infty$ but $(q_n)_n$ does not converge in distribution to $p$. Since $(q_n)_n$ is tight, we can, by Prokhorov's theorem, pass to a sub-sequence $(q_{n_k})_k$ that converges in distribution to a distribution $q$ on $S$. Now, for all $f \in C_{C,vf}(M)$, $\mathcal{T}_p f$ is non-zero on only finitely many points, so $E_q \mathcal{T}_p f = \lim_k E_{q_{n_k}} \mathcal{T}_p f$. Recall that having discrete masses implies we also have $f \in \mathcal{H}_k$. Therefore, $E_{q_{n_k}} \mathcal{T}_p f \leq \|f\|_k \mathrm{KSD - B}_{p,k}(q_{n_k})$, and so $\lim_k E_{q_{n_k}} \mathcal{T}_p f = 0$. We thus have $E_q \mathcal{T}_p f = 0$, which by Lemma B.13 implies $q = p$, a contradiction.

If $k$ is a kernel on $S$ with discrete masses, then by the same logic as above we have that $E_q \mathcal{T}_p \nabla f = 0$ for all $f \in C_C(S)$. By Fatou's lemma we also have

$$
E_q \operatorname{flux}_p \leq \liminf_k E_{q_{n_k}} \operatorname{flux}_p < \infty.
$$

By Lemma B.13 we again have $q = p$, a contradiction.

If the support of $p$ is finite – for instance, if $p$ describes only sequences of fixed length – then any sequence of distributions $(q_n)_n$ that sends $\mathrm{KSD - B}_{p,k}(q_n) \to 0$ must be uniformly tight. The reason is that if $\mathrm{KSD - B}_{p,k}(q_n) \to 0$, Lemma B.13 implies we must have $\operatorname{supp}(q_n) \subseteq \operatorname{supp}(p)$ eventually. Therefore, if $\operatorname{supp}(p)$ is finite, the KSD-B can detect non-convergence in general.

# B.3.5.
DETECTING CONVERGENCE AND NON-CONVERGENCE IN GENERAL + +We now establish conditions under which the KSD-B can detect convergence and non-convergence for any sequence of distributions $(q_{n})_{n}$ , no matter whether it is tight or not. In this more general setting, we require additional assumptions on $p$ ; we must also choose our kernel $k$ more carefully. + +Conditions on $p$ For the KSD-B to detect non-convergence to $p$ for any sequence $(q_n)_n$ , the stochastic process the KSD-B uses – namely, the continuous time Markov process on sequences defined by the Zanella Stein operator – must in fact converge to $p$ quickly. Intuitively, the KSD-B is evaluating how the expectation of functions in the test set $\mathcal{H}_k$ changes under an infinitesimal step of the stochastic process. If the stochastic process is guaranteed to converge to $p$ quickly, we know the KSD-B can only become very small when $q_n$ is very near to the stationary distribution $p$ . We will therefore require that $p$ satisfies Assumption B.3. Recall that in Corollary B.7, we saw that Assumption B.3 is satisfied for the pHMM, a biological sequence model that is widely successful in practice. + +Concretely, to see that an assumption like Assumption B.3 is in fact necessary for detecting non-convergence, consider the following example. We construct a $p$ that does not satisfy Assumption B.3, along with a sequence of distributions $(q_n)_n$ that does not converge to $p$ , such that the KSD-B nonetheless goes to zero. + +Proposition B.15. Say $\alpha_{1},\alpha_{2},\ldots$ is a decreasing positive sequence such that $\chi (\alpha_L) = L^{-1}|\mathcal{B}|^{-2L}$ . For a distribution $\tilde{p}$ on $\mathbb{N}$ , for a sequence $X\in S$ with $L = |X|$ , let $p(X)\propto |\mathcal{B}|^{-L / 2}\alpha_{L + 1}\tilde{p} (L / 2)$ if $L$ is even and $p(X)\propto |\mathcal{B}|^{-(L - 1) / 2}\tilde{p} ((L - 1) / 2)$ if $L$ is odd. Say $k$ is a bounded vector field kernel. 
Then there is a sequence $(q_n)_n$ such that $\mathrm{KSD-B}_{p,k}(q_n)\to 0$ and $q_{n}$ does not converge to $p$ in distribution.

The proof is in Section B.3.6. The intuition is that the tail of $p$ is not "uniformly decreasing": $p(X)$ alternates back and forth between large and small values as $|X|$ increases. Since only sequences that differ in length by one letter (rather than two) are related by $M$, the KSD-B is fooled into detecting convergence. Note that Assumption B.3 demands that the tail of $p$ falls off not only sufficiently slowly, but also uniformly, as $\mathrm{gap}_p(L)$ depends on the difference between the stochastic process's propensity to delete and to insert a letter. Biologically, this result suggests we should be cautious when applying the KSD-B to distributions over sequences with variable-length tandem repeats: if having a complete repeat motif is much more likely than a partial repeat, it may produce a non-uniform tail for $p$, breaking Assumption B.3 (Sperling & Li, 2013).

Conditions on $k$ Next, we describe what kinds of kernels we must use to guarantee detection of non-convergence. Recall that for Euclidean KSDs to detect non-convergence, one needs kernels that are heavy tailed, such that their RKHS includes "coercive" functions that have thick tails (Gorham & Mackey, 2017). Intuitively, the situation in sequence space is analogous: we will need RKHSs that include coercive functions.

To motivate our conditions, we consider examples of scalar and vector field kernels that fail to detect non-convergence. First, we show that if we use a scalar field kernel that is bounded, the KSD-B cannot detect non-convergence.

Proposition B.16. Say $k$ is a kernel on $S$ that is bounded. Then there exists a distribution $p$ on $S$ that satisfies Assumption B.3, and a sequence of distributions $q_{n}$ that does not converge to $p$, such that $\mathrm{KSD-B}_{p,k}(q_n) \to 0$.

The proof is in Section B.3.6.
The distribution $p$ in our construction does not have particularly heavy or non-uniform tails; in fact, $p$ decays exponentially in sequence length, and satisfies $\mathrm{gap}_p(L) \sim L$. The issue therefore is the kernel, not $p$.

Next we give an example of a vector field kernel that fails to detect non-convergence. The construction is similar in idea to the example in Theorem 6 of Gorham & Mackey (2017).

Proposition B.17. Let $p(X) \propto e^{-\mu |X|} |\mathcal{B}|^{-|X|}$ for some $\mu > 0$ and $k$ be a vector field kernel such that, for $(X,Y), (X',Y') \in M$ with $|X| = |X'|$,

$$
| k ((X, Y), (X^{\prime}, Y^{\prime})) | \leq C \left(d_{H}(X, X^{\prime}) + 1\right)^{-4-\epsilon}
$$

for some $C, \epsilon > 0$ where $d_H$ is the Hamming distance. Then there is a sequence of distributions $(q_n)_n$ in $S$ such that $\mathrm{KSD-B}_{p,k}(q_n) \to 0$ but $q_n$ does not converge to $p$.

The proof is in Section B.3.6. Here again, the distribution $p$ decays exponentially in sequence length; the problem is the kernel.

Motivated by these examples, we will require that the RKHS of our kernel includes functions with thick tails. More precisely, we require that there is a $\tilde{f} \in \mathcal{H}_k$ such that $\mathcal{T}_p\tilde{f}$ increases sufficiently quickly with respect to the tail of $p$.

Assumption B.18. Say $p$ is a distribution on $S$ that satisfies Assumption B.3 with $V_{p}$. We assume either:

(A) $k$ is a vector field kernel such that there is a $\tilde{f} \in \mathcal{H}_k$ with $\lim_{|X| \to \infty} \mathcal{T}_p\tilde{f}(X) = \infty$ and

$$
\sum_{L} \frac{\inf_{|X| = L} \mathcal{T}_{p} \tilde{f}(X)}{\left(\sup_{|X| = L} \operatorname{ins}_{p}(X)\right) V_{p}(L + 1)} = \infty . \tag{10}
$$

(B) $k$ is a kernel on $S$ such that $\mathrm{supp}(p)$ is finite or there is a $\tilde{f} \in \mathcal{H}_k$ with $\lim_{|X| \to \infty} \mathcal{T}_p \nabla \tilde{f}(X) = \infty$ and

$$
\sum_{L} C_{L} \wedge C_{L + 1} = \infty \text{ where } C_{L} = \frac{\inf_{|X| = L} \mathcal{T}_{p} \nabla \tilde{f}(X)}{\left(\sup_{|X| = L} \operatorname{flux}_{p}(X)\right) V_{p}(L + 1)}. \tag{11}
$$

To understand this condition, first consider part (A). The denominator in the sum is the maximum propensity for insertions, $\sup_{|X| = L}\mathrm{ins}_p(X)$, multiplied by our Lyapunov function, $V_{p}(L + 1)$. In Section B.4.3, we will construct $\tilde{f}$ such that $\inf_{|X| = L}\mathcal{T}_p\tilde{f}(X)\gtrsim \mathrm{gap}_p(|X|)\tilde{f}(X)$. If $\mathrm{gap}_p(|X|)\gtrsim \mathrm{ins}_p(X)$, then the assumption is satisfied if $\sum_L\frac{\inf_{|X| = L}\tilde{f}(X)}{V_p(L + 1)} = \infty$. Recall that $V_{p}$ must diverge to infinity with increasing $L$ to meet Assumption B.3. Thus, part (A) in essence requires that $\tilde{f}$ has thick tails. Note also that if $p$ has thinner tails, $V_{p}$ can be smaller, and this assumption is easier to satisfy.

Part (B) is similar to (A), except (1) it uses the operator $\mathcal{T}_p\nabla$ instead of $\mathcal{T}_p$, (2) the $\mathrm{ins}_p$ term has been replaced by a $\mathrm{flux}_p$ term, which can be much larger, and (3) the sum over $L$ takes the minimum of consecutive terms. The last difference (3) implies that the sequence $C_1,C_2,\ldots$ cannot alternate between large and small values.

Assumption B.18 is analogous to the coercivity assumption used in Euclidean KSDs, which similarly requires that $\mathcal{H}_k$ includes functions $\tilde{f}$ such that $\mathcal{T}_p\tilde{f}$ increases sufficiently quickly; see Theorem 8 of Gorham & Mackey (2017) and Theorem 3.2 of Huggins & Mackey (2018).
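The role of the pairwise minimum in condition (11) can be seen with a small numerical illustration (our own, not from the paper): a hypothetical sequence $C_L$ that alternates between a divergent-series term and a rapidly vanishing one satisfies $\sum_L C_L = \infty$ yet $\sum_L C_L \wedge C_{L+1} < \infty$, so it fails the condition even though each "large" term on its own looks harmless.

```python
# Illustration with a hypothetical alternating sequence C_L (an assumption
# for demonstration, not a quantity from the paper): sum(C_L) diverges, but
# the pairwise-minimum series sum(min(C_L, C_{L+1})) from Eq. 11 converges.
N = 10_000
# C_L = 1/L on even L (harmonic-like, divergent), 2^-L on odd L (summable).
C = {L: 1.0 / L if L % 2 == 0 else 2.0 ** -L for L in range(1, N + 2)}

sum_C = sum(C[L] for L in range(1, N + 1))
sum_min = sum(min(C[L], C[L + 1]) for L in range(1, N + 1))

# Every consecutive pair (L, L+1) contains an odd index, so each minimum is
# at most 2^-(odd index); the pairwise-minimum series stays bounded even as
# sum_C keeps growing with N.
print(sum_C)    # grows like (1/2) log N
print(sum_min)  # stays below 1
```

This is exactly the failure mode ruled out by taking $C_L \wedge C_{L+1}$: an alternating sequence cannot pass by being large only at every other length.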
Non-convergence With Assumption B.3 on $p$ and Assumption B.18 on $k$, we can prove the KSD-B detects non-convergence. Our proof strategy is inspired by that of Theorem 8 of Gorham & Mackey (2017). We start by using the fact that the stochastic process converges to $p$ (Theorem B.4) to prove the following lemma, which is similar to Theorem 5 of Gorham et al. (2019).

Lemma B.19. Say $p$ is a distribution on $S$ obeying Assumption B.3. If $g \in C_b(S)$ and $g(X) = 0$ for $X \notin \operatorname{supp}(p)$, then there is a $f_g : S \to \mathbb{R}$ such that $f_g(X) = 0$ for $X \notin \operatorname{supp}(p)$, $\mathcal{T}_p \nabla f_g = g - E_p g$, and $|f_g(X)| \leq CV_p(X) \| g \|_{\infty}$ for a universal constant $C$.

Proof. Recall that by Theorem B.4, we have

$$
\| P_{t}(X) - p \|_{\mathrm{TV}} \lesssim V_{p}(X) t^{-(2 + \epsilon)} + t^{-(1 + \epsilon)}.
$$

Now since $g\in C_b(S)$,

$$
\left| P_{t} g(X) - E_{p} g \right| \leq \| g \|_{\infty} \| P_{t}(X) - p \|_{\mathrm{TV}},
$$

so $\int_0^\infty dt\,|P_tg(X) - E_pg|\leq C'\| g\|_\infty V_p(X)$ for some large enough $C^\prime >0$. Thus we can define

$$
f_{g}(X) = \int_{0}^{\infty} dt \left(E_{p} g - P_{t} g(X)\right)
$$

with $|f_g(X)| \leq C' \| g \|_{\infty} V_p(X)$. Because we have absolute integrability, and by Lemma B.1 (A), we can also write

$$
\mathcal{L}_{p} f_{g}(X) = \int_{0}^{\infty} dt \left(- \mathcal{L}_{p} P_{t} g(X)\right) = \int_{0}^{\infty} dt \left(- \frac{d}{dt} P_{t} g(X)\right) = g(X) - E_{p} g.
$$

We now show that the KSD-B can detect non-convergence for any sequence of distributions $q_{n}$, giving Theorem 5.2.

Theorem B.20. Say $p$ is a distribution on $S$ obeying Assumption B.3 and $k$ is a scalar or vector field kernel with discrete masses obeying Assumption B.18.
Say $(q_n)_n$ is a sequence of distributions on $S$. If $\mathrm{KSD-B}_{p,k}(q_n) \to 0$ then $q_n$ converges to $p$ in distribution.

Proof. First note that by Lemma B.13, $\mathrm{supp}(q_n) \subseteq \mathrm{supp}(p)$ for all $n$ eventually. Let $g \in C_b(S)$ with $g(X) = 0$ for $X \notin \mathrm{supp}(p)$ and $\| g \|_{\infty} \leq 1$, so by Lemma B.19, there is an $f_g : S \to \mathbb{R}$ such that $|f_g| \leq \tilde{C} V_p$ for some $\tilde{C} > 0$ and $\mathcal{T}_p \nabla f_g = g - E_p g$. We will show that $E_{q_n} g - E_p g = E_{q_n} \mathcal{T}_p \nabla f_g \to 0$, which will be enough to prove the theorem, since it implies that $q_n$ converges to $p$ in total variation. We will do so by picking a sequence of $h_m \in \mathcal{H}_k$ such that $\sup_n E_{q_n} | \mathcal{T}_p h_m - \mathcal{T}_p \nabla f_g | \to 0$ as $m \to \infty$. This will show that

$$
\left| E_{q_{n}} \mathcal{T}_{p} \nabla f_{g} \right| \leq \left| E_{q_{n}} \mathcal{T}_{p} h_{m} \right| + E_{q_{n}} \left| \mathcal{T}_{p} h_{m} - \mathcal{T}_{p} \nabla f_{g} \right| \leq \| h_{m} \|_{k}\, \mathrm{KSD-B}_{p,k}(q_{n}) + E_{q_{n}} \left| \mathcal{T}_{p} h_{m} - \mathcal{T}_{p} \nabla f_{g} \right|,
$$

which goes to zero as $n\to \infty$ and as $m\rightarrow \infty$ slowly enough.

First assume $k$ is a vector field kernel with discrete masses. For a sequence $v = (v_{1}, v_{2}, \ldots)$ of numbers $0 \leq v_{n} \leq 1$ such that $v_{n}$ is eventually equal to 0, define the vector field on $M$ given by $h_{v}(X,Y) = v_{|X| \vee |Y|} \nabla f_{g}(X,Y)$. Since $v$ is eventually zero, $h_{v}(X,Y) \neq 0$ for only finitely many $(X, Y) \in M$; since $k$ has discrete masses, $h_{v} \in \mathcal{H}_{k}$.
As well, + +$$ +\begin{array}{l} \mathcal {T} _ {p} h _ {v} (X) = \sum_ {Y M X} T _ {p, X \rightarrow Y} v _ {| X | \vee | Y |} \nabla f _ {g} (X, Y) \\ = v _ {| X |} \sum_ {Y M X \mid | Y | \leq | X |} T _ {p, X \rightarrow Y} \nabla f _ {g} (X, Y) + v _ {| X | + 1} \sum_ {Y M X \mid | Y | = | X | + 1} T _ {p, X \rightarrow Y} \nabla f _ {g} (X, Y) \\ = v _ {| X |} \mathcal {T} _ {p} \nabla f _ {g} (X) + (v _ {| X | + 1} - v _ {| X |}) \sum_ {Y M X, | Y | = | X | + 1} T _ {p, X \to Y} \nabla f _ {g} (X, Y). \\ \end{array} +$$ + +The first term is a better and better approximation of $\mathcal{T}_p\nabla f_g$ as $v\to 1$ . We will bound the second term using Assumption B.18 (A) and the fact + +$$ +\left| \sum_ {Y M X, | Y | = | X | + 1} T _ {p, X \rightarrow Y} \nabla f _ {g} (X, Y) \right| \leq 2 \tilde {C} V _ {p} (| X | + 1) \operatorname {i n s} _ {p} (X). +$$ + +Assume $k$ satisfies Assumption B.18 (A). Let $\tilde{f} \in \mathcal{H}_k$ satisfy Equation 10 and have $\mathcal{T}_p\tilde{f}(X) \to \infty$ as $|X| \to \infty$ . There is thus a $\zeta \in \mathbb{R}$ such that $\mathcal{T}_p\tilde{f}(X) + \zeta > 0$ for all $X \in S$ . 
Now call $\Delta v_L = |v_{L+1} - v_L|$ and $R_L := \frac{V_p(L+1)\sup_{|X| = L}\operatorname{ins}_p(X)}{\inf_{|X| = L}\mathcal{T}_p\tilde{f}(X) + \zeta}$ , so, + +$$ +\begin{array}{l} E _ {q _ {n}} | \mathcal {T} _ {p} h _ {v} - \mathcal {T} _ {p} \nabla f _ {g} | \leq E _ {q _ {n}} \left[ (1 - v _ {| X |}) | \mathcal {T} _ {p} \nabla f _ {g} | \right] + E _ {q _ {n}} \left[ \Delta v _ {| X |} 2 \tilde {C} V _ {p} (| X | + 1) \mathrm {i n s} _ {p} (X) \right] \\ \leq 2 \| g \| _ {\infty} E _ {q _ {n}} [ 1 - v _ {| X |} ] + 2 \tilde {C} E _ {q _ {n}} \left[ \left(\mathcal {T} _ {p} \tilde {f} + \zeta\right) \Delta v _ {| X |} \frac {V _ {p} (| X | + 1) \mathrm {i n s} _ {p} (X)}{\mathcal {T} _ {p} \tilde {f} + \zeta} \right] \\ \leq 2 E _ {q _ {n}} \left[ 1 - v _ {| X |} \right] + 2 \tilde {C} E _ {q _ {n}} \left[ \mathcal {T} _ {p} \tilde {f} + \zeta \right] \sup _ {L} (\Delta v _ {L} R _ {L}) \\ \leq 2 E _ {q _ {n}} \left[ \mathcal {T} _ {p} \tilde {f} + \zeta \right] \sup _ {L} \frac {1 - v _ {L}}{\inf _ {| X | = L} \mathcal {T} _ {p} \tilde {f} (X) + \zeta} + 2 \tilde {C} E _ {q _ {n}} \left[ \mathcal {T} _ {p} \tilde {f} + \zeta \right] \sup _ {L} (\Delta v _ {L} R _ {L}) \\ = E _ {q _ {n}} \left[ \mathcal {T} _ {p} \tilde {f} + \zeta \right] \left(2 \sup _ {L} \frac {1 - v _ {L}}{\inf _ {| X | = L} \mathcal {T} _ {p} \tilde {f} (X) + \zeta} + 2 \tilde {C} \sup _ {L} (\Delta v _ {L} R _ {L})\right) \\ \leq \left(\| \tilde {f} \| _ {k} \mathrm {K S D - B} _ {p, k} (q _ {n}) + \zeta\right) \left(2 \sup _ {L} \frac {1 - v _ {L}}{\inf _ {| X | = L} \mathcal {T} _ {p} \tilde {f} (X) + \zeta} + 2 \tilde {C} \sup _ {L} (\Delta v _ {L} R _ {L})\right) \\ \lesssim \sup _ {L} \frac {1 - v _ {L}}{\inf _ {| X | = L} \mathcal {T} _ {p} \tilde {f} (X) + \zeta} + \sup _ {L} \left(\Delta v _ {L} R _ {L}\right). \\ \end{array} +$$ + +By assumption $\mathcal{T}_p\tilde{f} +\zeta \to \infty$ and $\sum_{L}R_{L}^{-1} = \infty$ . 
For $\epsilon, L' > 0$ define $v_{L}^{\epsilon,L'} = 1$ for $L \leq L'$ and $\Delta v_{L}^{\epsilon,L'} = \epsilon R_{L}^{-1}\wedge v_{L}^{\epsilon,L'}$ for $L \geq L'$. By assumption $\sum_{L}R_{L}^{-1} = \infty$ so $v^{\epsilon,L'}$ is eventually 0. We thus have $\sup_L\left(\Delta v_L^{\epsilon ,L'}R_L\right) \leq \epsilon$ and $\sup_L\frac{1 - v_L^{\epsilon,L'}}{\inf_{|X| = L}\mathcal{T}_p\tilde{f}(X) + \zeta} \leq \frac{1}{\inf_{|X| > L'}\mathcal{T}_p\tilde{f}(X) + \zeta}$. By our assumption that $\mathcal{T}_p\tilde{f} \to \infty$, both of these quantities go to 0 as $L' \to \infty$ and $\epsilon \to 0$.

Now assume $k$ is a kernel on $S$ with discrete masses obeying Assumption B.18 (B). The case that $\operatorname{supp}(p)$ is finite was shown in Proposition 5.1 so assume $\operatorname{supp}(p)$ is infinite. The proof is very similar. This time, for a sequence $v = (v_{1}, v_{2}, \ldots)$ of decreasing numbers $0 \leq v_{n} \leq 1$ such that $v_{n}$ is eventually equal to 0, define the function on $S$, $h_{v}(X) = v_{|X|} f_{g}(X)$. Since $v$ is eventually 0, and since $k$ has discrete masses, $h_{v} \in \mathcal{H}_{k}$. Then, by similar reasoning to the previous case,

$$
\begin{array}{l} \mathcal{T}_{p} \nabla h_{v}(X) = v_{|X|} \mathcal{T}_{p} \nabla f_{g}(X) + (v_{|X| + 1} - v_{|X|}) \sum_{Y M X, |Y| = |X| + 1} T_{p, X \to Y} \nabla f_{g}(X, Y) \\ + \left(v_{|X| - 1} - v_{|X|}\right) \sum_{Y M X, |Y| = |X| - 1} T_{p, X \to Y} \nabla f_{g}(X, Y). \\ \end{array}
$$

Note that since $V_{p}$ is increasing, the sum of the latter two terms is upper bounded by

$$
2 \tilde{C} \tilde{\Delta} v_{|X|} V_{p}(|X| + 1) \mathrm{flux}_{p}(X),
$$

defining $\tilde{\Delta} v_{L} = |v_{L + 1} - v_{L}| \vee |v_{L} - v_{L - 1}|$.
Now call $\tilde{R}_{L} := \frac{V_{p}(L + 1) \sup_{|X| = L} \mathrm{flux}_{p}(X)}{\inf_{|X| = L} \mathcal{T}_{p} \nabla \tilde{f}(X) + \zeta}$, so,

$$
\begin{array}{l} E_{q_{n}} | \mathcal{T}_{p} \nabla h_{v} - \mathcal{T}_{p} \nabla f_{g} | \leq E_{q_{n}} \left[ (1 - v_{|X|}) | \mathcal{T}_{p} \nabla f_{g} | \right] + E_{q_{n}} \left[ \tilde{\Delta} v_{|X|} 2 \tilde{C} V_{p}(|X| + 1) \mathrm{flux}_{p}(X) \right] \\ \leq E_{q_{n}} \left[ \mathcal{T}_{p} \nabla \tilde{f} + \zeta \right] \left(2 \sup_{L} \frac{1 - v_{L}}{\inf_{|X| = L} \mathcal{T}_{p} \nabla \tilde{f}(X) + \zeta} + 2 \tilde{C} \sup_{L} \left(\tilde{\Delta} v_{L} \tilde{R}_{L}\right)\right) \\ \leq \left(\| \tilde{f} \|_{k} \mathrm{KSD-B}_{p,k}(q_{n}) + \zeta\right) \left(2 \sup_{L} \frac{1 - v_{L}}{\inf_{|X| = L} \mathcal{T}_{p} \nabla \tilde{f}(X) + \zeta} + 2 \tilde{C} \sup_{L} \left(\tilde{\Delta} v_{L} \tilde{R}_{L}\right)\right) \\ \lesssim \sup_{L} \frac{1 - v_{L}}{\inf_{|X| = L} \mathcal{T}_{p} \nabla \tilde{f}(X) + \zeta} + \sup_{L} \left(\tilde{\Delta} v_{L} \tilde{R}_{L}\right). \\ \end{array}
$$

By assumption $\mathcal{T}_p\nabla\tilde{f} + \zeta \to \infty$ and $\sum_{L}\tilde{R}_{L}^{-1}\wedge \tilde{R}_{L + 1}^{-1} = \infty$. For $\epsilon, L^{\prime} > 0$ define $v_{L}^{\epsilon ,L^{\prime}} = 1$ for $L\leq L^{\prime}$ and $v_{L}^{\epsilon ,L^{\prime}} = v_{L - 1}^{\epsilon ,L^{\prime}} - \epsilon \tilde{R}_{L - 1}^{-1}\wedge \tilde{R}_{L}^{-1}\wedge v_{L - 1}^{\epsilon ,L^{\prime}}$ for $L > L^{\prime}$. Thus $\tilde{\Delta} v_{L}\leq \epsilon \tilde{R}_{L}^{-1}$. By assumption $\sum_{L}\tilde{R}_{L}^{-1}\wedge \tilde{R}_{L + 1}^{-1} = \infty$ so $v^{\epsilon ,L'}$ is eventually 0.
We thus have $\sup_L\left(\tilde{\Delta} v_L^{\epsilon ,L'}\tilde{R}_L\right) \leq \epsilon$ and $\sup_L\frac{1 - v_L^{\epsilon,L'}}{\inf_{|X| = L}\mathcal{T}_p\nabla\tilde{f}(X) + \zeta}\leq \frac{1}{\inf_{|X| > L'}\mathcal{T}_p\nabla\tilde{f}(X) + \zeta}$. By our assumption that $\mathcal{T}_p\nabla \tilde{f}\to \infty$, both of these quantities go to 0 as $L^{\prime}\rightarrow \infty$ and $\epsilon \rightarrow 0$.

Convergence Finally, we prove that the KSD-B can detect convergence, giving Proposition 5.3.

Proposition B.21. Say $k$ is a vector field kernel and $p, q_1, q_2, \ldots$ are $p, k$-integrable distributions on $S$. Call $A(X) = \sum_{Y M X} T_{p,X \to Y} \sqrt{k((X,Y), (X,Y))}$. Then,

$$
\sum_{X} | p(X) - q_{n}(X) | A(X) \rightarrow 0 \Longrightarrow \mathrm{KSD-B}_{p,k}\left(q_{n}\right)\rightarrow 0 .
$$

Proof. Say $f \in \mathcal{H}_k$. Now,

$$
| E_{p} \mathcal{T}_{p} f - E_{q_{n}} \mathcal{T}_{p} f | \leq \| f \|_{k} \sum_{X} | p(X) - q_{n}(X) | \sum_{Y M X} T_{p, X \to Y} \sqrt{k ((X, Y), (X, Y))},
$$

which proves the result.

One implication of this result is that when the kernel is large, in the sense that $A(X)$ is big, convergence will be harder to detect. To ensure that the KSD-B can reliably detect convergence, we will want to choose kernels that are not very large.

# B.3.6. PROOFS OF EXAMPLES

Here, we give the proofs of the examples discussed in the previous section.

Proposition B.22. (Proposition B.15) Say $\alpha_{1},\alpha_{2},\ldots$ is a decreasing positive sequence such that $\chi (\alpha_L) = L^{-1}|\mathcal{B}|^{-2L}$. For a distribution $\tilde{p}$ on $\mathbb{N}$, for a sequence $X\in S$ with $L = |X|$, let $p(X)\propto |\mathcal{B}|^{-L / 2}\alpha_{L + 1}\tilde{p} (L / 2)$ if $L$ is even and $p(X)\propto |\mathcal{B}|^{-(L - 1) / 2}\tilde{p} ((L - 1) / 2)$ if $L$ is odd. Say $k$ is a bounded vector field kernel.
Then there is a sequence $(q_n)_n$ such that $\mathrm{KSD - B}_{p,k}(q_n)\to 0$ and $q_{n}$ does not converge to $p$ in distribution. + +Proof. We will consider a sequence of distributions indexed by even sequence lengths $L$ , that is, $q_{L}$ for $L \in \{2, 4, 6, \ldots\}$ . Define $\tilde{q}_{L} = p\mathbb{1}_{|X| > L}$ and $q_{L} = \tilde{q}_{L} / \sum_{X} \tilde{q}_{L}(X)$ . Call $q_{L}(L') = q_{L}(X)$ for any $|X| = L'$ . Call $N_{L} = \{(X,Y) \in M \mid |X| = L + 1, |Y| = L\}$ . The terms of the sum in Equation 8 are non-zero only for $(X,Y) \in N_{L}$ . Thus, + +$$ +\begin{array}{l} \mathrm {K S D - B} _ {p, k} \left(q _ {L}\right) ^ {2} = \left(\sup _ {\| f \| _ {k} \leq 1} \sum_ {(X, Y) \in N _ {L}} q _ {L} (X) T _ {p, X \rightarrow Y} f (X, Y)\right) ^ {2} \\ = q _ {L} (L + 1) ^ {2} \left(\sup _ {\| f \| _ {k} \leq 1} \left(f \Bigg | \sum_ {(X, Y) \in N _ {L}} T _ {p, X \rightarrow Y} k _ {(X, Y)}\right) _ {k}\right) ^ {2} \\ = q _ {L} (L + 1) ^ {2} \left\| \sum_ {(X, Y) \in N _ {L}} T _ {p, X \rightarrow Y} k _ {(X, Y)} \right\| _ {k} ^ {2} \\ = q _ {L} (L + 1) ^ {2} \sum_ {(X, Y) \in N _ {L}} \sum_ {(X ^ {\prime}, Y ^ {\prime}) \in N _ {L}} T _ {p, X \to Y} T _ {p, X ^ {\prime} \to Y ^ {\prime}} k ((X, Y), (X ^ {\prime}, Y ^ {\prime})). \\ \end{array} +$$ + +If $(X,Y)\in N_L$ , then $T_{p,X\to Y}\leq (L + 1)\chi (\alpha_{L + 1}) = |\mathcal{B}|^{-2(L + 1)}$ ; this comes from the fact that the maximum number of mutations which can take $X$ to $Y$ is $L + 1$ (corresponding to the case where $X = L\times A$ and $Y = (L + 1)\times A$ , where $A\in \mathcal{B}$ , i.e. $X$ and $Y$ are homopolymers). Thus, if $k$ is bounded by a number $C > 0$ , + +$$ +\mathrm {K S D - B} _ {p, k} (q _ {L}) ^ {2} \leq q _ {L} (L + 1) ^ {2} | \mathcal {B} | ^ {2 (L + 1)} | \mathcal {B} | ^ {- 4 (L + 1)} C \leq | \mathcal {B} | ^ {- 2 (L + 1)} C \to 0. +$$ + +Proposition B.23. 
(Proposition B.16) Say $k$ is a kernel on $S$ that is bounded, such that $k(X,Y) \leq N < \infty$ for all $X,Y \in S$. Then there exists a distribution $p$ on $S$ that satisfies Assumption B.3, and a sequence of distributions $q_{n}$ that does not converge to $p$, such that $\mathrm{KSD-B}_{p,k}(q_n) \to 0$.

Proof. Note that since $k$ is bounded by $N$, we have for all $X \in S$ and $f \in \{f \in \mathcal{H}_k : \| f \|_k \leq 1\}$, $f(X) = (f \mid k_X)_k \leq \| f \|_k \sqrt{k(X, X)} \leq \sqrt{N}$. Thus $\| f \|_{\infty} \leq \sqrt{N}$ for all $f \in \{f \in \mathcal{H}_k : \| f \|_k \leq 1\}$. So, to prove the result, it is sufficient to find $p$ and $q_n$ such that $\sup_{\| f \|_{\infty} \leq 1} E_{q_n} \mathcal{T}_p \nabla f \to 0$.

Let $p$ be the distribution supported on $\{\emptyset, A, AA, AAA, \ldots\}$ for $A \in \mathcal{B}$ with $p(L) = p(L \times A) = 2^{-(L + 1)}$ for any number $L$. Note that this distribution satisfies $\mathrm{gap}_p(L) \sim L$ (as discussed in Section B.2.5). As a consequence, Assumption B.3 is met.

Now, define $r = \chi\left(\frac{p(L)}{p(L - 1)}\right) = \chi(1/2)$ for any $L$ and $\tilde{r} = \chi\left(\frac{p(L - 1)}{p(L)}\right) = \chi(2) = 2\chi(1/2) = 2r > r$ for any $L$. Let $\tilde{r}_0 = 0$. Say $q$ is a distribution supported on finitely many elements of $\{\emptyset, A, AA, AAA, \ldots\}$, and $f$ is a function on $S$ with $\|f\|_{\infty} \leq 1$. Then,

$$
\begin{array}{l} E_{q} \mathcal{T}_{p} \nabla f = \sum_{L = 0}^{\infty} q(L) \left((L + 1) r (f(L + 1) - f(L)) + L \tilde{r} (f(L - 1) - f(L))\right) \\ = \sum_{L = 0}^{\infty} f(L) \left(q(L + 1)(L + 1) \tilde{r} + q(L - 1) L r - q(L) \left(L \tilde{r} + (L + 1) r\right)\right) \\ = \sum_{L = 0}^{\infty} f(L) \left(q(L + 1)(L + 1) \tilde{r} - q(L) L \tilde{r} + q(L - 1) L r - q(L)(L + 1) r\right). \\ \end{array}
$$

Let $\tilde{q}_{m,n}(L) = L^{-1}$ for $m \leq L < n$ and $\tilde{q}_{m,n}(L) = 0$ for $L \geq n$ and $L < m$. Now let $q_{m,n} = \tilde{q}_{m,n} / Z_{m,n}$ where $Z_{m,n} = \sum_{L=m}^{n-1} L^{-1}$, which goes to $\infty$ as $n \to \infty$. Thus,

$$
\begin{array}{l} E_{q_{m,n}} \mathcal{T}_{p} \nabla f = f(m - 1) Z_{m,n}^{-1} \tilde{r} - f(n - 1) Z_{m,n}^{-1} \tilde{r} \\ + \sum_{L = m + 1}^{n - 1} f(L) \left(q_{m,n}(L - 1) L r - q_{m,n}(L)(L + 1) r\right) \\ - f(m) q_{m,n}(m)(m + 1) r + f(n) q_{m,n}(n - 1) n r \\ = Z_{m,n}^{-1} \left(\tilde{r} f(m - 1) - \tilde{r} f(n - 1) - r f(m) \frac{m + 1}{m} + r f(n) \frac{n}{n - 1}\right) \\ + \sum_{L = m + 1}^{n - 1} q_{m,n}(L - 1) f(L) r \left(L - \frac{L - 1}{L}(L + 1)\right) \tag{12} \\ \leq 6 \tilde{r} Z_{m,n}^{-1} + r \sum_{L = m + 1}^{n - 1} q_{m,n}(L - 1) L \left| 1 - \frac{L^{2} - 1}{L^{2}} \right| \\ = 6 \tilde{r} Z_{m,n}^{-1} + r \sum_{L = m + 1}^{n - 1} q_{m,n}(L - 1) L^{-1} \\ \leq 6 \tilde{r} Z_{m,n}^{-1} + r (m + 1)^{-1}. \\ \end{array}
$$

This expression goes to 0 as $n, m \to \infty$.

Proposition B.24. (Proposition B.17) Let $p(X)\propto e^{-\mu |X|}|\mathcal{B}|^{-|X|}$ for some $\mu >0$ and $k$ be a vector field kernel such that, for $(X,Y),(X^{\prime},Y^{\prime})\in M$ with $|X| = |X^{\prime}|$,

$$
| k ((X, Y), (X^{\prime}, Y^{\prime})) | \leq C (d_{H}(X, X^{\prime}) + 1)^{-4-\epsilon}
$$

for some $C, \epsilon > 0$ where $d_H$ is the Hamming distance. Then there is a sequence of distributions $(q_n)_n$ in $S$ such that $\mathrm{KSD-B}_{p,k}(q_n) \to 0$ but $q_n$ does not converge to $p$.

Proof. First note that for $(X,Y)\in M$, calling $c = \chi (e^{\mu}|\mathcal{B}|)$, $T_{p,X\to Y}\leq c(|X| + 1)$.
For distinct points $X_{1},\ldots ,X_{N}\in \mathcal{B}^{L}$ let $q = \frac{1}{N}\sum_{n = 1}^{N}\delta_{X_n}$ . Call $R = \min_{n\neq m}d_H(X_n,X_m) > 0$ . Then by equation 4, + +$$ +\begin{array}{l} \mathrm {K S D - B} _ {p, k} (q) ^ {2} \leq \frac {c ^ {2} (L + 1) ^ {2}}{N ^ {2}} \sum_ {n = 1} ^ {N} \sum_ {m = 1} ^ {N} \sum_ {Y M X _ {n}} \sum_ {Y ^ {\prime} M X _ {m}} | k ((X _ {n}, Y), (X _ {m}, Y ^ {\prime})) | \\ = \frac {c ^ {2} (L + 1) ^ {2}}{N ^ {2}} \left(\sum_ {n = 1} ^ {N} \sum_ {Y M X _ {n}} \sum_ {Y ^ {\prime} M X _ {n}} | k ((X _ {n}, Y), (X _ {n}, Y ^ {\prime})) | \right. \\ \left. + \sum_ {n \neq m} \sum_ {Y M X _ {n}} \sum_ {Y ^ {\prime} M X _ {m}} | k ((X _ {n}, Y), (X _ {m}, Y ^ {\prime})) |\right) \\ \lesssim \frac {(L + 1) ^ {2}}{N ^ {2}} \left(N L ^ {2} + N ^ {2} L ^ {2} R ^ {- (4 + \epsilon)}\right) \\ = O \left(L ^ {4} \left(N ^ {- 1} + R ^ {- (4 + \epsilon)}\right)\right). \\ \end{array} +$$ + +We now set $R_{L} = L(1 - |\mathcal{B}|^{-1}) / 20$ and pick, for each $L$ , $X_{1},\ldots ,X_{N_{L}}\in \mathcal{B}^{L}$ to be the largest set of sequences such that $\min_{n\neq m}d_H(X_n,X_m) > R_L$ . We will show $L^4 N_L^{-1}\to 0$ , so that we will have $L^4\left(N_L^{-1} + R_L^{-(4 + \epsilon)}\right)\to 0$ and the proof will be complete. + +For $X \in \mathcal{B}^L$ , $r > 0$ , define the Hamming ball $B(X, r) = \{Y \in \mathcal{B}^L \mid d_H(X, Y) \leq r\}$ . Thus $\mathcal{B}^L = \cup_n B(X_n, R_L)$ , otherwise we could add another sequence to $(X_n)_n$ . Thus $|\mathcal{B}|^L \leq \sum_n |B(X_n, R_L)| = N_L |B(X_1, R_L)|$ . Now, consider the Hamming distance from $X_1$ to a sequence drawn uniformly at random from $\mathcal{B}^L$ . This distance is a random variable $Z$ distributed as a Binomial with parameters $L$ and $1 - |\mathcal{B}|^{-1}$ . Then $N_L^{-1} \leq |B(X_1, R_L)| / |\mathcal{B}|^L = P(Z \leq R_L)$ . 
On the other hand, calling $\gamma = 1 - |\mathcal{B}|^{-1}$ and $t = -\log \left(\frac{R_L}{L\gamma}\right) = \log 20$, and using the moment generating function of the Binomial distribution,

$$
\begin{array}{l} P(Z \leq R_{L}) = P \left(e^{-tZ} \geq e^{-tR_{L}}\right) \\ \leq e^{tR_{L}} E e^{-tZ} \\ = e^{tR_{L}} \left(\gamma e^{-t} + (1 - \gamma)\right)^{L} \\ = e^{tR_{L}} \left(1 + \gamma (e^{-t} - 1)\right)^{L} \\ \leq \exp (tR_{L} + L\gamma (e^{-t} - 1)) \\ = \exp \left(R_{L}(1 + t) - L\gamma\right) \\ = \exp \left(- L\gamma \left(1 - \frac{1}{20}(1 + \log 20)\right)\right) \\ \leq \exp \left(- \frac{1}{2} L\gamma\right). \\ \end{array}
$$

Thus, $N_{L}\geq e^{\frac{1}{2} L(1 - |\mathcal{B}|^{-1})}$, so that $L^4 N_L^{-1}\to 0$ as $L\rightarrow \infty$.

# B.3.7. EFFICIENT APPROXIMATE KERNELIZED STEIN DISCREPANCIES

In this section we prove Proposition 6.1, which establishes an efficient approximation for the KSD-B.

Proposition B.25. (Proposition 6.1) Let $p$ be a distribution on $S$, and $(q_n)_n$ a sequence of distributions on $S$ with $\sup_n E_{q_n} \, \mathrm{flux}_p < \infty$. Say $k$ is a bounded vector field kernel. Let $(N_{n,X})_{X \in S,n}$ be a family of numbers. For each $X, n$, let $(Y_{X,m}^{n})_{m = 1}^{N_{n,X}}$ be a set of iid samples, each drawn by taking a single step of a Markov chain with the transition matrix $K_{X\to Y}$ initialized at $X$. Define the approximate KSD-B,

$$
\widehat{\mathrm{KSD-B}}_{p,k}^{n}(q_{n})^{2} = E_{X, X^{\prime} \sim q_{n}} \mathrm{flux}_{p}(X) \mathrm{flux}_{p}(X^{\prime}) \frac{1}{N_{n,X} N_{n,X^{\prime}}} \sum_{m, m^{\prime}} k ((X, Y_{X,m}^{n}), (X^{\prime}, Y_{X^{\prime},m^{\prime}}^{n})).
$$

If $N_{n,X} / (\log (n) + |X|) \to \infty$ then almost surely

$$
\left| \mathrm{KSD-B}_{p,k}(q_{n}) - \widehat{\mathrm{KSD-B}}_{p,k}^{n}(q_{n}) \right|\rightarrow 0 .
$$

Proof. Call $p(Y|X) = K_{X \to Y}$. Sample $(Y_{X,m}^{n})_{m=1}^{N_{n,X}}$ iid from $p(Y|X)$ for all $X, n$. Call $\hat{p}_n(Y|X) = \frac{1}{N_{n,X}} \sum_{m=1}^{N_{n,X}} \delta_{Y_{X,m}^n}$. We have

$$
E_{X \sim q_{n}} \operatorname{flux}_{p}(X) E_{\hat{p}_{n}(Y|X)} \sqrt{k((X, Y), (X, Y))} < \infty
$$

by assumption since $k$ is bounded. Thus the functional $\phi_n: \mathcal{H}_k \to \mathbb{R}$, $f \mapsto E_{X \sim q_n} \mathrm{flux}_p(X) E_{\hat{p}_n(Y|X)} f(X, Y)$ is bounded, and is thus represented by an element of $\mathcal{H}_k$ by the Riesz representation theorem. Applying the definition of $K_{X \to Y}$, we can derive, as in the proof of Proposition B.8,

$$
\widehat{\mathrm{KSD-B}}_{p,k}^{n}(q_{n}) = \| \phi_{n} \|_{k} = \sup_{\| f \|_{k} \leq 1} E_{X \sim q_{n}} \operatorname{flux}_{p}(X) E_{\hat{p}_{n}(Y|X)} f(X, Y).
$$

We will show that $\sup_{X}\| p(Y|X) - \hat{p}_n(Y|X)\|_{\mathrm{TV}}\to 0$ as $n\to \infty$ almost surely below; for now, assume this is the case. Say $k$ is bounded by a number $c^2$, so $\| f\|_k\leq 1$ implies $\| f\|_{\infty}\leq \sup_{(X,Y)\in M}|(f \mid k_{(X,Y)})_k|\leq c$.
Thus, using the definition of the total variation metric as an integral probability metric over bounded functions, + +$$ +\begin{array}{l} \left| \mathrm {K S D - B} _ {p, k} (q _ {n}) - \widehat {\mathrm {K S D - B}} _ {p, k} ^ {n} (q _ {n}) \right| \\ \leq \sup _ {\| f \| _ {k} \leq 1} E _ {X \sim q _ {n}} \operatorname {f l u x} _ {p} (X) \left| E _ {p (Y | X)} f (X, Y) - E _ {\hat {p} _ {n} (Y | X)} f (X, Y) \right| \\ \leq c E _ {X \sim q _ {n}} \operatorname {f l u x} _ {p} (X) \| p (Y | X) - \hat {p} _ {n} (Y | X) \| _ {\mathrm {T V}} \\ \leq c \left(E _ {X \sim q _ {n}} \operatorname {f l u x} _ {p} (X)\right) \sup _ {X} \| p (Y | X) - \hat {p} _ {n} (Y | X) \| _ {\mathrm {T V}} \\ \rightarrow 0. \\ \end{array} +$$ + +Now we will show that $\sup_X \| p(Y|X) - \hat{p}_n(Y|X)\|_{\mathrm{TV}} \to 0$ almost surely. Let $X \in \operatorname{supp}(p)$ and call $M|_X = \{Y \in S \mid YMX\}$ . Call $\mathcal{F}_X = \{h : M|_X \to \{-1, 1\}\}$ so + +$$ +\| p(Y|X) - \hat{p}_{n}(Y|X)\|_{\mathrm{TV}} = \frac{1}{2}\sum_{Y\in M|_{X}}|p(Y|X) - \hat{p}_{n}(Y|X)| = \frac{1}{2}\max_{h\in \mathcal{F}_{X}}E_{p(Y|X)}h(Y) - E_{\hat{p}_{n}(Y|X)}h(Y). +$$ + +Note for each $h \in \mathcal{F}_X$ , $E_{\hat{p}_n(Y|X)}[h(Y) - E_{p(Y'|X)}h(Y')]$ is an average of $N_{n,X}$ mean-zero iid random variables that take values $[-2, 2]$ and are therefore sub-Gaussian; the average is thus also a sub-Gaussian random variable, with variance-proxy $C' / \sqrt{N_{n,X}}$ for some $C'$ (Vershynin, 2020, Prop. 2.6.1). Then by a union bound, since $|\mathcal{F}_X| \leq 2^{C''|X|}$ for some $C'' > 0$ , + +$$ +P \left(\| p (Y | X) - \hat {p} _ {n} (Y | X) \| _ {\mathrm {T V}} > \epsilon_ {n}\right) \leq e ^ {C _ {1} | X |} \exp \left(- C _ {2} N _ {n, X} \epsilon_ {n} ^ {2}\right) +$$ + +for some constants $C_1, C_2 > 0$ . Pick a sequence of positive numbers $\epsilon_1, \epsilon_2, \ldots$ and choose $N_{n,X}$ such that $N_{n,X} / (|X| + (\log n)) \to \infty$ . 
If $\epsilon_n$ decreases slowly enough, then eventually $C_2 N_{n,X} \epsilon_n^2 \geq (C_1 + \log |\mathcal{B}| + 1)|X| + 2 \log n$, so,

$$
\begin{array}{l} \sum_{X, n} P \left(\| p(Y|X) - \hat{p}_{n}(Y|X) \|_{\mathrm{TV}} > \epsilon_{n}\right) \leq \sum_{n} \sum_{X} e^{C_{1}|X|} \exp \left(- C_{2} N_{n,X} \epsilon_{n}^{2}\right) \\ \lesssim \sum_{n} \sum_{L} \sum_{|X| = L} |\mathcal{B}|^{-L} e^{-L} \exp (-2 \log n) \\ \lesssim \sum_{n} n^{-2} < \infty . \\ \end{array}
$$

By the Borel-Cantelli lemma, the probability that $\| p(Y|X) - \hat{p}_n(Y|X)\|_{\mathrm{TV}} > \epsilon_n$ for infinitely many $X, n$ is 0. Thus, with probability 1, as $n \to \infty$,

$$
\sup_{X} \| p(Y|X) - \hat{p}_{n}(Y|X) \|_{\mathrm{TV}} \rightarrow 0 .
$$

# B.4. Designing Kernels for the KSD-B

In this section we design practical kernels for the KSD-B. Our theoretical results suggest that we want kernels that (1) have discrete masses, (2) have thick tails (Assumption B.18) and (3) are not very large (Proposition B.21). We also want our kernels to capture a sensible biological notion of sequence similarity, so that sequences which are closer together, as judged by the kernel, are more likely to be functionally similar and evolutionarily related.

Section B.4.1 reviews existing results on scalar field sequence kernels that (a) have discrete masses and (b) capture biological notions of sequence similarity. Section B.4.2 develops vector field kernels with the same virtues. Section B.4.3 adds thick tails; it relies in part on a combinatorial analysis of alignment kernels, which is deferred to Section B.4.5. Section B.4.4 confirms that the proposed kernels are not too large.

# B.4.1.
SCALAR FIELD BIOLOGICAL SEQUENCE KERNELS WITH DISCRETE MASSES

In this section we review some scalar field kernels for biological sequences with discrete masses, proposed in Amin et al. (2023).

Position-wise Comparison Kernels We start by introducing kernels that compare sequences position-by-position. Such kernels have strong biological justification for many problems; for instance, amino acids at the same position in related proteins are likely to have similar biological functions. A standard position-wise measure of sequence similarity is the Hamming distance $d_{H}(X,Y)$, which counts the number of positions $l$ at which $X_{(l)}$ does not match $Y_{(l)}$; if $Y$ is longer than $X$, positions past the length of $X$ are counted as mismatches (i.e. we treat each sequence as ending with an infinite tail of stop symbols). We consider two kernels that rely on the Hamming distance to measure sequence similarity: the exponential Hamming kernel (Exp-H),

$$
k_{\mathrm{Exp-H}}(X, Y) = \exp (- \lambda d_{H}(X, Y))
$$

with $\lambda > 0$, and the inverse multiquadric Hamming kernel (IMQ-H),

$$
k_{\mathrm{IMQ-H}}(X, Y) = (C + d_{H}(X, Y))^{-\beta}
$$

with $C, \beta > 0$. Amin et al. (2023) showed that both have discrete masses.

Theorem B.26. (Theorem 21 and Example 6 of Amin et al. (2023)) $k_{\mathrm{Exp-H}}$ and $k_{\mathrm{IMQ-H}}$ have discrete masses.

Alignment Kernels We next consider alignment kernels, which compare sequences based on pairwise alignments (Haussler, 1999). Position-wise comparison kernels will judge two sequences to be very different even if they differ by a single insertion; in many biological settings, however, a small insertion is unlikely to change a sequence's biological function dramatically. Alignment kernels, by contrast, consider sequences that differ by a small number of inserted or deleted letters to be similar.
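As a concrete reference point, the Hamming distance with stop-symbol padding and the two position-wise kernels above are straightforward to compute. The sketch below is our own illustrative code (function names and default hyperparameters are ours, not from the paper), treating sequences as Python strings:

```python
import math

def d_hamming(X, Y):
    # Hamming distance; positions past the end of the shorter
    # sequence count as mismatches (implicit stop-symbol padding)
    L = max(len(X), len(Y))
    return sum(1 for i in range(L)
               if i >= len(X) or i >= len(Y) or X[i] != Y[i])

def k_exp_h(X, Y, lam=1.0):
    # exponential Hamming kernel: exp(-lambda * d_H(X, Y))
    return math.exp(-lam * d_hamming(X, Y))

def k_imq_h(X, Y, C=1.0, beta=0.5):
    # inverse multiquadric Hamming kernel: (C + d_H(X, Y))^(-beta)
    return (C + d_hamming(X, Y)) ** (-beta)
```

For example, `d_hamming("ACG", "ACGTT")` is 2, since the two positions past the end of the shorter sequence count as mismatches.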
To define alignment kernels, we first introduce two simpler kernels that will later be combined. The first is for comparing letters; the second is for penalizing insertions. To compare letters, let $k_{s}(X,Y) = \sigma^{-1}|\mathcal{B}|\delta_{X}(Y)\times \mathbb{1}(|X| = 1)\mathbb{1}(|Y| = 1)$; note that $k_{s}$ is only non-zero if $X$ and $Y$ both consist of the same single letter. To penalize insertions, let $k_{I}(X,Y) = \exp (-\mu (|X| + |Y|) - \Delta \mu (\mathbb{1}(|X|\geq 1) + \mathbb{1}(|Y|\geq 1)))$ for $0 < \mu < \infty$ and $0\leq \Delta \mu \leq \infty$. Sequences compared by $k_{I}$ are interpreted as insertions, and penalized with an insertion start penalty $\Delta \mu$ and insertion length penalty $\mu$. We also consider a variant without the start penalty, $\tilde{k}_I(X,Y) = \exp (-\mu (|X| + |Y|))$.

The alignment kernel sums over all possible pairwise alignments of two sequences, and for each pairwise alignment scores matched positions with $k_{s}$ and insertions with $k_{I}$. It can be written as,

$$
\tilde{k}_{\mathrm{ali}}(X, Y) = \sum_{l,\, X^{(1)} + \dots + X^{(2l+1)} = X,\, Y^{(1)} + \dots + Y^{(2l+1)} = Y} k_{I}\left(X^{(1)}, Y^{(1)}\right) \prod_{i=1}^{l} k_{s}\left(X^{(2i)}, Y^{(2i)}\right) k_{I}\left(X^{(2i+1)}, Y^{(2i+1)}\right) \tag{13}
$$

where the sum is over all numbers $l$ and partitions of $X$ and $Y$ into $2l + 1$ substrings. The even substrings $X^{(2)}, X^{(4)}, \ldots, Y^{(2)}, Y^{(4)}, \ldots$ correspond to the matched positions, while the odd substrings $X^{(1)}, X^{(3)}, \ldots, Y^{(1)}, Y^{(3)}, \ldots$ are the intervening insertions. (See e.g. Amin et al. (2023) for a detailed explanation.)
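For intuition, Eqn. 13 can be evaluated directly on short sequences: since $k_s$ only scores pairs of identical single letters, each surviving term corresponds to a monotone matching of identical positions, with $k_I$ scoring the gap segments in between. The following is an exponential-time illustrative sketch of this sum (our own code and names, not the dynamic program one would use in practice):

```python
import math
from itertools import combinations

def k_I(nx, ny, mu, dmu):
    # insertion kernel on a pair of gap segments: a length penalty mu
    # per letter plus a start penalty dmu for each non-empty segment
    return math.exp(-mu * (nx + ny) - dmu * ((nx >= 1) + (ny >= 1)))

def gap_lengths(idx, n):
    # lengths of the unmatched stretches before/between/after matches
    gaps, prev = [], -1
    for a in idx:
        gaps.append(a - prev - 1)
        prev = a
    gaps.append(n - prev - 1)
    return gaps

def k_ali_tilde(X, Y, mu=1.0, dmu=0.5, sigma=1.0, n_letters=4):
    # brute-force sum over alignments (Eqn. 13); k_s scores a matched
    # pair of identical letters as |B| / sigma
    match = n_letters / sigma
    total = 0.0
    for l in range(min(len(X), len(Y)) + 1):
        for ix in combinations(range(len(X)), l):
            for iy in combinations(range(len(Y)), l):
                if any(X[a] != Y[b] for a, b in zip(ix, iy)):
                    continue
                score = match ** l
                for gx, gy in zip(gap_lengths(ix, len(X)),
                                  gap_lengths(iy, len(Y))):
                    score *= k_I(gx, gy, mu, dmu)
                total += score
    return total
```

For example, `k_ali_tilde("A", "A")` is the sum of the no-match term $e^{-2\mu - 2\Delta\mu}$ and the single-match term $|\mathcal{B}|/\sigma$; the function is symmetric in its two arguments by construction, as the kernel must be.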
We also define the local alignment kernel, which does not penalize the creation of an insertion at the beginning or end of the alignment,

$$
\tilde{k}_{\mathrm{la}}(X, Y) = \sum \tilde{k}_{I}(X^{(1)}, Y^{(1)}) \left(\prod_{i=1}^{l-1} k_{s}(X^{(2i)}, Y^{(2i)}) k_{I}(X^{(2i+1)}, Y^{(2i+1)})\right) k_{s}(X^{(2l)}, Y^{(2l)}) \tilde{k}_{I}(X^{(2l+1)}, Y^{(2l+1)}),
$$

where the sum is over the same partitions as in Eqn. 13, now with $l \geq 1$.

Amin et al. (2023) showed that $\tilde{k}_{\mathrm{ali}}$ and $\tilde{k}_{\mathrm{la}}$ have discrete masses, provided the hyperparameters $\Delta \mu$ and $\zeta = 2\mu - \log \sigma + \log |\mathcal{B}|$ are set appropriately.

Theorem B.27. (Theorems 23 and 25 of Amin et al. (2023)) $\tilde{k}_{\mathrm{ali}}$ and $\tilde{k}_{\mathrm{la}}$ have discrete masses if and only if $\Delta \mu = \infty$; or $\Delta \mu > 0$ and $\zeta \geq \log |\mathcal{B}|$; or $\Delta \mu = 0$ and $\zeta > \log |\mathcal{B}|$.

It is common in practice to work with tilted versions of the alignment kernel, for instance to normalize the kernel. To enable our theoretical analysis of the alignment kernel's tails, we will use the tilting $\tilde{A}(X) = \exp(\mu |X|)$, which gives the kernels $k_{\mathrm{ali}}(X,Y) = \tilde{A}(X)\tilde{k}_{\mathrm{ali}}(X,Y)\tilde{A}(Y)$ and $k_{\mathrm{la}}(X,Y) = \tilde{A}(X)\tilde{k}_{\mathrm{la}}(X,Y)\tilde{A}(Y)$. Note that this particular tilting is equivalent to setting $\mu = 0$. Thus $k_{\mathrm{ali}}$ and $k_{\mathrm{la}}$ in effect have only two parameters, $\Delta \mu$ and $\zeta$, since the choice of $\zeta$ determines $\sigma$.

Infinite kmer Spectrum Kernels Next we consider kmer spectrum kernels, which compare sequences based on how many times substrings (kmers) occur in each sequence (Leslie et al., 2004). Like alignment kernels, kmer spectrum kernels judge two sequences to be similar even if they differ by insertions or deletions rather than just substitutions. In fact, Amin et al.
(2023) showed that for a particular tilting and parameter choice, the local alignment kernel is equivalent to a kmer spectrum kernel. The kmer counts of a sequence are its features under the local alignment kernel.

Proposition B.28. (Proposition 27 in Amin et al. (2023)) Say $\Delta \mu = \infty$ and $\zeta = 0$. For $X, Z \in S$ call $\phi_Z(X)$ the number of times $Z$ appears in $X$. Then, $k_{\mathrm{la}}(X,Y) = \sum_{Z \in S} \phi_Z(X) \phi_Z(Y)$. This kernel has discrete masses.

We refer to this special case of the local alignment kernel, with $\Delta \mu = \infty$ and $\zeta = 0$, as an infinite kmer spectrum kernel $k_{\mathrm{ISK}}$, since it sums over an infinite number of kmers $Z \in S$ (typical kmer spectrum kernels just consider all kmers shorter than a given length, see Leslie et al. (2004)).

Embedding Kernels Finally we consider embedding kernels. These kernels are built using a learned embedding of sequences into Euclidean space $F: S \to \mathbb{R}^D$. We compare embedded sequences using a translation invariant kernel $k_E(z, z') = \Psi(z - z')$ with $\Psi$ a positive continuous function on $\mathbb{R}^D$ that has a strictly positive Fourier transform. The embedding kernel is defined as $k_{F,\mathrm{Emb}}(X, Y) = k_E(F(X), F(Y))$. In this paper we always use Unirep64 as $F$, for which $D = 64$ (Alley et al., 2019).

Now we look at when embedding kernels have discrete masses. Amin et al. (2023) proved that $k_{F,\mathrm{Emb}}$ has discrete masses if the image of $F$ does not have accumulation points.

Proposition B.29. (Proposition 31 in Amin et al. (2023)) $k_{F,\mathrm{Emb}}$ has discrete masses if and only if $F(S)$ has no accumulation points, that is, there is no $X \in S$ such that $F(X)$ is in the closure of $F(S \setminus \{X\})$.

Amin et al. (2023) suggest that $F$ from regularized representation learning methods may struggle to avoid accumulation points in its image, as $S$ is infinite and $F$ outputs representations with small norm.
However, they suggest this can be solved by rescaling embeddings so that longer sequences have embeddings with larger norm. They demonstrate this theoretically for a random embedding:

Proposition B.30. Consider an embedding $\tilde{F}$ where each $\tilde{F}(X)$ for $X \in S$ is drawn from the uniform distribution on the unit ball, $\{x \in \mathbb{R}^D \mid \|x\| \leq 1\}$. Then, a kernel using the rescaled embedding $F(X) = |\mathcal{B}|^{(1+\epsilon)|X|/D} \tilde{F}(X)$, for $\epsilon > 0$, has discrete masses almost surely.

To make it likely that our kernels have discrete masses, we use a rescaled embedding below, $F_{\mathrm{rescaled}}(X) = |\mathcal{B}|^{1.1 \times |X| / 64} F(X)$.

By picking different $k_{E}$ we can build different embedding kernels. For example, we define 1) the IMQ embedding kernel $k_{F,\mathrm{IMQ},\sigma}(X,Y) = (1 + \| F_{\mathrm{rescaled}}(X) - F_{\mathrm{rescaled}}(Y)\|^{2} / \sigma^{2})^{-0.5}$ and 2) the EXP embedding kernel $k_{F,\mathrm{EXP}}(X,Y) = \exp (-\| F_{\mathrm{rescaled}}(X) - F_{\mathrm{rescaled}}(Y)\|^{2} / (2\sigma^{2}))$, where the scaling $\sigma$ is a bandwidth parameter.

Kernel Transformations Tilting a kernel with discrete masses preserves discrete masses. So do several other common kernel transformations.

Proposition B.31. (Section 6.2 in Amin et al. (2023)) If $k$ is a kernel with discrete masses and $A: S \to (0, \infty)$, then the tilted kernel $k^A(X, Y) = A(X)k(X, Y)A(Y)$ has discrete masses. If $k, k'$ are kernels with discrete masses, the tensorized kernel $k \otimes k'((X, Y), (X', Y')) = k(X, X')k'(Y, Y')$ has discrete masses. If $k$ is a kernel with discrete masses and $k'$ is another kernel then $k + k'$ has discrete masses. If $k$ is a kernel with discrete masses and $S' \subseteq S$ then $k$ restricted to $S'$ has discrete masses.

We can use these transformations to craft new scalar field kernels with discrete masses out of existing ones.

# B.4.2.
VECTOR FIELD KERNELS WITH DISCRETE MASSES

In this section we construct vector field kernels that have discrete masses and that capture biological notions of sequence similarity. The basic idea is to develop transformations from scalar field to vector field kernels that preserve the discrete mass property. Recall that we cannot simply take the gradient of the scalar field kernel (Proposition B.9). Instead, our approach is to tensorize scalar field kernels so that they can be applied to pairs of sequences, and then to enforce anticommutativity.

We first explain how anticommutativity is enforced. The idea is to define a canonical ordering of sequences; once we have chosen a value of the kernel for the canonical ordering, its value for all other orderings follows by anticommutativity. The canonical ordering itself is defined in terms of a sign.

Definition B.32. A sign on $M$ is a map $\sigma : M \to \{-1, 1\}$ such that $\sigma(X, Y) = -\sigma(Y, X)$ for all $(X, Y) \in M$. Define $M^{\sigma} = \{(X, Y) \in M \mid \sigma(X, Y) = 1\}$. For $(X, Y) \in M$, define $(X, Y)^{\sigma} = (X, Y)$ if $\sigma(X, Y) = 1$ and $(Y, X)$ otherwise. We say $\sigma$ is "proper" if $\sigma(X, Y) = 1$ whenever $|Y| = |X| - 1$ for $(X, Y) \in M$.

Once we have chosen a value of the kernel for the canonical ordering (i.e. $M^{\sigma}$) we can extend it to all orderings (i.e. $M$) by symmetry. If the kernel has discrete masses on $M^{\sigma}$ then its extension to $M$ will be a vector field kernel with discrete masses.

Proposition B.33. Let $\sigma$ be a sign on $M$.
There is a bijective correspondence between kernels on $M^{\sigma}$ and vector field kernels, such that a kernel $k$ on $M^{\sigma}$ corresponds to the vector field kernel

$$
\left(\left(X, Y\right), \left(X', Y'\right)\right) \mapsto \sigma(X, Y) \sigma\left(X', Y'\right) k\left(\left(X, Y\right)^{\sigma}, \left(X', Y'\right)^{\sigma}\right) \tag{14}
$$

and a vector field kernel corresponds to its restriction to $M^{\sigma}$. Kernels $k$ on $M^{\sigma}$ with discrete masses, i.e. kernels such that $\delta_{(X,Y)}\in \mathcal{H}_k$ for all $(X,Y)\in M^{\sigma}$, correspond to vector field kernels with discrete masses.

Proof. We first show that any given kernel on $M^{\sigma}$ corresponds to a well-defined, non-negative definite vector field kernel. Provided this holds, it is clear that the correspondence indeed describes a bijection between kernels on $M^{\sigma}$ and vector field kernels, since every vector field kernel must fit the form of Eqn. 14 due to the anticommutativity property (Eqn. 3). Let $k$ be a kernel on $M^{\sigma}$, $(Z_{n})_{n=1}^{N}\subset M$ be distinct, and $(\alpha_{n})_{n=1}^{N}\subset \mathbb{R}$. For $Z\in M$, call $\alpha_{Z} = \alpha_{n}$ if $Z = Z_{n}$ and 0 if $Z\neq Z_{n}$ for any $n$. We have non-negative definiteness of the proposed kernel on $M$ from:

$$
\begin{array}{l} \sum_{n} \sum_{m} \sigma(Z_{n}) \sigma(Z_{m}) \alpha_{n} \alpha_{m} k(Z_{n}^{\sigma}, Z_{m}^{\sigma}) = \sum_{Z \in M} \sum_{Z' \in M} \sigma(Z) \sigma(Z') \alpha_{Z} \alpha_{Z'} k(Z^{\sigma}, Z'^{\sigma}) \\ = \sum_{Z \in M^{\sigma}} \sum_{Z' \in M^{\sigma}} \left(\alpha_{Z} - \alpha_{Z^{-\sigma}}\right) \left(\alpha_{Z'} - \alpha_{Z'^{-\sigma}}\right) k\left(Z, Z'\right) \geq 0.
\\ \end{array}
$$

Next we need to check that the kernel on $M$ is indeed a vector field kernel, which satisfies anticommutativity. Call $\tilde{k}$ the extension of the kernel $k$ to $M$. Then if $\tilde{f} \in \mathcal{H}_{\tilde{k}}$,

$$
\tilde{f}(X, Y) = \left(\tilde{f} \Big| \tilde{k}((X, Y), \cdot)\right)_{\tilde{k}} = - \left(\tilde{f} \Big| \tilde{k}((Y, X), \cdot)\right)_{\tilde{k}} = - \tilde{f}(Y, X).
$$

The first equality follows from the fact that for any $f \in \mathcal{H}_k$ there is a $\tilde{f} \in \mathcal{H}_{\tilde{k}}$ such that $\tilde{f}(X,Y) = \sigma(X,Y)f((X,Y)^\sigma)$. To see this, note that $k_{(X,Y)} \mapsto \tilde{k}_{(X,Y)}$ defines a unitary linear transformation on finite linear combinations of $\{k_{(X,Y)}\}_{(X,Y) \in M^\sigma}$. This transformation takes $f$ that are finite linear combinations of $\{k_{(X,Y)}\}_{(X,Y) \in M^\sigma}$ to $\tilde{f}$ as defined above, and can be extended to all of $\mathcal{H}_k$ to obey the same property.

The discrete mass property of the vector field kernel on $M$ follows by setting $f = \delta_{(X,Y)}$, in which case the corresponding $\tilde{f}$ is a delta function on $M$ (Def. B.11).

Now, to construct a kernel on $M^{\sigma}$ with discrete masses, we can tensorize two scalar field kernels with discrete masses. In the following proposition, we also give some examples of kernels on $M^{\sigma}$ that are valid kernels (they are non-negative definite) but are not guaranteed to have discrete masses.

Proposition B.34. Let $k, k'$ be scalar field kernels on $S$. The following are all valid kernels on $M^{\sigma}$.
$$
\begin{array}{l} \left(\left(X, Y\right), \left(X', Y'\right)\right) \mapsto k(X, X') k'\left(Y, Y'\right) \\ \left(\left(X, Y\right), \left(X', Y'\right)\right) \mapsto \left(k\left(X, X'\right) + k'\left(Y, Y'\right)\right)^{2} \\ \left(\left(X, Y\right), \left(X', Y'\right)\right) \mapsto k\left(X + Y, X' + Y'\right) \\ \left(\left(X, Y\right), \left(X', Y'\right)\right) \mapsto k\left(X, X'\right) \\ \left(\left(X, Y\right), \left(X', Y'\right)\right) \mapsto k(Y, Y') \mathbb{1}\left(\left| X \right| \neq \left| Y \right|, \left| X' \right| \neq \left| Y' \right|\right). \\ \end{array}
$$

If $k, k'$ have discrete masses, then the first two kernels have discrete masses on $M^{\sigma}$. By Proposition B.33, their extension to $M$ is a vector field kernel with discrete masses.

Proof. The first four of these kernels are non-negative definite because they are restrictions of non-negative definite kernels on $S \times S$. The last kernel can be constructed by first defining the kernel $((X, Y), (X', Y')) \mapsto k(Y, Y')$ on $S \times S$, restricting to $\{(X, Y) \in M^{\sigma} \mid |X| \neq |Y|\}$ and then extending to the rest of $M^{\sigma}$ by setting $k_{(X,Y)} = 0$ if $|X| = |Y|$. Each of these operations preserves non-negative definiteness.

If $k, k'$ have discrete masses, the first kernel described above has discrete masses by Proposition B.31 as it is the restriction of $k \otimes k'$ on $S \times S$. The second kernel also has discrete masses by Proposition B.31: $(k(X, X') + k'(Y, Y'))^2 = k(X, X')^2 + k'(Y, Y')^2 + 2\, k \otimes k'((X, Y), (X', Y'))$, so it is the sum of two kernels, one of which has discrete masses.
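The sign construction of Proposition B.33 applied to the first (tensorized) kernel of Proposition B.34 can be made concrete in a few lines. The sketch below is our own illustrative code (names and the choice of a length-then-lexicographic proper sign are ours): it extends a tensorized exponential Hamming kernel from $M^{\sigma}$ to $M$ via Eqn. 14 and exhibits the resulting anticommutativity, representing an element of $M$ as an ordered pair of strings:

```python
import math

def d_hamming(X, Y):
    # Hamming distance with implicit stop-symbol padding
    L = max(len(X), len(Y))
    return sum(1 for i in range(L)
               if i >= len(X) or i >= len(Y) or X[i] != Y[i])

def k_scalar(X, Y, lam=1.0):
    # exponential Hamming kernel as the base scalar field kernel
    return math.exp(-lam * d_hamming(X, Y))

def sign(X, Y):
    # a proper sign: +1 when X is longer than Y (so sign(X, Y) = 1
    # whenever |Y| = |X| - 1), with a lexicographic tie-break;
    # assumes X != Y, as is guaranteed for pairs in M
    if len(X) != len(Y):
        return 1 if len(X) > len(Y) else -1
    return 1 if X > Y else -1

def k_vec(XY, XpYp):
    # Eqn. 14: sigma(X,Y) sigma(X',Y') k((X,Y)^sigma, (X',Y')^sigma),
    # with k the tensorized kernel k_scalar(X, X') k_scalar(Y, Y')
    (X, Y), (Xp, Yp) = XY, XpYp
    s = sign(X, Y) * sign(Xp, Yp)
    A, B = (X, Y) if sign(X, Y) == 1 else (Y, X)
    Ap, Bp = (Xp, Yp) if sign(Xp, Yp) == 1 else (Yp, Xp)
    return s * k_scalar(A, Ap) * k_scalar(B, Bp)
```

Swapping either argument pair flips the overall sign, while the kernel remains symmetric in its two arguments, exactly the anticommutativity Eqn. 3 demands of a vector field kernel.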
We can now employ scalar field kernels with discrete masses to construct vector field kernels with discrete masses. If the scalar field kernel we use captures sensible biological notions of sequence similarity – such as Hamming distance or alignment distance – then the resulting vector field kernel will also.

# B.4.3. THICK TAILED VECTOR FIELD KERNELS WITH DISCRETE MASSES

So far, we have shown how to construct vector field kernels with discrete masses that satisfy biological notions of sequence similarity. The next step is to add thick tails. Recall that the thickness of a kernel's tails is measured with respect to the base distribution $p$ (Assumption B.18). Our analysis in this section will focus on the setting where $p$ is a pHMM, and take $\chi(t) = t \wedge 1$, so that Proposition B.6 and Corollary B.7 hold.

Establishing Thick Tails in Practice For the KSD-B to be guaranteed to detect non-convergence, the kernel must satisfy Assumption B.18. To prove this holds for our actual proposed kernels, we will use the following technical lemma. The lemma asks for an $h \in \mathcal{H}_k$ that can be written as a small perturbation $g(X,Y)$ of a vector field $f(X,Y)$ that is zero if $X$ and $Y$ are the same length, and which depends only on $|X|$ when $X$ and $Y$ have different lengths. So long as $f$ does not increase or decrease too quickly with $|X|$, we will be guaranteed that $\mathcal{T}_p h$ is large, and Assumption B.18 satisfied.

Lemma B.35. Say $p$ is a pHMM and $k$ is either (a) a vector field kernel with an $h \in \mathcal{H}_k$ such that $h = f + g$ for vector fields $f, g$ which satisfy the following conditions, or (b) a scalar field kernel with an $h \in \mathcal{H}_k$ such that $\nabla h = f + g$ for $f, g$ that satisfy the following conditions.

1. ($f$ only detects differences of sequence length) $f(X,Y) = 0$ if $|X| = |Y|$ and $f(X,Y)$ depends only on $|X|$ if $|X| > |Y|$. Call $f(L) = f(X,Y)$ for $|X| = L$ and $|Y| = L - 1$.
2.
($g$ is small) $g(X,Y) = o\big(f(|X|)\big)$ as $|X|\to \infty$. Note $g = 0$ satisfies this condition.
3. ($f$ does not increase too fast) As $L \to \infty$, $f(L)$ is eventually positive. Moreover, for small enough $c > 0$, $(f(L + 1) - f(L)) \leq cf(L)$ eventually. Note this latter condition is satisfied if $f$ is non-increasing in $L$.

4. ($f$ does not decrease too fast) $f(L)\gtrsim L^{-(1 - \delta)}$ for some $\delta >0$. Note this condition is satisfied if $f$ is non-decreasing in $L$.

Then, $k$ satisfies Assumption B.18 for $p$.

Proof. Note first that if $c$ is small enough, since $\mathrm{gap}_p(L) \gtrsim \sup_{|X| = L} \mathrm{ins}_p(X)$ by Proposition B.6, we eventually have $\mathrm{gap}_p(L) \geq c \sup_{|X| = L} \mathrm{ins}_p(X)$. Thus, eventually,

$$
\left(\sup_{|X| = L} \operatorname{ins}_{p}(X)\right) \left(f(L + 1) - f(L)\right) \leq \operatorname{gap}_{p}(L) f(L).
$$

Say $X \in S$ and $|X| = L$. Since $\mathrm{flux}_p(X) \sim \mathrm{gap}_p(|X|)$ by Proposition B.6, for large enough $L$,

$$
\begin{array}{l} \mathcal{T}_{p}(f + g)(X) = - \operatorname{ins}_{p}(X) f(L + 1) + \operatorname{del}_{p}(X) f(L) + \operatorname{flux}_{p}(X) o(f(L)) \\ \geq \operatorname{gap}_{p}(L) f(L) + \operatorname{ins}_{p}(X) \left(f(L) - f(L + 1)\right) + \operatorname{flux}_{p}(X) o(f(L)) \\ \geq \operatorname{gap}_{p}(L) f(L) (1 + o(1)).
\\ \end{array}
$$

Since $\operatorname{ins}_p(X) \lesssim \operatorname{gap}_p(|X|) \sim \operatorname{flux}_p(X)$ and we can set $V_p(X) = (\log |X|)^{2 + \epsilon}$ for some $\epsilon > 0$ by Proposition B.6 and Corollary B.7,

$$
\sum_{L} \frac{\inf_{|X| = L} \mathcal{T}_{p}(f + g)(X)}{\left(\sup_{|X| = L} \operatorname{flux}_{p}(X)\right) V_{p}(L)} \gtrsim \sum_{L} \frac{f(L)}{(\log L)^{2 + \epsilon}} \gtrsim \sum_{L} (\log L)^{-(2 + \epsilon)} L^{-(1 - \delta)} = \infty
$$

and the same is true replacing $\operatorname{flux}_p$ with $\operatorname{ins}_p$.

Alignment Kernel Tails In the case of the alignment kernel, finding an $h$ that satisfies Lemma B.35 is non-trivial. In Section B.4.5 we deploy tools from combinatorics to unpack the asymptotic behavior of the alignment kernel; this allows us to construct such an $h$. Here, we summarize the key conclusions of Section B.4.5. Define $\xi = 1 - e^{-\Delta \mu} < 1$ and the function,

$$
r_{1}(x, \xi) = \frac{1}{2} \left(1 + x + \sqrt{(1 + x)^{2} - 4 \xi x}\right).
$$

Note that to ensure discrete masses, we need $\zeta \geq \log |\mathcal{B}|$ (Theorem B.27); in this case, $r_1(e^{\zeta /2},\xi)\geq r_1(e^{\zeta /2}|\mathcal{B}|^{-1 / 2},\xi) > 1$. We first study the alignment and local alignment kernels.

Proposition B.36. Say $\Delta \mu < \infty$. Then,

$$
|X|^{-1/2}\, r_{1}(e^{\zeta / 2}, \xi)^{|X|} \leq \sqrt{k_{\mathrm{ali}}(X, X)} \leq r_{1}(e^{\zeta / 2}, \xi)^{|X|}
$$

and for any $\pi < 1$, there is an $h \in \mathcal{H}_k$ such that $h(X)$ depends only on $|X|$ and

$$
h(X) = r_{1}\left(\pi e^{\zeta / 2} |\mathcal{B}|^{-1/2}, \xi\right)^{|X|} + O\left(|X|\right).
$$

As well, for any $X \in S$, $k_{\mathrm{ali},X} \lesssim h$. The same proposition is true replacing $k_{\mathrm{ali}}$ with $k_{\mathrm{la}}$.

We next study the infinite kmer spectrum kernel.
(Recall that this is equivalent to a local alignment kernel with $\Delta \mu = \infty$ and $\zeta = 0$.)

Proposition B.37. Let $A(X) = |X|^{-3/2}$. $k_{\mathrm{ISK}}^A$ is a bounded $C_0$ kernel, i.e. for all $f \in \mathcal{H}_k$, $f \in C_0(S)$. $k_{\mathrm{ISK}}^A$ is also non-vanishing, i.e. $\sqrt{k_{\mathrm{ISK}}^A(X,X)} \nrightarrow 0$ as $|X| \to \infty$. Moreover, if we set $h = \sum_{Y \in \mathcal{B}} k_{\mathrm{ISK},Y}^A$, then $h(X)$ depends only on $|X|$ and $h(X) = |X|^{-1/2} + |\mathcal{B}| |X|^{-3/2}$.

Scalar Field Kernels with Thick Tails We now describe our proposed scalar field kernels. These kernels have discrete masses, capture biological notions of sequence similarity, and possess thick tails.

Proposition B.38. The following kernels have discrete masses and satisfy Assumption B.18 for any pHMM $p$ with $\chi(t) = t\wedge 1$.

1. Unbounded IMQ-H (IMQ-H (U)): $k(X,Y) = A(X)k_{\mathrm{IMQ-H}}(X,Y)A(Y)$ for $A(X) = (|X| + C)^{\beta +1}$.
2. Unbounded alignment kernel (Ali (U)): $k(X,Y) = A(X)k_{\mathrm{ali}}(X,Y)A(Y)$ for $0 < \Delta \mu < \infty$, $\zeta \geq \log |\mathcal{B}|$ and $A(X) = r_1(e^{\zeta / 2}|\mathcal{B}|^{-1/2}, \xi)^{-(1 - \epsilon)|X|}$ for some $\epsilon > 0$. One can replace $k_{\mathrm{ali}}$ with $k_{\mathrm{la}}$.
3. Unbounded infinite kmer spectrum kernel (ISK (U)): $k(X,Y) = k_{\mathrm{ISK}}(X,Y)$.

Proof. First note all three of these kernels have discrete masses by Theorems B.26 and B.27 and Proposition B.28, since tilting preserves discrete masses (Proposition B.31). Now we show they satisfy the conditions of Lemma B.35.

1. IMQ-H (U) Let $h = -k_{\emptyset}$, so $h(X) = -(|X| + C)$. $\nabla h(X,Y) = |X| - |Y|$ depends only on $|X|, |Y|$ and is 0 if $|X| = |Y|$. Setting $f = \nabla h$ and $g = 0$ we have $f(L) = 1$ for all $L$ and we satisfy the conditions of Lemma B.35.
2.
Ali (U) Let $h$ be as defined in Proposition B.36, picking $\pi < 1$ such that $\frac{r_1(\pi e^{\zeta/2} |\mathcal{B}|^{-1/2}, \xi)}{r_1(e^{\zeta/2} |\mathcal{B}|^{-1/2}, \xi)^{1-\epsilon}} = 1 + \delta$ for a small $\delta$. $h$ is a function only of the length of the sequence and $h(X) = (1 + \delta)^{|X|} + o(1)$. Call $f = -\nabla \left( X \mapsto (1 + \delta)^{|X|} \right)$ and $g = f - \nabla (-h) = o(1)$. Now,

$$
\begin{array}{l} f(X, Y) = 0 \text{ if } |X| = |Y| \\ f(X, Y) = \delta (1 + \delta)^{|X| - 1} \text{ if } |Y| = |X| - 1 \\ f(Z, X) - f(X, Y) = \delta^{2} (1 + \delta)^{|X| - 1} = \delta\, f(X, Y) \text{ if } |Y| < |X| < |Z|. \\ \end{array}
$$

Clearly $f$ and $g$ satisfy conditions 1, 2, and 4 of Lemma B.35 and, picking small enough $\delta$, condition 3 is also satisfied.

3. ISK (U) Let $h$ be as defined in Proposition B.37 so that $h$ is a function only of sequence length and $h(X)/A(X) = |X| + |\mathcal{B}|$. By similar reasoning to the IMQ-H (U), setting $f = \nabla (h/A)$ and $g = 0$ we satisfy the conditions of Lemma B.35.

Vector Field Kernels with Thick Tails We now describe our proposed vector field kernels. These kernels have discrete masses, capture biological notions of sequence similarity, and possess thick tails. To construct the kernels, we add together a thick tailed kernel that does not have discrete masses and a thin tailed kernel that does. In this section, we assume $\sigma$ is a proper sign (Definition B.32). We may for example order sequences by length and then lexicographically, for some ordering of the letters in $\mathcal{B}$.

Proposition B.39. Let $k = k_{\mathrm{HT}} + k_{\delta}$ for vector field kernels $k_{\mathrm{HT}}, k_{\delta}$. Any of the following choices of $k_{\mathrm{HT}}$ and $k_{\delta}$ result in a vector field kernel $k$ that is bounded and satisfies Assumption B.18 for a pHMM $p$ and $\chi(t) = t \wedge 1$. $k_{\mathrm{HT}}$ can be,

1.
IMQ-H (IMQ-H): $k_{\mathrm{HT}}((X,Y),(X',Y')) = k_{\mathrm{IMQ-H}}(Y,Y') \mathbb{1}(|X| \neq |Y|, |X'| \neq |Y'|)$ for $(X,Y) \in M^{\sigma}$, with $\beta < 1$ in $k_{\mathrm{IMQ-H}}$.
2. Infinite kmer spectrum kernel (ISK): $k_{\mathrm{HT}}((X,Y),(X',Y')) = A(Y)k_{\mathrm{ISK}}(Y,Y')A(Y')\mathbb{1}(|X| \neq |Y|, |X'| \neq |Y'|)$ for $(X,Y) \in M^{\sigma}$ with $A(X) = |X|^{-3/2}$.

$k_{\delta}$ can be,

1. Alignment kernel (Ali): $k_{\delta}((X,Y),(X',Y')) = (k_{\mathrm{ali}}^{A}(X,X') + k_{\mathrm{ali}}^{A}(Y,Y'))^{2}$ for $(X,Y)\in M^{\sigma}$, with $0 < \Delta \mu < \infty$ and $\zeta \geq \log |\mathcal{B}|$, and $A(X) = (r_1(e^{\zeta /2},\xi))^{-|X|}$. One can also use $k_{\mathrm{la}}$ instead of $k_{\mathrm{ali}}$.
2. Exponential Hamming kernel (Exp-H): $k_{\delta}((X,Y),(X',Y')) = (k_{\mathrm{Exp-H}}(X,X') + k_{\mathrm{Exp-H}}(Y,Y'))^2$ for $(X,Y)\in M^{\sigma}$.

Proof. Note the vector field alignment kernel Ali has discrete masses by Propositions B.33 and B.34, since the tilted scalar field alignment kernel has discrete masses by Theorem B.27 and tilting preserves discrete masses by Proposition B.31. Similarly, by Theorem B.26 the vector field exponential Hamming kernel Exp-H has discrete masses. Since $k_{\delta}$ has discrete masses, by Proposition B.31, $k = k_{\mathrm{HT}} + k_{\delta}$ has discrete masses when restricted to $M^{\sigma}$. Finally, by Proposition B.33, $k$ has discrete masses as a vector field kernel on $M$.

Also note that all of these kernels are bounded: the alignment vector field kernel is bounded by Proposition B.36 and the ISK kernel is bounded by Proposition B.37.

We prove that $k = k_{\mathrm{HT}} + k_{\delta}$ satisfies the conditions of Lemma B.35 when $k_{\delta}$ is chosen to be the alignment kernel Ali, using Proposition B.36.
The logic is similar when we choose $k_{\delta}$ to be the exponential Hamming kernel Exp-H.

1. Let $k_{\mathrm{HT}}$ be the IMQ-H kernel. Let $h = k_{(A,\emptyset)}$ for some $A \in \mathcal{B}$. Now let $f = k_{\mathrm{HT},(A,\emptyset)}$ so that $f(X,Y) = (|Y| + C')^{-\beta} = (|X| - 1 + C')^{-\beta}$ if $|Y| < |X|$ and $f(X,Y) = 0$ if $|X| = |Y|$, which satisfies conditions 1, 3, and 4 of Lemma B.35. Define $g = h - f = k_{\delta,(A,\emptyset)}$. Let $\tilde{h}$ be the $h$ defined in Proposition B.36 for some $0 < \pi < 1$. Applying the bound on $k_{\mathrm{ali}}$ from Proposition B.36, we have

$$
k_{\mathrm{ali}}^{A}(X, X') \lesssim \tilde{h}(X) A(X) \sim \left(\frac{r_{1}(\pi e^{\zeta/2} |\mathcal{B}|^{-1/2}, \xi)}{r_{1}(e^{\zeta/2}, \xi)}\right)^{|X|} + O\left(|X|\, r_{1}(e^{\zeta/2}, \xi)^{-|X|}\right) \sim \exp(-c|X|)
$$

for some $c > 0$ when $|X'| = 0$ or $|X'| = 1$. Thus, $g(X,Y) = k_{\delta,(A,\emptyset)}(X,Y) = O(e^{-2c|X|})$, and condition 2 of Lemma B.35 is also satisfied.

2. Let $k_{\mathrm{HT}}$ be the ISK kernel. Let $h = \sum_{b \in \mathcal{B}} k_{(b + b, b)}$. Now let $h_{\mathrm{HT}} = \sum_{b \in \mathcal{B}} k_{\mathrm{HT}, (b + b, b)}$ so that, by Proposition B.37, $h_{\mathrm{HT}}(X, Y) = (|X| - 1)^{-1/2} + o(|X|^{-1/2})$ if $|Y| < |X|$ and $h_{\mathrm{HT}}(X, Y) = 0$ if $|X| = |Y|$. Let $f(X, Y) = (|X| - 1)^{-1/2}$ if $|Y| < |X|$ and $f(X, Y) = 0$ if $|X| = |Y|$. Finally define $g(X, Y) = (h - f)(X, Y) = \sum_{b \in \mathcal{B}} k_{\delta, (b + b, b)}(X, Y) + o(|X|^{-1/2})$. As in the previous example, $g(X, Y) = O(e^{-c|X|}) + o(|X|^{-1/2})$. Thus, $k$ satisfies the conditions of Lemma B.35.

# B.4.4. KERNEL INTEGRABILITY

For the KSD-B to reliably detect convergence to $p$, Proposition 5.3 says we need $k$ to not be too large. At a minimum, the KSD-B should be zero when evaluated exactly at $p$ itself.
Proposition 5.1 says that for $\mathrm{KSD\text{-}B}_{p,k}(p) = 0$ we need $p$ to be $p, k$-integrable. In this section we show that for all of our proposed vector field kernels, if $\chi(t) = 1 \wedge t$ then any subexponential $p$ is $p, k$-integrable (recall $p$ is subexponential if $E_{X \sim p} e^{t|X|} < \infty$ for small enough $t$ (Amin et al., 2021)). All pHMMs are subexponential (Proposition B.6). Another important class of subexponential distributions are autoregressive models (Proposition 5 of Amin et al. (2021)).

Only some of our proposed scalar field kernels, however, have the same guarantee. This reflects a fundamental disadvantage of scalar field kernels: they must be unbounded, and hence very large, to detect non-convergence.

Our results rely on the following lemma, which gives conditions on the kernel that ensure $p, k$-integrability when $p$ is subexponential.

Lemma B.40. Say $\chi(t) = t \wedge 1$ and $p$ is subexponential. If $\sqrt{k((X,Y), (X,Y))} \leq e^{t' |X|}$ for small enough $t'$ then $p$ is $p, k$-integrable.

Proof. Note that since $\chi(t) = t \wedge 1$, $T_{p,X \to Y} \leq |X|$ for all $Y M X$. Now we have

$$
E_{X \sim p} \sum_{Y M X} T_{p, X \rightarrow Y} \sqrt{k((X, Y), (X, Y))} \lesssim E_{X \sim p} |X|^{2} e^{t'|X|} < \infty
$$

if $t'$ is small enough.

We can now guarantee integrability for all of our proposed kernels, besides Ali (U).

Corollary B.41. Say $\chi(t) = t \wedge 1$ and $p$ is subexponential. If $k$ is any of the vector field kernels considered in Proposition B.39 then $p$ is $p, k$-integrable. If $k$ is the IMQ-H (U) or ISK (U) kernel considered in Proposition B.38 then $p$ is $p, k^{\nabla}$-integrable.

Proof. The first statement follows from Lemma B.40 and the fact that the kernels in Proposition B.39 are bounded.
The second statement follows from the fact that for the IMQ-H (U) kernel, $\sqrt{k(X,X)} \lesssim (1 + |X|)^{1 + \beta}$ and for the ISK (U) kernel $\sqrt{k(X,X)} \lesssim (1 + |X|)^{3/2}$ by Proposition B.37.

Next we consider the unbounded scalar field alignment kernel, Ali (U). If $p$ is a pHMM and $k$ is Ali (U), it is possible that $p$ is not $p, k$-integrable. By Proposition B.36,

$$
L^{-1/2} \left(\frac{r_{1}(e^{\zeta/2}, \xi)}{r_{1}(e^{\zeta/2} |\mathcal{B}|^{-1/2}, \xi)^{1 - \epsilon}}\right)^{L} \leq \sup_{|X| = L} \sqrt{k(X, X)}.
$$

One can check that the ratio on the left hand side is minimized in the limit $\epsilon = 0$, $\xi = 0$, $\zeta = \log |\mathcal{B}|$, in which case

$$
\frac{r_{1}(e^{\zeta/2}, \xi)}{r_{1}(e^{\zeta/2} |\mathcal{B}|^{-1/2}, \xi)^{1 - \epsilon}} \geq \frac{r_{1}(|\mathcal{B}|^{1/2}, 0)}{r_{1}(1, 0)} = \frac{|\mathcal{B}|^{1/2} + 1}{2}.
$$

Now, $\frac{1}{2}\left(|\mathcal{B}|^{1/2} + 1\right)$ is $3/2$ in the case when $\mathcal{B}$ is the set of nucleotides (where $|\mathcal{B}| = 4$) and approximately 2.74 in the case when $\mathcal{B}$ is the set of amino acids (where $|\mathcal{B}| = 20$). So on real biological sequence data, $\sup_{|X| = L} \sqrt{k(X,X)}$ grows exponentially, and thus the unbounded scalar field alignment kernel may be too large to ensure $p, k$-integrability.

# B.4.5. PROOFS FOR THE ALIGNMENT KERNEL

In this section we bound $\sqrt{k(X, X)}$ and find a thick tailed $h \in \mathcal{H}_k$ for the alignment kernel $k$.

Let us review some results for the case when $|\mathcal{B}| = 1$ that will be useful. If $\mathcal{B} = \{A\}$, call $k(L,L^{\prime}) = k(L\times A,L^{\prime}\times A)$. Section 9 of Amin et al. (2023) showed that there is an orthogonal basis $(u_{L})_{L}$ such that $\| u_{L}\|_{k} = e^{-\zeta L / 2}$ where $\zeta = 2\mu + \log k_s(A,A)$.
In this case, $(u_{L'} \mid k_L)_k \geq 0$ for all $L, L^{\prime}$ and $(u_{L'} \mid k_L)_k = 0$ if $L^{\prime} > L$. Then, defining the infinite upper triangular matrix $Q$ such that $Q_{L',L} = (u_{L'} \mid k_L)_k$, we get + +$$ +k \left(L, L ^ {\prime}\right) = \left(k _ {L} \mid k _ {L ^ {\prime}}\right) _ {k} = \left(\sum_ {L ^ {\prime \prime}} Q _ {L ^ {\prime \prime}, L} e ^ {\zeta L ^ {\prime \prime}} u _ {L ^ {\prime \prime}} \Bigg | \sum_ {L ^ {\prime \prime}} Q _ {L ^ {\prime \prime}, L ^ {\prime}} e ^ {\zeta L ^ {\prime \prime}} u _ {L ^ {\prime \prime}}\right) _ {k} = \sum_ {L ^ {\prime \prime} = 0} ^ {\infty} Q _ {L ^ {\prime \prime}, L} Q _ {L ^ {\prime \prime}, L ^ {\prime}} e ^ {L ^ {\prime \prime} \zeta}. \tag {15} +$$ + +The same equation holds for $k_{\mathrm{la}}$ with another matrix $Q_{\mathrm{la}}$. The exact values of the entries of the matrices $Q$ and $Q_{\mathrm{la}}$ will be important for bounding the tails of the alignment kernel. Amin et al. (2023) showed in Appendices I and J that if we define $\xi = 1 - e^{-\Delta \mu}$, $f_{\xi}(y) = \frac{1 - \xi y}{1 - y}$, and the formal power series + +$$ +F _ {\xi} (x, y) = \frac {f _ {\xi} (y)}{1 - x y f _ {\xi} (y)} = \frac {1 - \xi y}{1 - (1 + x) y + \xi x y ^ {2}} +$$ + +$$ +F _ {\xi , \mathrm {l a}} (x, y) = x y \frac {\left(\frac {1}{1 - y}\right) ^ {2}}{1 - x y f _ {\xi} (y)} + \frac {1}{1 - y} = \frac {x y}{(1 - y) (1 - (1 + x) y + \xi x y ^ {2})} + \frac {1}{1 - y} +$$ + +then $Q_{L',L} = [x^{L'}y^{L}]F_{\xi}(x,y)$ and $Q_{\mathrm{la},L',L} = [x^{L'}y^{L}]F_{\xi,\mathrm{la}}(x,y)$, where $[x^{L'}y^{L}]$ denotes the coefficient in front of the term $x^{L'}y^{L}$ of the formal power series. + +We now show that we can use these formal power series to describe the size of $k(X, X)$. + +Proposition B.42. Calling $C_L = [y^L]F_\xi(e^{\zeta/2}, y)$, we have $L^{-1/2}C_L \leq \sup_{|X|=L} \sqrt{k(X, X)} \leq C_L$.
The same inequality is true for $k_{\mathrm{la}}$ and $F_{\xi,\mathrm{la}}$. + +Proof. First, by equation 13, if $A \in \mathcal{B}$, we clearly have $k(X, X) \leq k(|X| \times A, |X| \times A)$, since the alignment kernel takes its largest value if every letter in the sequence matches every other. $k_{s}(A, A) = \sigma^{-1}|\mathcal{B}|$, so $k$ restricted to $\{\emptyset, A, AA, AAA, \ldots\}$ is identical to the string kernel in the case $|\mathcal{B}| = 1$ and with $\zeta = 2\mu - \log \sigma + \log |\mathcal{B}|$. Thus, by equation 15, + +$$ +\begin{array}{l} k (L, L) = \sum_ {L ^ {\prime} = 0} ^ {L} e ^ {L ^ {\prime} \zeta} Q _ {L ^ {\prime}, L} ^ {2} \\ \leq \left(\sum_ {L ^ {\prime} = 0} ^ {\infty} e ^ {L ^ {\prime} \zeta / 2} Q _ {L ^ {\prime}, L}\right) ^ {2} \\ = \left(\sum_ {L ^ {\prime} = 0} ^ {\infty} \left(e ^ {\zeta / 2}\right) ^ {L ^ {\prime}} \left[ x ^ {L ^ {\prime}} y ^ {L} \right] F _ {\xi} (x, y)\right) ^ {2} \\ = \left(\left[ y ^ {L} \right] F _ {\xi} \left(e ^ {\zeta / 2}, y\right)\right) ^ {2}. \\ \end{array} +$$ + +The result is identical with $F_{\xi, \mathrm{la}}$. On the other hand, using Jensen's inequality, + +$$ +\begin{array}{l} k (L, L) = L \left(\frac {1}{L} \sum_ {L ^ {\prime} = 0} ^ {L} \left(e ^ {L ^ {\prime} \zeta / 2} Q _ {L ^ {\prime}, L}\right) ^ {2}\right) \\ \geq L \left(\frac {1}{L} \sum_ {L ^ {\prime} = 0} ^ {L} e ^ {L ^ {\prime} \zeta / 2} Q _ {L ^ {\prime}, L}\right) ^ {2} \\ = \frac {1}{L} \left(\left[ y ^ {L} \right] F _ {\xi} \left(e ^ {\zeta / 2}, y\right)\right) ^ {2}. \\ \end{array} +$$ + +Now we build a thick tailed $h \in \mathcal{H}_k$. + +Proposition B.43. Say $0 < \pi < 1$. For the alignment kernel, there is an $h \in \mathcal{H}_k$ such that $(h|k_X)_k = [y^{|X|}]F_{\xi}\left(\pi e^{\zeta/2}|\mathcal{B}|^{-1/2}, y\right)$.
For the local alignment kernel, there is an $h \in \mathcal{H}_{k_{\mathrm{la}}}$ such that $(h|k_{\mathrm{la},X})_k = C + [y^{|X|}]F_{\xi,\mathrm{la}}\left(\pi e^{\zeta/2}|\mathcal{B}|^{-1/2}, y\right)$ for some constant $C$. In both cases, for any $X \in S$, we have $k_X \lesssim h$. + +Proof. Define $k_{L} = |\mathcal{B}|^{-L}\sum_{|X| = L}k_{X}$. If $Y, Y' \in S$ and $|Y| = |Y'| = L'$, one can check that $(k_{L}|k_{Y})_{k} = (k_{L}|k_{Y'})_{k}$, and thus $(k_{L}|k_{Y})_{k} = (k_{L}|k_{L'})_{k}$. One can show that $k$ restricted to $\{k_0, k_1, \ldots\}$ is identical to the string kernel in the case $|\mathcal{B}| = 1$ with $\zeta$ replaced by $\zeta - \log |\mathcal{B}|$. We will create an $h$ for this kernel with $(h|k_L)_k = [y^L]F_\xi \left(e^{\zeta /2}|\mathcal{B}|^{-1 / 2}\pi ,y\right)$, and the proposition will follow from the fact that $(h|k_Y)_k = (h|k_{|Y|})_k$. + +We define $h = \sum_{L} \alpha^{L} k_{L}$ for some $\alpha > 0$. Note that since $k(X, Y) \geq 0$ for all $X, Y$, we have $k_{X} \lesssim h$ for any $X$. Now write + +$$ +\left(h \mid u _ {L ^ {\prime}}\right) _ {k} = \sum_ {L} \alpha^ {L} \left[ x ^ {L ^ {\prime}} y ^ {L} \right] F _ {\xi} (x, y) = \left[ x ^ {L ^ {\prime}} \right] F _ {\xi} (x, \alpha). +$$ + +Thus, since $F_{\xi}(x,y) = f_{\xi}(y)(1 - xyf_{\xi}(y))^{-1} = f_{\xi}(y)\sum_{L = 0}^{\infty}x^{L}(yf_{\xi}(y))^{L}$, + +$$ +\| h \| _ {k} ^ {2} = \sum_ {L} e ^ {\zeta L} | \mathcal {B} | ^ {- L} \left([ x ^ {L} ] F _ {\xi} (x, \alpha)\right) ^ {2} = f _ {\xi} (\alpha) ^ {2} \sum_ {L} e ^ {\zeta L} | \mathcal {B} | ^ {- L} \left(\alpha f _ {\xi} (\alpha)\right) ^ {2 L} +$$ + +which is finite as long as $\pi = \alpha f_{\xi}(\alpha)e^{\zeta /2}|\mathcal{B}|^{-1 / 2} < 1$. We can pick $\alpha$ to let $\pi$ be any positive value $< 1$.
In this case + +$$ +\begin{array}{l} (h | k _ {L ^ {\prime}}) _ {k} = \sum_ {L} e ^ {\zeta L} | \mathcal {B} | ^ {- L} (h | u _ {L}) _ {k} Q _ {L, L ^ {\prime}} \\ = \sum_ {L} e ^ {\zeta L} | \mathcal {B} | ^ {- L} \left(f _ {\xi} (\alpha) \left(\alpha f _ {\xi} (\alpha)\right) ^ {L}\right) \left[ x ^ {L} y ^ {L ^ {\prime}} \right] F _ {\xi} (x, y) \\ = f _ {\xi} (\alpha) \sum_ {L} \left(\pi e ^ {\zeta / 2} | \mathcal {B} | ^ {- 1 / 2}\right) ^ {L} \left[ x ^ {L} y ^ {L ^ {\prime}} \right] F _ {\xi} (x, y) \\ = f _ {\xi} (\alpha) \left[ y ^ {L ^ {\prime}} \right] F _ {\xi} \left(\pi e ^ {\zeta / 2} | \mathcal {B} | ^ {- 1 / 2}, y\right). \\ \end{array} +$$ + +We now turn to the very similar case of $k_{\mathrm{la}}$ . The norm of $h = \sum_{L} \alpha^{L} k_{\mathrm{la},L}$ is + +$$ +\sum_ {L} e ^ {\zeta L} | \mathcal {B} | ^ {- L} \left([ x ^ {L} ] F _ {\xi , \mathrm {l a}} (x, \alpha)\right) ^ {2} = \left(\frac {1}{1 - \alpha}\right) ^ {2} + \left(\frac {\alpha}{1 - \alpha}\right) ^ {2} \sum_ {L = 1} ^ {\infty} e ^ {\zeta L} | \mathcal {B} | ^ {- L} \left(\alpha f _ {\xi} (\alpha)\right) ^ {2 (L - 1)} +$$ + +which is finite again as long as $\pi = \alpha f_{\xi}(\alpha)e^{\zeta /2}|\mathcal{B}|^{-1 / 2} < 1$ + +$$ +\begin{array}{l} \left(h \mid k _ {\mathrm {l a}, L ^ {\prime}}\right) _ {k} = \sum_ {L} e ^ {\zeta L} Q _ {\mathrm {l a}, L, L ^ {\prime}} \left([ x ^ {L} ] F _ {\xi , \mathrm {l a}} (x, \alpha)\right) \\ = \frac {1}{1 - \alpha} Q _ {\mathrm {l a}, 0, L ^ {\prime}} + \frac {\alpha}{(1 - \alpha) ^ {2} \alpha f _ {\xi} (\alpha)} \sum_ {L = 1} ^ {\infty} e ^ {\zeta L / 2} | \mathcal {B} | ^ {- L / 2} Q _ {\mathrm {l a}, L, L ^ {\prime}} \pi^ {L} \\ = \frac {1}{1 - \alpha} - \frac {\alpha}{(1 - \alpha) \alpha f _ {\xi} (\alpha)} Q _ {\mathrm {l a}, 0, L} + \frac {\alpha}{(1 - \alpha) \alpha f _ {\xi} (\alpha)} [ y ^ {L ^ {\prime}} ] F _ {\xi , \mathrm {l a}} (e ^ {\zeta / 2} | \mathcal {B} | ^ {- 1 / 2} \pi , y). 
\\ \end{array} +$$ + +Finally note $Q_{\mathrm{la},0,L} = 1$. + +Thus, to analyze the tails of the alignment kernel, we will need to analyze $[y^L]F_{\xi}(x,y)$ and $[y^L]F_{\xi,\mathrm{la}}(x,y)$. The coefficients will turn out to depend on the polynomial $1 - (1 + x)y + \xi x y^2$. We rewrite the polynomial as $1 - (1 + x)y + \xi x y^2 = (1 - r_1y)(1 - r_2y)$ for roots $r_1(x,\xi) \geq r_2(x,\xi)$, which are + +$$ +\frac {1}{2} \left(1 + x \pm \sqrt {(1 + x) ^ {2} - 4 \xi x}\right). +$$ + +These values are decreasing with $\xi$, positive, and distinct when $\xi < 1$ since $(1 + x)^2 - 4\xi x > (1 + x)^2 - 4x = (x - 1)^2 \geq 0$. When $\xi < 1$, $r_1$ is also always $> 1$ since it is $> \frac{1}{2} (1 + x + |x - 1|) = x \vee 1$. When $\xi = 0$, $r_1 = x + 1$, $r_2 = 0$. We now see that if $\Delta \mu < \infty$ then the coefficients grow exponentially. However, if $\Delta \mu = \infty$ the coefficients may grow or shrink exponentially or, in the case of $F_{\xi,\mathrm{la}}$, grow exponentially or polynomially. + +Proposition B.44. If $\xi < 1$ and $x > 0$, both $[y^L]F_{\xi}(x,y)$ and $[y^L]F_{\xi,\mathrm{la}}(x,y)$ are equal to $Cr_1(x,\xi)^L + O(L)$ for some (different) $C > 0$. If $\xi = 1$ then for the alignment kernel, $[y^L]F_1(x,y) = x^L$; for the local alignment kernel, if $x > 1$, then $[y^L]F_{1,\mathrm{la}}(x,y) = x^L + O(L)$; if $x < 1$, then $[y^L]F_{1,\mathrm{la}}(x,y) = CL + C' + o(1)$ for some $C > 0, C'$; if $x = 1$, then $[y^L]F_{1,\mathrm{la}}(x,y) = L(L + 1)/2 + 1$. + +Proof. First let us consider the case of $\xi = 0$. + +$$ +F _ {0} (x, y) = \frac {1}{1 - (1 + x) y} = \sum_ {L = 0} ^ {\infty} (1 + x) ^ {L} y ^ {L} +$$ + +$$ +F _ {0, \mathrm {l a}} (x, y) = \frac {x y}{(1 - y) (1 - (1 + x) y)} + \frac {1}{1 - y}.
+$$ + +By partial fraction decomposition, for some $A,B$ with $A,B\neq 0$ and constants $c_{1},c_{2}$, + +$$ +\begin{array}{l} F _ {0, \mathrm {l a}} (x, y) = \frac {A x y}{1 - (1 + x) y} + \frac {B x y}{1 - y} + \frac {1}{1 - y} \\ = c _ {1} + c _ {2} y + \sum_ {L = 2} ^ {\infty} \left(A x (1 + x) ^ {L - 1} + B x + 1\right) y ^ {L}. \\ \end{array} +$$ + +The leading term in the brackets is $Ax(1 + x)^{L - 1}$ and, since the coefficients of $F_{0,\mathrm{la}}$ are positive, $A > 0$. + +Now we consider the case when $0 < \xi < 1$. + +$$ +F _ {\xi} (x, y) = \frac {(1 - \xi y)}{(1 - r _ {1} y) (1 - r _ {2} y)} +$$ + +so by partial fraction decomposition, for $A,B\neq 0$, + +$$ +F _ {\xi} (x, y) = (1 - \xi y) \left(\frac {A}{1 - r _ {1} y} + \frac {B}{1 - r _ {2} y}\right) = \sum_ {L = 0} ^ {\infty} \left(A r _ {1} ^ {L} - A \xi r _ {1} ^ {L - 1} + B r _ {2} ^ {L} - B \xi r _ {2} ^ {L - 1}\right) y ^ {L}. +$$ + +Since $r_1 > 1 > \xi$, the leading term in the brackets is $A(1 - \xi / r_1) r_1^L$. Similarly, $[y^L] F_{\xi, \mathrm{la}} = Cr_1^L + O(L)$ for some $C > 0$. Now we look at when $\xi = 1$. Here $f_{\xi}(y) = 1$. Thus, + +$$ +F _ {1} (x, y) = \frac {1}{1 - x y} = \sum_ {L = 0} ^ {\infty} x ^ {L} y ^ {L} +$$ + +$$ +F _ {1, \mathrm {l a}} (x, y) = \frac {x y}{(1 - y) ^ {2} (1 - x y)} + \frac {1}{1 - y}. +$$ + +If $x \neq 1$, again by partial fraction decomposition, for some $A, B, C$ with $A, B \neq 0$ and $C \neq 1$, and constants $c_0, c_1$, + +$$ +\begin{array}{l} F _ {1, \mathrm {l a}} (x, y) = \frac {A x y}{1 - x y} + \frac {B x y (y - C)}{(1 - y) ^ {2}} + \frac {1}{1 - y} \\ = c _ {0} + c _ {1} y + \sum_ {L = 2} ^ {\infty} \left(A x ^ {L} + B x \binom {L - 1} {1} - B C x \binom {L} {1} + 1\right) y ^ {L} \\ \end{array} +$$ + +so that the leading term is $Ax^L$ if $x > 1$ or $Bx\binom{L-1}{1} - BCx\binom{L}{1}$ if $x < 1$. Since $C \neq 1$, the latter term equals $CL + C'$ for new constants $C > 0$ and $C'$.
If $x = 1$ , + +$$ +F _ {1, \mathrm {l a}} (x, y) = \frac {1}{1 - y} + \frac {y}{(1 - y) ^ {3}} = 1 + \sum_ {L = 1} ^ {\infty} \left(\binom {L + 2 - 1} {2} + 1\right) y ^ {L} +$$ + +so that $[y^L]F_{1,\mathrm{la}}(x,y) = L(L + 1) / 2 + 1$ . + +Now, combining the results of these last three propositions, we have proven Proposition B.36. To begin proving Proposition B.37, we first tighten our estimate of $\sqrt{k(X,X)}$ in the case $\Delta \mu = \infty$ . + +Proposition B.45. Say $\Delta \mu = \infty$ . $\sup_{|X| = L}\sqrt{k(X,X)} = e^{L\zeta /2}$ and $\sup_{|X| = L}\sqrt{k_{\mathrm{la}}(X,X)}$ is $\sim e^{L\zeta /2}$ if $\zeta >0$ , is $\sim L^{3 / 2}$ if $\zeta = 0$ , and is $\sim L$ if $\zeta < 0$ . + +Proof. When $\Delta \mu = \infty$ , $Q$ is the identity matrix, so that + +$$ +k (L \times A, L \times A) = e ^ {L \zeta}. +$$ + +On the other hand, $Q_{\mathrm{la},0,L} = 1$ for all $L$ and, for $L \geq L' > 0$ , + +$$ +[ x ^ {L ^ {\prime}} y ^ {L} ] F _ {1, \mathrm {l a}} (x, y) = [ y ^ {L} ] \frac {y}{(1 - y) ^ {2}} y ^ {L ^ {\prime} - 1} = [ y ^ {L - L ^ {\prime}} ] (1 - y) ^ {- 2} = L - L ^ {\prime} + 1. +$$ + +Thus, + +$$ +\begin{array}{l} k (L \times A, L \times A) = 1 + \sum_ {L ^ {\prime} = 1} ^ {L} e ^ {L ^ {\prime} \zeta} (L - L ^ {\prime} + 1) ^ {2} \\ = 1 + e ^ {(L + 1) \zeta} \sum_ {L ^ {\prime} = 1} ^ {L} e ^ {- (L - L ^ {\prime} + 1) \zeta} (L - L ^ {\prime} + 1) ^ {2} \\ = 1 + e ^ {(L + 1) \zeta} \sum_ {L ^ {\prime} = 1} ^ {L} e ^ {- L ^ {\prime} \zeta} L ^ {\prime 2}. \\ \end{array} +$$ + +If $e^{\zeta} > 1$ , the sum is increasing and bounded, so, $k(L \times A, L \times A) = 1 + Ce^{L\zeta}(1 + o(1))$ for some $C > 0$ . If $e^{\zeta} = 1$ , we have $k(L \times A, L \times A) = 1 + CL^3(1 + o(1))$ for some $C > 0$ . 
Finally, if $e^{\zeta} < 1$ , since + +$$ +k (L \times A, L \times A) = 1 + L ^ {2} \sum_ {L ^ {\prime} = 1} ^ {L} e ^ {L ^ {\prime} \zeta} \left(1 - \frac {L ^ {\prime} - 1}{L}\right) ^ {2}, +$$ + +the sum is increasing and bounded with $L$ so that $k(L \times A, L \times A) = 1 + CL^2(1 + o(1))$ for some $C > 0$ . + +Next we must look at when a tilted alignment kernel is $C_0$ . + +Proposition B.46. Say $\tilde{A}:\mathbb{N}\to (0,\infty)$ and $A(X) = \tilde{A} (|X|)$ . If $k^A$ is a bounded kernel, then it is $C_0$ if and only if $\tilde{A} (L)[y^{L}]F_{\xi}(\pi e^{\zeta /2}|\mathcal{B}|^{-1 / 2},y)\to 0$ for some $\pi < 1$ . If the latter condition holds for some value of $\pi$ then it holds for any value of $0 < \pi < 1$ . The same is true with $k_{\mathrm{la}}$ and $F_{\xi ,\mathrm{la}}$ . + +Proof. Let $h$ be defined as in Proposition B.43 for some $1 > \pi > 0$ . $h \in \mathcal{H}_k$ so $Ah \in \mathcal{H}_{k^A}$ . $k_X^A \lesssim Ah$ for all $X$ by Proposition B.43 so $k^A$ is $C_0$ if and only if $Ah \in C_0(S)$ . Finally, $Ah \in C_0(S)$ if and only if $\tilde{A}(L)[y^L]F_{\xi}(\pi e^{\zeta/2}|\mathcal{B}|^{-1/2}, y) \to 0$ . Similar logic proves the same for $k_{\mathrm{la}}$ and $F_{\xi,\mathrm{la}}$ . + +Finally we prove Proposition B.37. Recall that $k_{\mathrm{ISK}}$ is the special case of $k_{\mathrm{la}}$ with $\zeta = 0$ and $\Delta \mu = \infty$ . + +Proposition B.47. (Proof of Proposition B.37) Let $A(X) = |X|^{-3/2}$ . $k_{\mathrm{ISK}}^A$ is a bounded $C_0$ kernel, i.e. for all $f \in \mathcal{H}_k$ , $f \in C_0(S)$ . $k_{\mathrm{ISK}}^A$ is also non-vanishing, i.e. $\sqrt{k_{\mathrm{ISK}}^A(X,X)} \nrightarrow 0$ as $|X| \to \infty$ . Moreover, if we set $h = \sum_{X \in \mathcal{B}} k_{\mathrm{ISK},X}^A$ , then $h(X)$ depends only on $|X|$ and $h(X) = |X|^{-1/2} + 4|X|^{-3/2}$ . + +Proof. Note by proposition B.45, $\sup_{|X| = L}\sqrt{k_{\mathrm{ISK}}(X,X)}\sim L^{3 / 2}$ . Thus, $k_{\mathrm{ISK}}^A$ is bounded and non-vanishing. 
On the other hand, if $\pi < 1$, $[y^L ]F_{\xi,\mathrm{la}} (\pi e^{\zeta /2}|\mathcal{B}|^{-1 / 2},y)\sim L$ by Proposition B.44, so, by Proposition B.46, $k_{\mathrm{ISK}}^A$ is $C_0$. + +Finally, letting $h = \sum_{X \in \mathcal{B}} k_{\mathrm{ISK}, X}$ and noting that if $X \in \mathcal{B}$, $k_{\mathrm{ISK}, X}(Y) = \#(X \text{ in } Y) + 1$ (the plus one for $\phi_{\emptyset}$), we have that $h(Y) = |Y| + |\mathcal{B}|$. After tilting by $A(X)$, we obtain the proposition statement. + +# C. Experimental Details + +In this section we describe the details of our experiments. Note we used $\chi(t) = t \wedge 1$ in all cases unless otherwise specified. + +# C.1. Goodness of Fit Test Bootstrap + +To get $p$-values for our goodness of fit tests, we used a bootstrap procedure as in Liu et al. (2016). Given $X_{1},\ldots ,X_{n}\in S$, for each $i$ we sampled $Y_{X_{i},1},\dots ,Y_{X_{i},N_{n}}$ by taking a single step of a Markov chain with transition matrix $K_{X\to Y}$ defined in Section B.3.7. We defined our test U-statistic as + +$$ +U = \frac {1}{n ^ {2}} \sum_ {i \neq j} \mathrm {flux} _ {p} (X _ {i}) \mathrm {flux} _ {p} (X _ {j}) \frac {1}{N _ {n} ^ {2}} \sum_ {m, m ^ {\prime}} k ((X _ {i}, Y _ {X _ {i}, m}), (X _ {j}, Y _ {X _ {j}, m ^ {\prime}})). +$$ + +We bootstrapped by sampling, for each $b = 1,\dots ,B$, $(w_{(b),i})_{i=1}^{n}\sim \mathrm{Multinomial}(n, (1 / n,\ldots ,1 / n))$ and defining + +$$ +U _ {(b)} = \frac {1}{n ^ {2}} \sum_ {i \neq j} (w _ {(b), i} - 1) (w _ {(b), j} - 1) \mathrm {flux} _ {p} (X _ {i}) \mathrm {flux} _ {p} (X _ {j}) \frac {1}{N _ {n} ^ {2}} \sum_ {m, m ^ {\prime}} k ((X _ {i}, Y _ {X _ {i}, m}), (X _ {j}, Y _ {X _ {j}, m ^ {\prime}})). +$$ + +Then we defined a $p$-value $p = \# \{b \mid U_{(b)} \geq U\} / B$. Throughout we use $B = 1000$. + +# C.2.
Kernel Parameters + +In every case we used $\lambda = 1 / 5$ for the Exp-H kernel, $\beta = 1 / 2$ for the IMQ-H kernel, $\zeta = \log |\mathcal{B}|$ and $\Delta \mu = 0.2$ for the alignment kernel, and $\epsilon = 0.2$ for the tilting parameter of the alignment kernel. We set $C = 1$ for the IMQ-H kernel when in a vector field kernel and $C = 3$ when in a scalar field kernel. For embedding kernels, we set the bandwidth parameter $\sigma$ to be the median distance between rescaled embeddings. + +Given a kernel $k$ we define its normalized tilting as $\tilde{k}(X,Y) = k(X,Y) / \sqrt{k(X,X)k(Y,Y)}$. This is how we define the IMQ (N), Ali (N) and ISK (N) kernels. + +![](images/8be46084f10cdcc75c4967c664c47911bc98f52483d926798923ea5c0e10f25e.jpg) +Figure 6. Dependence of the Power of the Goodness of Fit Test on Sequence Length. Performance of the KSD-B, using a vector field kernel (vf KSD-B), with increasing sequence length. Dotted lines show the results for data sampled from the unperturbed model $p$, to evaluate calibration. + +# C.3. pHMM Model + +Throughout our simulations we used random profile hidden Markov models (pHMMs). We now define the prior from which we draw random pHMMs. + +We first defined a site-wise independent model for sequences of length $L$. To do so, we sampled the logits + +$$ +h _ {l, b} ^ {- 2} \sim \beta_ {h} ^ {- 1} \mathrm {Gamma} (\alpha_ {h}). +$$ + +Then we defined our site-wise independent model as + +$$ +\tilde {p} (X) \propto \exp \left(\sum_ {l, b} h _ {l, b} \mathbb {1} \left(X _ {(l)} = b\right)\right). +$$ + +$\tilde{p}$ is supported on the set of sequences of length $L$. + +Next we added insertions and deletions using a MuE model we sampled with indel_prior.bias = 5.0 (Weinstein & Marks, 2021). In particular, we set the latent sequence of the MuE model to be a sample from $\tilde{p}$. The final distribution we call $p$. $p$ is a pHMM and its likelihoods can be calculated in closed form (Weinstein & Marks, 2021).
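Since $\tilde{p}$ factorizes over positions, sampling from the site-wise model amounts to an independent softmax draw at each site. Below is a minimal numpy sketch of this prior and sampler; the sizes `L`, `B` and the hyperparameter values are illustrative stand-ins, not the exact settings of any particular experiment.

```python
import numpy as np

rng = np.random.default_rng(0)
L, B = 20, 4               # sequence length and alphabet size (example values)
alpha_h, beta_h = 1.0, 0.5

# h_{l,b}^{-2} ~ beta_h^{-1} Gamma(alpha_h), so h_{l,b} = g^{-1/2}.
g = rng.gamma(alpha_h, size=(L, B)) / beta_h
h = g ** -0.5

# p~(X) ∝ exp(sum_l h[l, X_(l)]) factorizes: site l follows softmax(h[l, :]).
site_probs = np.exp(h - h.max(axis=1, keepdims=True))
site_probs /= site_probs.sum(axis=1, keepdims=True)

# Draw one latent sequence of length L.
X = np.array([rng.choice(B, p=site_probs[l]) for l in range(L)])
```

The MuE indel process would then be applied to such a latent sequence to produce the final observed sequence.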
To sample from and calculate likelihoods for this pHMM we used code from https://github.com/pyro-ppl/pyro/tree/dev/pyro/contrib/mue under the MIT licence. + +# C.4. Powerful Goodness of Fit Test with the KSD-B (Fig. 2(a)) + +For this experiment only we used $\chi(t) = \sqrt{t}$. We sampled a pHMM $p$ with $L = 20, |\mathcal{B}| = 4, \alpha_h = 1, \beta_h = 0.5$. We then set $h_{4,b} = 500\,\mathbb{1}(b = C)$ so that the 5-th letter of the latent sequence is very likely a $C$ (recall indexing of the sequence begins at 0). To make position 5 of the latent sequence coincide with position 5 of the sampled sequence, we subtracted 5 from the first 5 positions of the insertion and deletion logit matrices of the MuE indel model. We finally made a $C$ in position 6 unlikely by subtracting 500 from $h_{5,C}$. + +We then perturbed the pHMM by making other letters more probable as the 5-th letter: we defined, for perturbation weight $\gamma$, $\tilde{h}_{4,b} = \log (\gamma /3)$ if $b\neq C$ and $\tilde{h}_{4,C} = \log (1 - \gamma)$ so that the probability of the 5-th letter being a $C$ is $1 - \gamma$. Finally we sampled and tested from a pHMM with $\tilde{h}$ for a variety of perturbation weights $\gamma$. We sampled and tested 25 times from the same models and reported the fraction of null hypotheses rejected. + +# C.5. Testing pHMM Models (Figures 2(b) and 6) + +We now define the Potts model for sequences of length $L$ that we used in the experiment. First we sampled the mean field parameters $h_{l,b}^{-2} \sim \beta_h^{-1} \mathrm{Gamma}(\alpha_h)$ and the coupling parameters $e_{l,l',b,b'} \sim \sigma_{l,l'} N(0,1.2^2)$ where $\sigma_{l,l'} \sim \mathrm{Bern}(0.9)$.
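The parameter sampling just described can be sketched as follows; the array sizes are illustrative assumptions, and the Bernoulli mask $\sigma_{l,l'}$ is drawn once per position pair $(l, l')$ and shared across all letter pairs $(b, b')$, matching its indexing above.

```python
import numpy as np

rng = np.random.default_rng(1)
L, B = 15, 4                # example sequence length and alphabet size
alpha_h, beta_h = 1.0, 3.0

# Mean-field parameters: h_{l,b}^{-2} ~ beta_h^{-1} Gamma(alpha_h).
h = (rng.gamma(alpha_h, size=(L, B)) / beta_h) ** -0.5

# Couplings: e_{l,l',b,b'} ~ sigma_{l,l'} N(0, 1.2^2), sigma_{l,l'} ~ Bern(0.9).
sigma = rng.binomial(1, 0.9, size=(L, L))
e = sigma[:, :, None, None] * rng.normal(0.0, 1.2, size=(L, L, B, B))
```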
+ +Then, for a cross-term perturbation weight $\gamma$, we defined our Potts model as + +$$ +\tilde {p} (X) \propto \exp \left(\sum_ {l, b} h _ {l, b} \mathbb {1} \left(X _ {(l)} = b\right) + \gamma \sum_ {l, l ^ {\prime}, b, b ^ {\prime}} e _ {l, l ^ {\prime}, b, b ^ {\prime}} \mathbb {1} \left(X _ {(l)} = b, X _ {(l ^ {\prime})} = b ^ {\prime}\right)\right). +$$ + +$\tilde{p}$ is supported on the set of sequences of length $L$. We then sampled from a Potts model using code from https://github.com/debbiemarkslab/plmc under the MIT licence (Hopf et al., 2017). + +We used $\alpha_{h} = 1, \beta_{h} = 3, |\mathcal{B}| = 4$ to define a Potts model. We added insertions and deletions using a MuE model we sampled with indel_prior.bias $= 5.0$ to get a distribution $p$ that depends on $\gamma$. Note when $\gamma = 0$, $p$ is a pHMM and we can calculate its likelihoods. Finally, we sampled and tested for a variety of perturbation weights $\gamma$, repeating the experiment 25 times. We sampled and tested 50 times from the same models and reported the fraction of null hypotheses rejected. + +To produce Fig. 6 we first sampled 3 pHMMs for each latent sequence length with the parameters above. For $\gamma = 0, 0.3$ we performed the same testing procedure, this time using $N_{n} = 10$ and repeating the sampling and testing 25 times. + +# C.6. Testing Autoregressive Models (Fig.
2(c)) + +We defined a lag 2 linear autoregressive model as + +$$ +X _ {1} = C +$$ + +$$ +X _ {L} \sim \mathrm {Categorical} \left(\mathcal {B} \cup \{\$ \}, (q (X _ {(L - 2: L)}, b ^ {\prime \prime})) _ {b ^ {\prime \prime} \in \mathcal {B} \cup \{\$ \}}\right) +$$ + +where, for tensors $A,B$ and perturbation weight $\gamma >0$, + +$$ +\begin{array}{l} \left(q \left(X _ {(L - 2: L)}, b ^ {\prime \prime}\right)\right) _ {b ^ {\prime \prime} \in \mathcal {B} \cup \{\$ \}} = \operatorname {softmax} \left(\sum_ {l = 1} ^ {2} \sum_ {b \in \mathcal {B}} A _ {l, b, b ^ {\prime \prime}} \mathbb {1} \left(X _ {(L - l)} = b\right) \right. \\ + \gamma \sum_ {l = 1} ^ {2} \sum_ {l ^ {\prime} = 1} ^ {2} \sum_ {b, b ^ {\prime} \in \mathcal {B}} B _ {l, l ^ {\prime}, b, b ^ {\prime}, b ^ {\prime \prime}} \mathbb {1} (X _ {(L - l)} = b \text { and } X _ {(L - l ^ {\prime})} = b ^ {\prime}) \Big) _ {b ^ {\prime \prime} \in \mathcal {B} \cup \{\$ \}} \\ \end{array} +$$ + +where $\$$ represents a stop. We set $X_1 = C$ to avoid empty sequences. We set $A_{l,b,\$} = 5$ and $B_{l,l',b,b',\$} = 1/2$. We sampled + +$$ +(A _ {l, b, b ^ {\prime}}) _ {b ^ {\prime} \in \mathcal {B}} \sim \frac {5}{2} \mathrm {Multinomial} ((1 / | \mathcal {B} |) _ {b \in \mathcal {B}}), +$$ + +$$ +(B _ {l, l ^ {\prime}, b, b ^ {\prime}, b ^ {\prime \prime}}) _ {b ^ {\prime \prime} \in \mathcal {B}} \sim \frac {5}{4} \mathrm {Multinomial} ((1 / | \mathcal {B} |) _ {b \in \mathcal {B}}). +$$ + +We set $p$ to be this autoregressive model and perturbed $p$ by increasing $\gamma$. We sampled and tested 100 times from the same models and reported the fraction of null hypotheses rejected. + +# C.7. Testing without Normalized Likelihoods (Fig. 2(d)) + +First we sampled a pHMM model with $L = 10$, $|\mathcal{B}| = 4$, $\alpha_h = 1$, $\beta_h = 0.5$ which we call $\pi$. We then sampled $X_0 \sim \pi$.
+ +We now define a distribution for sequences "descended from $X_0$ " given a parameter $t$ that controls the substitution rate, $\kappa(\cdot | X, t)$ . We let $\kappa(\cdot | X, t)$ be a pHMM with the same indel process as $\pi$ but with + +$$ +h _ {l, b} | X, t = \log \left(1 + t ^ {- 1} \mathbb {1} (X _ {(l)} = b)\right) +$$ + +so that sequences are less likely to have substitutions when $t$ is smaller, i.e. "the sequence is more closely related to $X$ ". + +Next we sampled $Y_{1},\ldots ,Y_{5}\sim \kappa (\cdot |X_{0},t = 1)$ . Finally, we defined the posterior reconstruction of $X_0$ as $p(X|Y_1,\dots ,Y_5,t)\propto \pi (X)\prod_{i = 1}^{5}\kappa (Y_i|X,t)$ . To sample from this model, we performed path-auxiliary MCMC sampling (Sun et al., 2022). Finally we sampled from $p(X|Y_1,\dots ,Y_5,t)$ for a variety of perturbation weights $\gamma = t - 1$ and test whether sequences came from the correct posterior $p(X|Y_1,\dots ,Y_5,t = 1)$ for which we can calculate unnormalized likelihoods. We sampled and tested 25 times from the same models and reported the fraction of null hypotheses rejected. + +# C.8. Comparing Approximating the KSD-B and Shrinking the Graph (Fig. 3) + +Baum et al. (2022) suggested constructing a computationally tractable KSD-B by replacing the graph $M$ with a smaller altered graph. To do so, we identify each letter in $\mathcal{B}$ with a number modulo $|\mathcal{B}|$ , $0,1,\ldots ,19\in Z_{|\mathcal{B}|}$ . Conceptually, we assume letters that are closer to 0 in $Z_{|\mathcal{B}|}$ , that is $0,1, - 1 = 19,2, - 2 = 18,\dots$ are more hydrophobic and those close to 10 are hydrophilic. Next we define the altered graph $M_{(\tau)}$ so that $XM_{(\tau)}Y$ if $X$ and $Y$ differ by a single substitution of a letter $b$ to $b^{\prime}$ with $|b - b^{\prime}|\leq \tau$ . For instance, if $\tau = 2$ , $b = 1$ is connected only to $b^{\prime}\in \{-1,0,1,2,3\}$ in $M_{(\tau)}$ . 
Then the KSD according to this method is + +$$ +E _ {X, X ^ {\prime} \sim q} \sum_ {Y M _ {(\tau)} X ,\, Y ^ {\prime} M _ {(\tau)} X ^ {\prime}} T _ {p, X \rightarrow Y} T _ {p, X ^ {\prime} \rightarrow Y ^ {\prime}} k ((X, Y), (X ^ {\prime}, Y ^ {\prime})). +$$ + +We sampled a pHMM $p$ with $L = 15$, $|\mathcal{B}| = 20$, $\alpha_h = 0.1$, $\beta_h = 0.5$. We then perturbed $p$ to generate sequences with more hydrophobic residues by defining a perturbed mean field parameter $\tilde{h}$ with $\tilde{h}_{l,b} = h_{l,b} + 0.16|b - 10|$, so that $\tilde{h}_{l,b}$ was made larger when $b$ is far from 10, i.e. close to 0. We then sampled and tested 100 samples from the perturbed and unperturbed pHMM, and calculated the KSD as in Baum et al. (2022) with $\tau = 1,2,3$. We sampled and tested 100 times from the same model and reported the fraction of null hypotheses rejected. + +# C.9. Designing Synthesis Procedures (Fig. 4) + +First we downloaded 115 thousand CDR3 protein sequences, varying in length from 10 to 27, from patient 1 of 10x Genomics (2022). We trained a MuE model with latent sequence length 17 with $|\mathcal{B}| = 20$ using the code from https://github.com/pyro-ppl/pyro/tree/dev/pyro/contrib/mue. We then trained 16 stochastic synthesis models described in Weinstein et al. (2022b), varying $N_{\mathrm{templates}} = 1, 10, 100, 1000$ and the synthesis strategy among finite nucleotide mixtures with alphabet size 8, enzymatic mutagenesis with mutazymeII, finite codon mixtures with alphabet size 24, and arbitrary codon mixtures. We trained these models using the code at https://github.com/debbiemarkslab/variational-synthesis under the MIT licence. After training each model, we sampled 100 sequences from each and performed a goodness of fit test on each set of 100 sequences. We sampled and tested 25 times using $N_{n} = 20$ from the same models and reported the fraction of null hypotheses rejected. + +# C.10. Evaluating Large Models Fit to Protein Families (Fig.
5) + +We first gathered data sets of evolutionarily related sequences of four protein regions studied in Shin et al. (2021): YAP1_HUMAN (36 AA), IF1_ECOLI (72 AA), CALM1_HUMAN (149 AA), and TRPC_THEMA (252 AA). We held out $20\%$ of the sequences from each set and trained a deep generative autoregressive model (Wavenet; Shin et al. (2021)) on the held-in sequences. To do so we used code from https://github.com/debbiemarkslab/SeqDesign/. We then performed the KSD-B goodness of fit test, comparing the trained model to the held-out sequences. We also tested the goodness of fit of a large transformer model trained on a data set of all known proteins (Tranception; Notin et al. (2022)) using code from https://github.com/OATML-Markslab/Tranception. For $n$ varying from 10 to 1000, we sampled 5 sets of $N_{n} = 10$ mutants for each observed sequence and calculated the KSD-B for Wavenet and Tranception. We then performed a goodness of fit test for each set of sequences and models. + +We chose $k$ to be a scalar or vector field embedding kernel in our KSD-B. In particular, we considered the scalar field kernel $k_{F,\mathrm{IMQ},\sigma}$. We also built a vector field kernel $k_{F,\mathrm{vf}} = k_{\delta} + k_{\mathrm{HT}}$ with $k_{\mathrm{HT}}((X,Y),(X',Y')) = k_{F,\mathrm{IMQ},\sigma}(X,Y')\mathbb{1}(|X|\geq |Y|)\mathbb{1}(|X'|\geq |Y'|)$ and $k_{\delta}((X,Y),(X',Y')) = (k_{F,\mathrm{EXP},\sigma}(X,X') + k_{F,\mathrm{EXP},\sigma}(Y,Y'))^2$ for $(X,Y),(X',Y')\in M^{\sigma}$. Note that our theoretical results on discrete masses in vector field kernels show that these kernels, like the scalar field kernels, are likely to have discrete masses; moreover, though we have not proven that the proposed kernel satisfies the assumptions of Thm. 5.2 (detecting non-convergence), its form is chosen based on our theoretical analysis in Sections 7 and B.4.3.
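To make the combination $k_{F,\mathrm{vf}} = k_\delta + k_{\mathrm{HT}}$ concrete, here is a hypothetical numpy sketch. The toy embedding `emb` (letter counts), the exact IMQ/EXP base-kernel forms, and the bandwidth are stand-in assumptions for illustration, not the trained embeddings or parameters used in the experiments.

```python
import numpy as np

def imq(u, v, sigma=1.0, beta=0.5):
    # Inverse multiquadric kernel on embedding vectors (assumed form).
    return (1.0 + np.sum((u - v) ** 2) / sigma ** 2) ** (-beta)

def exp_k(u, v, sigma=1.0):
    # Exponentiated-quadratic kernel on embedding vectors (assumed form).
    return np.exp(-np.sum((u - v) ** 2) / (2 * sigma ** 2))

def emb(seq):
    # Toy stand-in embedding: letter counts over a 4-letter alphabet.
    counts = np.zeros(4)
    for s in seq:
        counts[s] += 1.0
    return counts

def k_vf(X, Y, Xp, Yp):
    # k_vf = k_delta + k_HT as in the text:
    # k_HT applies the IMQ kernel with indicators on sequence lengths,
    # k_delta squares a sum of EXP kernels on the pair components.
    k_ht = imq(emb(X), emb(Yp)) * float(len(X) >= len(Y)) * float(len(Xp) >= len(Yp))
    k_delta = (exp_k(emb(X), emb(Xp)) + exp_k(emb(Y), emb(Yp))) ** 2
    return k_delta + k_ht

# Evaluate on two toy neighbor pairs (X, Y) and (X', Y').
val = k_vf([0, 1, 2], [0, 1], [1, 2], [1])
```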
+ +We also compare the power of goodness of fit tests of the scalar field kernel $k_{F,\mathrm{IMQ},\sigma}$ and the vector field kernel $k_{F,\mathrm{vf}}$ in Fig. 5. In Fig. 7 we evaluate how accurate our KSD-B approximations are. Finally we also compare the power of goodness of fit tests of two scalar field kernels $k_{F,\mathrm{IMQ},\sigma}$ and $k_{F,\mathrm{EXP},\sigma}$ in Fig. 8. + +![](images/7b0a252ddfe9e713038b1ad60b814e32a66915cddad26d3c8810fc1c1d87b862.jpg) +Figure 7. Reliability of KSD-B Approximations. We examine the variance of the KSD-B estimated across 5 independent samples of the $N_{n} = 10$ mutants for four protein families. We plot our estimates for different $n$ for Wavenet (Blue) and Tranception (Orange) and for the vector field KSD-B (top row) and the scalar field KSD-B (bottom row). + +# C.10.1. TESTING PROTEIN UNIVERSE MODELS WITH THE KSD-B + +There has been some debate over the value of family-specific protein models versus models of the entire "protein universe", such as Tranception (Notin et al., 2022), Progen2 (Nijkamp et al., 2022), ESM1v (Meier et al., 2021), and UniRep (Alley et al., 2019). Of course, a model trained to fit the set of all proteins will achieve much smaller likelihoods on a particular protein family than a model trained to only fit that family. Despite their low likelihoods, "protein universe" models are able to predict the effects of mutations on a protein just as well as generative models trained on a single protein family (Notin et al., 2022). It is hypothesised that this is due to the fact that protein-universe models fit the distribution of sequences well "locally". + +The KSD-B can be used to help compare family-specific and protein universe models, and understand their relative performance.
In particular, the KSD-B is an excellent tool for evaluating protein universe models' performance on specific protein families, because it uses only local differences in likelihood (the likelihood's "slope") to evaluate goodness of fit, rather than the likelihood itself (Eqn. 1). Formally, one can hypothesise that for a particular protein family, say YAP1, the distribution learned by a protein universe model may be written as $p_{\mathrm{ProteinUniverse}} = \alpha \mu + (1 - \alpha) \nu$, where $\mu$ is a distribution just over the YAP1 protein family, $\nu$ is a distribution over everything else (which has little mass on YAP1), and $\alpha$ is a very small number that represents the fraction of the protein universe that is in the YAP1 family. We can think of $\mu$ as describing the "local fitness landscape" of YAP1; it is the part of the model responsible for accurate mutation effect predictions for YAP1. Now, if $q$ is the true distribution of YAP1 sequences found in nature, we have $\mathrm{KSD - B}_{p_{\mathrm{ProteinUniverse}}, k}(q) \approx \mathrm{KSD - B}_{\mu, k}(q)$, where equality holds when the support of $\nu$ is not connected to $\mu$ at all, i.e. the protein family is completely isolated in sequence space. Thus, the KSD-B can be used to check the local fit of the model, $\mu$, to the protein family distribution $q$. + +In Fig. 5 we see no evidence that the protein universe model (Tranception) learns a better model of local, family sequence distributions than a family-specific model (Wavenet). + +# D. Supplementary Code + +The supplementary code (https://github.com/AlanNawzadAmin/KSD-B/) provides a Jupyter notebook (KSD-B theory example.ipynb) recreating Fig. 1(a) and 1(b) using the IMQ-H (U), IMQ-H (N), and IMQ-H+Exp-H kernels. + +![](images/02119e61859c11745ed8ae66c57fe7a7bd50e1ab46367c99397eddf4fc91b250.jpg) +Figure 8.
Scalar Field KSD-B Power using Embedding Kernels We perform a goodness of fit test with two scalar field embedding kernels across 5 independent samples of the $N_{n} = 10$ mutants for four protein families. We use the IMQ (solid) and EXP (dashed) embedding kernels and see that our goodness of fit tests have similar power for the two kernels. \ No newline at end of file diff --git a/akernelizedsteindiscrepancyforbiologicalsequences/images.zip b/akernelizedsteindiscrepancyforbiologicalsequences/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..a38f6f8ffad9293ac007c88fb4c5ea422a150b40 --- /dev/null +++ b/akernelizedsteindiscrepancyforbiologicalsequences/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:5fec4c67da8e70d9ff95b3a6d092a9d5ee8f0510dcf4a8fa2bd499caec77b9f7 +size 2110433 diff --git a/akernelizedsteindiscrepancyforbiologicalsequences/layout.json b/akernelizedsteindiscrepancyforbiologicalsequences/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..029deb794d95078077e45449df2e8c12e6b31b5c --- /dev/null +++ b/akernelizedsteindiscrepancyforbiologicalsequences/layout.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:581ee3c6c7f2b5eda63046debf1d57b67064c1f8ca8bf04239e9bd2b3ddc69b3 +size 3519510 diff --git a/akernelsteintestofgoodnessoffitforsequentialmodels/e2eb09a1-c941-4302-8709-a02a3cdf8b9d_content_list.json b/akernelsteintestofgoodnessoffitforsequentialmodels/e2eb09a1-c941-4302-8709-a02a3cdf8b9d_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..1c83daeafb7e6e6cce7d83e20c98dc52c5a01cd2 --- /dev/null +++ b/akernelsteintestofgoodnessoffitforsequentialmodels/e2eb09a1-c941-4302-8709-a02a3cdf8b9d_content_list.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:00dfd1d8767cad53349887eb3ed899453583f4eb667b1d3488becdac9acb8b28 +size 131480 diff --git 
a/akernelsteintestofgoodnessoffitforsequentialmodels/e2eb09a1-c941-4302-8709-a02a3cdf8b9d_model.json b/akernelsteintestofgoodnessoffitforsequentialmodels/e2eb09a1-c941-4302-8709-a02a3cdf8b9d_model.json new file mode 100644 index 0000000000000000000000000000000000000000..696b09eb99c91795b186c2e971b2000db045902c --- /dev/null +++ b/akernelsteintestofgoodnessoffitforsequentialmodels/e2eb09a1-c941-4302-8709-a02a3cdf8b9d_model.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:93f4abe23787e797fd2beea83054d40deecd44c8ee69cfa4a01e401580f15bf1 +size 157285 diff --git a/akernelsteintestofgoodnessoffitforsequentialmodels/e2eb09a1-c941-4302-8709-a02a3cdf8b9d_origin.pdf b/akernelsteintestofgoodnessoffitforsequentialmodels/e2eb09a1-c941-4302-8709-a02a3cdf8b9d_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..46da68365a42e27988907994b0be4c9c1ded259c --- /dev/null +++ b/akernelsteintestofgoodnessoffitforsequentialmodels/e2eb09a1-c941-4302-8709-a02a3cdf8b9d_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:1527e5994019456ecc000079987b5d7cf8bba2ee01f3625deece4ea64cde03f0 +size 383756 diff --git a/akernelsteintestofgoodnessoffitforsequentialmodels/full.md b/akernelsteintestofgoodnessoffitforsequentialmodels/full.md new file mode 100644 index 0000000000000000000000000000000000000000..a886f6f7630e9a803492801df5d98f66e61ec0d2 --- /dev/null +++ b/akernelsteintestofgoodnessoffitforsequentialmodels/full.md @@ -0,0 +1,550 @@ +# A Kernel Stein Test of Goodness of Fit for Sequential Models + +Jerome Baum $^{*1}$ Heishiro Kanagawa $^{*23}$ Arthur Gretton + +# Abstract + +We propose a goodness-of-fit measure for probability densities modeling observations with varying dimensionality, such as text documents of differing lengths or variable-length sequences. 
The proposed measure is an instance of the kernel Stein discrepancy (KSD), which has been used to construct goodness-of-fit tests for unnormalized densities. The KSD is defined by its Stein operator: current operators used in testing apply to fixed-dimensional spaces. As our main contribution, we extend the KSD to the variable-dimension setting by identifying appropriate Stein operators, and propose a novel KSD goodness-of-fit test. As with the previous variants, the proposed KSD does not require the density to be normalized, allowing the evaluation of a large class of models. Our test is shown to perform well in practice on discrete sequential data benchmarks.

*Equal contribution $^{1}$ Department of Computer Science, University College London, UK $^{2}$ Gatsby Computational Neuroscience Unit, UCL $^{3}$ School of Mathematics, Statistics and Physics, Newcastle University, UK. Correspondence to: Jerome Baum , Heishiro Kanagawa .

Proceedings of the $40^{th}$ International Conference on Machine Learning, Honolulu, Hawaii, USA. PMLR 202, 2023. Copyright 2023 by the author(s).

# 1. Introduction

This paper addresses the problem of evaluating a probabilistic model for observations with variable dimensionality. This problem commonly arises in sequential data analysis, where sequences of different lengths are observed, and generative models (e.g., Markov models) are used to draw inferences. Our problem setting also concerns a scenario where a data point is a collection of observations that may not be sequentially ordered, as in the topic modeling of text documents. Our task is formalized as follows: given a sample $\{X_{i}\}_{i = 1}^{n}$ from a distribution $Q$ on a sample space $\mathcal{X}$ (e.g., the set of all possible sequences), we aim to quantify the discrepancy of a distribution $P$ modeling $Q$.

One approach to this problem is generating samples from the model and computing a sample-based discrepancy measure, such as the maximum mean discrepancy (MMD) (Gretton et al., 2012; Lloyd & Ghahramani, 2015). A disadvantage of this approach is that it potentially requires many samples to see the departure of the model from the data. For example, if a sequence model misspecifies a particular transition from a given history, we would need to generate sequences sharing this history to observe the disparity. Unfortunately, the MMD approach becomes less efficient as the state space enlarges or the length of the history grows, necessitating repeated sampling. This observation motivates using a measure that exploits information provided by the model, such as dependence relations between different time points.

A model-dependent measure may be derived based on Stein's method (Stein, 1972), a technique from probability theory developed originally to obtain explicit rates of convergence to normality. The key construct from Stein's method is a distribution-specific operator called a Stein operator that modifies a function to have zero expectation under the distribution. A model-specific Stein operator $\mathcal{A}_P$ may be defined to construct a zero-expectation function $\mathcal{A}_P f$; its expectation $\mathbb{E}_{X\sim Q}[\mathcal{A}_P f(X)]$ under the sample distribution serves as a discrepancy measure, since a non-zero expectation indicates the deviation from the model. One may generalize this idea to a family of functions $\mathcal{F}$, and the resulting worst-case summary $\sup_{f\in \mathcal{F}}|\mathbb{E}_{X\sim Q}[\mathcal{A}_P f(X)]|$ is called a Stein discrepancy, proposed by Gorham & Mackey (2015). An appropriate choice of the operator and the function class yields a computable Stein discrepancy.
This paper focuses on an instance called the kernel Stein discrepancy (KSD) (Oates et al., 2016; Chwialkowski et al., 2016; Liu et al., 2016; Gorham & Mackey, 2017), where functions from a reproducing kernel Hilbert space (RKHS) are used. + +The KSD is a versatile framework for designing goodness-of-fit measures. Given a Stein operator and a reproducing kernel, the KSD is expressed by an expectation of a Stein-modified kernel, leading to tractable estimators only involving sample evaluations of the modified kernel. This feature allows us to use a wealth of kernels from the literature: by designing a Stein operator, one can adapt the kernel to the model and define a bespoke discrepancy measure. Based on the zero-mean kernel theory by Oates et al. (2016), Chwialkowski et al. (2016) and Liu et al. (2016) originated this line of work, where they combined the Langevin Stein + +operator proposed by Gorham & Mackey (2015) with an RKHS; remarkably, the resulting (Langevin) KSD is computable for densities with unknown normalizing constants. There have been numerous extensions to models in other data domains: categorical data (Yang et al., 2018), point-pattern data (Yang et al., 2019), censored data (Fernandez et al., 2020), directional data (Xu & Matsuda, 2020), and functional data (Wynne et al., 2022). We refer the reader to Anastasiou et al. (2021) for a review on the KSD's applications outside goodness-of-fit testing. + +A limitation of the preceding Stein works is that they require the model distribution to be defined on a fixed-dimension space. For example, Kanagawa et al. (2023) evaluate latent Dirichlet allocation models (Blei et al., 2003) for text documents assuming a fixed document length, an assumption unlikely to hold in practice. Relevant work has been accomplished by Wynne et al. 
(2022), where they propose a KSD for distributions on an (infinite-dimensional) Hilbert space, and treat continuous-time models, such as Brownian motions and stochastic differential equation models. While their test applies to sequential models, our target setting is different, as we focus on discrete-time models such as Markov chains, and do not require a density with respect to a Gaussian measure.

In the present work, we extend the KSD framework to the variable-dimension setting. Specifically, we identify a Stein operator for distributions on the set of all univariate sequences of finite alphabets. This is to our knowledge the first such operator, and it may be of independent interest. Our Stein operator builds on the class of Zanella-Stein operators recently proposed by Hodgkinson et al. (2020), which are derived from a Markov jump process having the target distribution as its invariant measure; we review the Zanella-Stein operator in Section 2. In Section 3, we derive a new Stein operator using a Markov process admitting transitions between sets of different sizes. Based on our Stein operator and the associated KSD, we propose a novel goodness-of-fit test for sequential models. As in previous variants, the proposed KSD does not require the model to be normalized, allowing the evaluation of intractable models, including Markov random fields (Section 4.5) and conditional generative models (Section 4.6). As the proposed operator involves a number of tunable parameters, our second contribution is to offer guidance on how to select these based on empirical power analysis.

# 2. Background

All Stein discrepancies are built around a particular Stein operator. In this section, we recall a class of operators, the Zanella-Stein operators, and describe the challenges in constructing operators of this class in the sequential setting.
We also review the associated kernel Stein discrepancy (KSD), a goodness-of-fit measure that leverages a given Stein operator to construct a test statistic.

Zanella-Stein Operator The Zanella-Stein (ZS) operators are a class of Stein operators for distributions on a countable set $\mathcal{X}$, proposed by Hodgkinson et al. (2020) following the generator method of Barbour (1988). In the following, we assume that the model $P$ has probability mass function $p$ positive everywhere in $\mathcal{X}$. For a real-valued function $f$ on $\mathcal{X}$, a ZS operator $\mathcal{A}_P$ is defined by

$$
\left(\mathcal{A}_{P} f\right)(x) := \sum_{y \in \partial x} g\left(\frac{p(y)}{p(x)}\right)(f(y) - f(x)), \tag{1}
$$

where $\partial x \subset \mathcal{X}$ is a set of neighborhood points, and $g:(0,\infty)\to (0,\infty)$ is a balancing function, which must satisfy $g(t) = t g(1/t)$ for $t > 0$. Several choices of $g$ are known in the literature: the minimum probability flow operator discussed in (Barp et al., 2019) uses $g(t) = \sqrt{t}$; the choice $g(t) = t/(1 + t)$ is known as the Barker Stein operator (Barker, 1965; Shi et al., 2022).

A Zanella-Stein operator is the infinitesimal generator of a Markov jump process called a Zanella process (Zanella, 2019; Power & Goldman, 2019) designed to have the target distribution $P$ as an invariant distribution. The process is based on the idea of locally informed proposals: these jump from a given point $x$ to any of its neighbors $y$, at a rate $g(p(y)/p(x))$ so that detailed balance is satisfied for $P$, ensuring that $P$ is an invariant distribution. For the operator to characterize $P$, we must specify our notion of neighborhood. Let $(\mathcal{V},\mathcal{E})$ be the directed graph with vertices $\mathcal{V} = \mathcal{X}$ and edges $\mathcal{E} = \{(x,y) \mid y \in \partial x\}$ induced by the chosen neighborhood structure.
The graph $(\mathcal{V},\mathcal{E})$ must have the following properties: + +1. Symmetry: $(x,y)\in \mathcal{E}\Leftrightarrow (y,x)\in \mathcal{E}$ +2. Strong connectivity: the transitive closure of $\mathcal{E}$ is $\mathcal{X} \times \mathcal{X}$ , so that there is a path from every point to every other point. + +In fact, symmetry alone implies that $P$ is in the set of invariant distributions; strong connectivity is required to ensure that $P$ is the unique invariant distribution, as otherwise the corresponding Stein discrepancy cannot distinguish $P$ from other distributions even for a rich test function class. In Hodgkinson et al. (2020), the aperiodicity condition is additionally required. For a pure jump-type process, we only need the irreducibility of the process (Kallenberg, 2021, Theorem 13.12), and hence we may dispense with this condition. Note that we can conveniently choose a sparse neighborhood to accelerate the computation, although this may result in reduced sensitivity of the discrepancy. In Section 4, we discuss in more detail the tradeoff between computational cost and test power. + +Challenges in the Variable-Dimension Setting The generator method (as in the ZS operator above) allows us to obtain an operator for any state space with an appropriate Markov process. Indeed, the Zanella process is not the only allowable choice: other Markov chains or processes have also been considered for discrete spaces, including birth-death processes (Shi et al., 2022), the Glauber dynamics Markov chain (Reinert & Ross, 2019; Bresler & Nagaraj, 2019) (we refer the reader to Shi et al., 2022, for a review). Defining a Markov process is often a challenge, however; prior work has thus only considered processes in fixed-dimensional spaces, and has not dealt with sequential models such as those considered here. The ZS operators are particularly attractive in our setting, since they may be defined on any discrete set. 
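To make the construction concrete, the following sketch (our own illustration, not part of the paper's code) applies the ZS operator (1) with the Barker balancing function on a small cyclic state space, whose neighborhood graph satisfies both properties, and verifies the zero-expectation property $\mathbb{E}_{X\sim P}[(\mathcal{A}_P f)(X)] = 0$. Note that only probability ratios enter the operator, so $p$ may be left unnormalized:

```python
import math

# Toy state space: integers 0..4 with cyclic neighbors, so the induced
# graph is symmetric and strongly connected.
states = list(range(5))
p_unnorm = [math.exp(-0.5 * s) for s in states]  # unnormalized mass function
Z = sum(p_unnorm)                                # normalizer (for the check only)

def neighbors(x):
    return [(x - 1) % 5, (x + 1) % 5]

def g(t):
    # Barker balancing function; satisfies g(t) = t * g(1/t).
    return t / (1.0 + t)

def zs_operator(f, x):
    # (A_P f)(x) = sum over y in the neighborhood of g(p(y)/p(x)) (f(y) - f(x));
    # only ratios of p appear, so the normalizer Z is never needed here.
    return sum(g(p_unnorm[y] / p_unnorm[x]) * (f(y) - f(x)) for y in neighbors(x))

f = lambda x: x ** 2  # an arbitrary test function
expectation = sum((p_unnorm[x] / Z) * zs_operator(f, x) for x in states)
print(abs(expectation))  # zero up to floating-point error
```

The cancellation follows because, for the Barker choice, $p(x)\,g(p(y)/p(x)) = p(x)p(y)/(p(x)+p(y))$ is symmetric in $x$ and $y$ while $f(y)-f(x)$ is antisymmetric, so the edge contributions cancel pairwise over the symmetric graph.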
The main challenge in deriving an operator in this class is the construction of an appropriate neighborhood for models of variable-length sequences, which is highly nontrivial. Our contribution is to develop an effective neighborhood definition for the sequential setting, which satisfies the requirements outlined above and yields an associated test with good statistical power against alternatives.

Kernel Stein Discrepancy We next review the construction of a test statistic from the Stein operator. A computable discrepancy measure may be defined using a reproducing kernel Hilbert space (RKHS) (Aronszajn, 1950; Steinwart & Christmann, 2008). Following (Oates et al., 2016; Chwialkowski et al., 2016; Liu et al., 2016; Gorham & Mackey, 2017), we define the kernel Stein discrepancy (KSD) associated with our (yet to be specified) Zanella-Stein operator as follows:

$$
\operatorname{KSD}[Q \| P] := \sup_{\|f\|_{\mathcal{H}} \leq 1}\left|\mathbb{E}_{X \sim Q}\left[\mathcal{A}_{P}(f)(X)\right]\right|, \tag{2}
$$

where $\mathcal{H}$ is the RKHS of real-valued functions on $\mathcal{X}$ corresponding to a positive definite kernel $k:\mathcal{X}\times \mathcal{X}\to \mathbb{R}$ with the natural norm $\|f\|_{\mathcal{H}} = \sqrt{\langle f,f\rangle_{\mathcal{H}}}$ defined by the inner product $\langle \cdot,\cdot\rangle_{\mathcal{H}}$. By the vanishing property $\mathbb{E}_{Y\sim P}[\mathcal{A}_P f(Y)] = 0$, the KSD is zero if $Q = P$. The other direction requires further assumptions on the operator and the RKHS. According to Hodgkinson et al. (2020, Proposition 4), when the corresponding Zanella process is exponentially ergodic and the RKHS is $C_0$-universal, we have $\mathrm{KSD}[Q\| P] = 0$ only if $P = Q$. For a finite state space $\mathcal{X}$, exponential ergodicity is satisfied by any irreducible Markov jump process.
The $C_0$-universality is equivalent to the kernel being integrally strictly positive definite (Sriperumbudur et al., 2011); in the particular case $|\mathcal{X}| < \infty$, this condition states that the Gram matrix defined over all the configurations on $\mathcal{X}$ is strictly positive definite.

The use of an RKHS yields a closed-form expression of the KSD (Hodgkinson et al., 2020, see Proposition 1 and the paragraph following the proof):

$$
\operatorname{KSD}^{2}[Q \| P] = \mathbb{E}_{X, X^{\prime} \sim Q \otimes Q}[h_{p}(X, X^{\prime})], \tag{3}
$$

where $X$, $X'$ are i.i.d. random variables with law $Q$, provided $\mathbb{E}_{X \sim Q}[h_p(X, X)^{1/2}] < \infty$. The proof of this result follows as in (Gorham & Mackey, 2017, Proposition 2), noting that the inner product between $f$ and $\mathcal{A}_P k(x, \cdot)$ reproduces $\mathcal{A}_P f(x)$ for any $f \in \mathcal{H}$. The function $h_p$ is called a Stein kernel, defined by

$$
h_{p}(x, y) = \sum_{\nu \in \partial x} \sum_{\tilde{\nu} \in \partial y} g_{\nu}(x)\, g_{\tilde{\nu}}(y) \left\{ k(\nu(x), \tilde{\nu}(y)) + k(x, y) - k(x, \tilde{\nu}(y)) - k(\nu(x), y) \right\},
$$

where we identify each point in the neighborhood $\partial x$ with the mapping $\nu$ taking $x$ to that point, and $g_{\nu}(x) = g\{p(\nu(x)) / p(x)\}$.

Given a sample $\{X_{i}\}_{i = 1}^{n} \stackrel{\mathrm{i.i.d.}}{\sim} Q$, the squared KSD expression (3) admits a simple unbiased estimator

$$
U_{n, P}\left(\{X_{i}\}_{i = 1}^{n}\right) := \frac{1}{n(n - 1)} \sum_{i \neq j}^{n} h_{p}\left(X_{i}, X_{j}\right), \tag{4}
$$

which is a U-statistic (Hoeffding, 1948). By the zero-mean property of the Stein kernel, the U-statistic is degenerate if $Q = P$ (Chwialkowski et al., 2016; Liu et al., 2016).
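For concreteness, the Stein kernel and the degeneracy property $\mathbb{E}_{X\sim P}[h_p(X, y)] = 0$ can be checked numerically on a toy finite state space (our own sketch, not the authors' code; the base kernel is an arbitrary choice):

```python
import math

states = list(range(5))
p = [math.exp(-0.5 * s) for s in states]   # unnormalized model mass function
Z = sum(p)                                 # normalizer (for the check only)

def neighbors(x):
    # Symmetric, strongly connected cyclic neighborhood.
    return [(x - 1) % 5, (x + 1) % 5]

def g(t):
    return t / (1.0 + t)                   # Barker balancing function

def k(x, y):
    return math.exp(-abs(x - y))           # a simple base kernel (assumed)

def h_p(x, y):
    # Stein kernel: double sum over the neighborhood maps of x and y.
    total = 0.0
    for nx in neighbors(x):
        for ny in neighbors(y):
            gx = g(p[nx] / p[x])
            gy = g(p[ny] / p[y])
            total += gx * gy * (k(nx, ny) + k(x, y) - k(x, ny) - k(nx, y))
    return total

# E_{X~P}[h_p(X, y)] = 0 for every fixed y: this is what makes the
# U-statistic estimator degenerate when the data follow P itself.
max_dev = max(abs(sum((p[x] / Z) * h_p(x, y) for x in states)) for y in states)
print(max_dev)  # numerically zero
```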
In this case, the scaled statistic $nU_{n,P}(\{X_i\}_{i=1}^n)$ asymptotically follows the law of $\sum_{j=1}^{\infty} \lambda_j(Z_j^2 - 1)$, where $Z_j$ are i.i.d. standard normal variables, and $\lambda_j$ are eigenvalues given by the eigenvalue problem $\sum_{x' \in \mathcal{X}} h_p(x, x') a(x') p(x') = \lambda a(x)$ with $\sum_{x \in \mathcal{X}} a(x)^2 p(x) < \infty$ (see, e.g., Serfling, 2009, Section 5.5). One of our proposed tests in Section 3 simulates this distribution using a wild bootstrap procedure.

Kernels for Sequences The performance of the test developed in the next section depends on the choice of the reproducing kernel. As mentioned above, the $C_0$-universality is a condition that guides kernel selection since it renders the test consistent (along with the exponential ergodicity). A trivial example of $C_0$-universal kernels is the Dirac kernel that outputs 1 if two inputs are identical, and 0 otherwise; but this kernel might not be useful in practice, since all data points in the corresponding feature space are orthogonal, and provide no information about each other. As this example shows, kernels need not be universal to provide powerful tests, as long as they encode features relevant to the setting in which they are used. The literature on sequence kernels is well-established (see, e.g., Király & Oberhauser, 2019, for a recent account), and one could use existing kernels, such as global alignment kernels (Cuturi et al., 2007; Cuturi, 2011) and string kernels (Haussler, 1999; Lodhi et al., 2002; Leslie & Kuang, 2004). Alternatively, one may also define a kernel by the inner product of explicit features (e.g., a neural network embedding of sequences) instead of known kernels.

# 3. Testing Sequential Models

We now address the design of a Stein operator for the variable-dimension setting. We begin by formally defining the sample space $\mathcal{X}$. Let $\mathcal{S}$ be a nonempty finite set of symbols.
For an integer $\ell \geq 1$, we denote by $S^{(\ell)}$ the set of all length-$\ell$ sequences formed by symbols in $\mathcal{S}$; i.e., $S^{(\ell)} = \prod_{j=1}^{\ell} S$. Then, the sample space $\mathcal{X}$ is given by the set of all sequences $\bigsqcup_{\ell=1}^{\ell_{\max}} S^{(\ell)}$, where $\bigsqcup$ denotes the disjoint union, and $\ell_{\max}$ is $\infty$ or a finite positive integer. In this setting, a random sample in $\mathcal{X}$ is a sequence whose length is randomly determined.

As noted in Section 2, to obtain a KSD in this setting, we need to design a neighborhood structure which specifies our ZS operator. We first introduce a simple structure for sequences, which edits only the end of the string, to illustrate the required connectivity principles. We then describe a more advanced neighborhood structure, allowing for greater modifications, which will be the foundation of our tests.

Neighborhood Choice For the Stein operator to uniquely characterize the model, we require the strong connectivity of the induced graph. We achieve this by introducing inter- and intra-length state transitions. Specifically, for a length-$\ell$ sequence $x_{(1:\ell)} = (x_1,\ldots,x_\ell) \in S^{(\ell)}$, we propose the following neighborhood:

$$
\partial x_{(1:\ell)} = \mathcal{I}_{x} \cup \left\{x_{(1:\ell - 1)}\right\} \cup \mathcal{R}_{x} \tag{5}
$$

with

$$
\mathcal{I}_{x} = \left\{\left(x_{1}, \dots, x_{\ell}, s\right): s \in S\right\},
$$

$$
\mathcal{R}_{x} = \left\{\left(x_{1}, \dots, x_{\ell - 1}, s\right): s \in \mathcal{N}_{\mathrm{rep}}\left(x_{\ell}\right)\right\},
$$

where $\mathcal{N}_{\mathrm{rep}}(s)$ denotes a neighborhood chosen for a symbol $s \in S$; this is chosen such that the induced graph is strongly connected.
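As a concrete illustration (our own sketch, not the authors' code), the neighborhood (5) of a short sequence can be enumerated directly, here with the cyclic choice $\mathcal{N}_{\mathrm{rep}}(s) = \{s - 1, s + 1\}$ modulo $m$:

```python
def neighborhood(x, m):
    """Enumerate the neighborhood (5) of a tuple x over S = {0, ..., m-1}.

    Sketch: uses the cyclic symbol neighborhood N_rep(s) = {s-1, s+1} mod m;
    the substitution set is a tunable choice in the paper's framework.
    """
    nbrs = [x + (s,) for s in range(m)]       # I_x: append any symbol
    if len(x) > 1:                            # deletion of the last symbol
        nbrs.append(x[:-1])                   # (sequences have length >= 1)
    last = x[-1]
    for s in sorted({(last - 1) % m, (last + 1) % m}):
        nbrs.append(x[:-1] + (s,))            # R_x: replace the last symbol
    return nbrs

print(neighborhood((0, 1), 3))
```

For `x = (0, 1)` and `m = 3` this yields three lengthening moves, one shortening move, and two same-length substitutions, matching the three sets in (5).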
The first two sets in (5) represent the inter-length transitions: Each point of $\mathcal{I}_x$ corresponds to adding a symbol to the end of the given sequence $x_{(1:\ell)}$, whereas $\{x_{(1:\ell-1)}\}$ stands for the deletion of the last symbol and shortening the sequence. The third set $\mathcal{R}_x$ represents intra-length transitions, where the last symbol of the sequence is replaced. A trivial choice for satisfying the strong connectivity is using the whole alphabet set $\mathcal{S}$ for $\mathcal{N}_{\mathrm{rep}}(s)$ for each $s \in \mathcal{S}$. This approach may be permitted when the alphabet size is small. One might otherwise want to consider using smaller sets to reduce the computational cost. For example, we can use a smaller $\mathcal{N}_{\mathrm{rep}}(s)$, provided that the graph (with vertices $\mathcal{S}$) induced by the choice satisfies the conditions required in Section 2; e.g., as in (Yang et al., 2018), we may introduce a cyclic order $\{0, \ldots, |\mathcal{S}| - 1\}$ on $\mathcal{S}$, and take the two adjacent points $\mathcal{N}_{\mathrm{rep}}(s) = \{s + 1, s - 1\}$, where $s \pm 1$ denotes $(s \pm 1) \bmod |\mathcal{S}|$.

The structure proposed above is a minimal neighborhood choice that guarantees strong connectivity. Indeed, we can edit the length of a sequence, and change the character in each position to a desired one because of the strong connectivity in the intra-length transition. As we will see in Section 4, however, this minimal choice may not produce a powerful test, as it only uses information of neighboring-length sequences. For example, the tail insertion operation by $\mathcal{I}_x$ only encodes the structure of the target distribution in terms of the end of the sequence. For sequential models such as Markov chain models, the longer a sequence is, the more evidence we should have in favor of, or against, a given model.
Editing only the final element of a sequence would therefore not make use of such accumulated evidence. Indeed, this approach performs relatively weakly in our benchmark experiments (Figure 2b, case of $J = 1$).

Proposed Neighborhood Based on the above observation, we generalize the above neighborhood set. Specifically, we consider expanding the neighborhood set by modifying sequence elements at different locations: we propose the $J$-location modification neighborhood

$$
\partial_{x_{(1:\ell)}}^{J} = \bigcup_{j = 1}^{J}\left(\mathcal{I}_{x_{(1:\ell)},j} \cup \mathcal{D}_{x_{(1:\ell)},j} \cup \mathcal{R}_{x_{(1:\ell)},j}\right), \tag{6}
$$

where

$$
\mathcal{I}_{x_{(1:\ell)},j} = \left\{\operatorname{ins}\left(x_{(1:\ell)}, j, s\right): s \in \mathcal{N}_{\mathrm{ins}}\right\} \subset \mathcal{S}^{(\ell + 1)},
$$

$$
\mathcal{D}_{x_{(1:\ell)},j} = \left\{\operatorname{del}\left(x_{(1:\ell)}, j\right) \text{ if } x_{\ell - j + 1} \in \mathcal{N}_{\mathrm{ins}}\right\} \subset \mathcal{S}^{(\ell - 1)},
$$

$$
\mathcal{R}_{x_{(1:\ell)},j} = \left\{\operatorname{rep}\left(x_{(1:\ell)}, j, s\right): s \in \mathcal{N}_{\mathrm{rep}}\left(x_{\ell - j + 1}\right)\right\} \subset \mathcal{S}^{(\ell)}.
$$

Here, $\operatorname{ins}(x_{(1:\ell)}, j, s)$ denotes the sequence extending $x_{(1:\ell)}$ by inserting $s$ at the $(j-1)$-th location counting back from the end of the string, using symbols from a fixed set $\mathcal{N}_{\mathrm{ins}} \subset S$; $\operatorname{del}(x_{(1:\ell)}, j)$ deletes the $(j-1)$-th element counting back from the end of $x_{(1:\ell)}$; and $\operatorname{rep}(x_{(1:\ell)}, j, s)$ replaces by $s$ the $(j-1)$-th element from the end of $x_{(1:\ell)}$. Note that the deletion and insertion operations must be paired to ensure the symmetry of the graph.
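The edit operations $\operatorname{ins}$, $\operatorname{del}$, and $\operatorname{rep}$ can be sketched as follows (our own illustration; sequences are tuples, and location $j$ counts $(j-1)$ places back from the end, as above). The final loop checks that deletion undoes the matching insertion, which is the pairing required for graph symmetry:

```python
def ins(x, j, s):
    """Insert symbol s at the (j-1)-th slot counting back from the end."""
    i = len(x) - j + 1
    return x[:i] + (s,) + x[i:]

def delete(x, j):
    """Delete the (j-1)-th element counting back from the end."""
    i = len(x) - j
    return x[:i] + x[i + 1:]

def rep(x, j, s):
    """Replace the (j-1)-th element from the end with s."""
    i = len(x) - j
    return x[:i] + (s,) + x[i + 1:]

x = (0, 1, 2)
assert ins(x, 1, 9) == (0, 1, 2, 9)   # j = 1 appends at the end
assert delete(x, 1) == (0, 1)         # j = 1 deletes the last element
assert rep(x, 2, 9) == (0, 9, 2)      # j = 2 edits the second-to-last element
# Pairing: deletion at location j undoes the matching insertion.
for j in (1, 2, 3):
    assert delete(ins(x, j, 9), j) == x
```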
As with the substitution set $\mathcal{N}_{\mathrm{rep}}$, we may choose $\mathcal{N}_{\mathrm{ins}}$ such that it depends on the point $x$; the resulting neighborhood requires additional care to maintain symmetry and (optionally) strong connectivity. See Section 4.6 for an example.

Test Procedure Using one of the proposed neighborhoods for the Zanella-Stein operator, we can define a KSD goodness-of-fit test for sequential models. Given i.i.d. data $\{X_{i}\}_{i = 1}^{n} \stackrel{\mathrm{i.i.d.}}{\sim} Q$, we consider testing the null $H_0: \mathrm{KSD}[Q\| P] = 0$ against the alternative $H_{1}: \mathrm{KSD}[Q\| P] \neq 0$. Note that when the KSD distinguishes any distributions, the null and alternative become $H_0: P = Q$ and $H_1: P \neq Q$, respectively. Below, we present two tests, which differ in how they compute test thresholds.

# Algorithm 1 Parametric bootstrap test

Require: $P$; a target distribution on $\mathcal{X}$.

Require: $D_{n} = \{X_{1},\ldots ,X_{n}\}$; data.

Require: $\alpha$; the desired level of the test.

1: for $b = 1, \dots, B$ do
2: Sample $D_{n}^{(b)} = \{X_{1}^{(b)},\ldots ,X_{n}^{(b)}\} \stackrel{\mathrm{i.i.d.}}{\sim} P$
3: $\Delta_{b}\gets U_{n,P}\left(D_{n}^{(b)}\right)$
4: end for
5: $\hat{t}_{\alpha} \gets (1 - \alpha)$-quantile of $\{\Delta_1, \ldots, \Delta_B\}$
6: if $U_{n,P}(D_n) > \hat{t}_{\alpha}$ then
7: Reject $H_0$
8: end if

The first test is the parametric bootstrap test in Algorithm 1. The parametric bootstrap repeatedly draws samples from the model $P$ and simulates the distribution of the statistic (4). Because of this feature, this procedure does not apply if sampling from the model is infeasible.

The second test is the wild bootstrap test described in Algorithm 2, which is analogous to the existing KSD tests (Chwialkowski et al., 2016; Liu et al., 2016; Yang et al., 2018). As we have seen in Section 2, the asymptotic distribution of the (scaled) KSD estimate (4) is known.
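Algorithm 1, together with the estimator (4), can be sketched as follows (a minimal illustration of ours; `h` stands in for the model-specific Stein kernel $h_p$, and `sample_from_model` for a sampler from $P$ — both names are our own placeholders):

```python
import random

def u_stat(data, h):
    # Unbiased KSD estimator (4): average of h over ordered pairs i != j.
    n = len(data)
    total = sum(h(x, y) for i, x in enumerate(data)
                for j, y in enumerate(data) if i != j)
    return total / (n * (n - 1))

def parametric_bootstrap_test(data, h, sample_from_model, alpha=0.05, B=200):
    # Algorithm 1: estimate the null distribution of the statistic by
    # repeatedly drawing fresh samples of size n from the model P itself.
    n = len(data)
    null_stats = sorted(u_stat(sample_from_model(n), h) for _ in range(B))
    threshold = null_stats[min(B - 1, int((1 - alpha) * B))]
    return u_stat(data, h) > threshold  # True means: reject H_0
```

For instance, with the toy degenerate kernel $h(x, y) = (x - \mu)(y - \mu)$ and a sampler for a mean-$\mu$ model, the statistic has zero mean under the model, and the test rejects when the sample mean deviates from $\mu$.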
The wild bootstrap test simulates this asymptotic null distribution by repeatedly computing the following quantity:

$$
U_{n, P}^{W}\left(D_{n}\right) := \frac{\sum_{i \neq j}^{n}\left(W_{i} - 1\right)\left(W_{j} - 1\right) h_{p}\left(X_{i}, X_{j}\right)}{n(n - 1)} \tag{7}
$$

where $W = (W_{1}, \ldots, W_{n})$ is a sample from a multinomial distribution $\mathrm{Multi}(n; 1/n, \ldots, 1/n)$ with $n$ trials and $n$ events, drawn independently of the data $D_{n} = \{X_{1}, \dots, X_{n}\}$. The test has only an asymptotically correct size (Dehling & Mikosch, 1994, Theorem 3.1), and may not be suitable for a small sample size $n$. In contrast to the parametric bootstrap test, this test does not require sampling from the model, and needs only a single evaluation of the Stein kernel $h_p$ over the data. Treating the evaluation of the Stein kernel as constant, the test's computational complexity is $O(n^{2})$, since the computation of the KSD statistic is $O(n^{2})$; this computation can be parallelized.

# Algorithm 2 Wild bootstrap test

Require: $P$; a target distribution on $\mathcal{X}$.

Require: $D_{n} = \{X_{1},\ldots ,X_{n}\}$; data.

Require: $\alpha$; the desired level of the test.

1: for $b = 1, \dots, B$ do
2: Draw $W_{b} \sim \mathrm{Multi}(n; 1/n, \ldots, 1/n)$
3: $\Delta_{b}\gets U_{n,P}^{W_{b}}(D_{n})$
4: end for
5: $\hat{t}_{\alpha} \gets (1 - \alpha)$-quantile of $\{\Delta_1, \ldots, \Delta_B\}$
6: if $U_{n,P}(D_n) > \hat{t}_{\alpha}$ then
7: Reject $H_0$
8: end if

# 4. Experiments

We investigate the performance of the proposed test through synthetic experiments with defined ground truths. In particular, we aim to answer the following questions: (a) how leveraging model structure using a Stein kernel improves test performance over the maximum mean discrepancy (MMD) (Gretton et al., 2012, a purely sample-based kernel discrepancy where no model information is used); and (b) the effect of neighborhood choice on the KSD test power. In the following, we use the Barker balancing function $g(t) = t/(1 + t)$ to define the KSD.

Throughout our experiments, we will make use of two simple kernels. The first is the exponentiated Hamming kernel, $k_{\mathrm{H}}(x_{(1:\ell)}, y_{(1:\ell)}) \coloneqq \exp\{-\ell^{-1}\sum_{i = 1}^{\ell}(1 - \delta_{x_i}(y_i))\}$, where $\delta_x(y) = 1$ if $x = y$ and 0 otherwise; the exponent is thus the negative normalized Hamming distance. Additionally, we define $k_{\mathrm{H}}(x_{(1:\ell)}, y_{(1:\ell')}) = 0$ whenever $\ell \neq \ell'$. The second is a string kernel which we refer to as the contiguous subsequence kernel (CSK). This kernel counts the number of contiguous subsequences of a particular length, present in the two input sequences,

$$
k_{\mathrm{CSK}}(x, y) := \frac{k_{\mathrm{u}}(x, y)}{\sqrt{k_{\mathrm{u}}(x, x) \cdot k_{\mathrm{u}}(y, y)}},
$$

where

$$
k_{\mathrm{u}}\left(x_{(1:\ell)}, y_{(1:\ell')}\right) := \sum_{i = 1}^{\ell - t + 1} \sum_{j = 1}^{\ell' - t + 1} \delta_{x_{(i:i + t - 1)}}\left(y_{(j:j + t - 1)}\right),
$$

with parameter $t$ for the subsequence length. We use the normalized version so that the kernel does not overly depend on the sequence lengths. The Hamming kernel does not take into account the sequence structure and can be considered a naive choice, whereas the CSK kernel does.
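The CSK can be implemented directly from its definition (our own minimal sketch, not the authors' code; sequences are tuples or strings, and `t` is the subsequence-length parameter):

```python
import math

def k_u(x, y, t):
    # Count pairs of matching contiguous length-t subsequences of x and y.
    return sum(1 for i in range(len(x) - t + 1)
               for j in range(len(y) - t + 1)
               if x[i:i + t] == y[j:j + t])

def k_csk(x, y, t=2):
    # Normalized count, so the value does not overly depend on lengths.
    denom = math.sqrt(k_u(x, x, t) * k_u(y, y, t))
    return k_u(x, y, t) / denom if denom > 0 else 0.0
```

By construction `k_csk(x, x, t) == 1` for any sequence containing at least one length-`t` subsequence, and the kernel returns 0 when either input is shorter than `t`.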
# 4.1. Demonstration of Test Power

![](images/e4e6bf4b718bbf1f206fa4180389f41400340c3ccffeb136ba5d1b96596a9f70.jpg)
Figure 1: Estimated rejection rate (test power) of the proposed test when the model distribution and ground truth differ by a perturbation that is unlikely to be observed in any given sample.

We first show that the KSD indeed benefits from information supplied by the model. For this purpose, we construct a problem where the alternative is given by adding a slight perturbation to the model. We choose $S = \{0, \dots, m - 1\}$ for our alphabet, with $m = 8$. The model distribution is a random walk through $S$, starting from a random uniform element of $S$, with increments drawn uniformly from $\{-1, +1\}$. We use a cyclic structure, where we identify 0 with $m$ such that the random walk wraps around. The chain terminates with probability $1/20$ after each step. We construct an alternative distribution as follows: At every step, if and only if the current element is $0 \in S$, we randomly replace the increment by 0 with probability $p_{\mathrm{hold}} \in [0, 1/4]$. That is, we introduce a random holding step that occurs on average in $1/(4m) = 1/32$ of steps, when the perturbation parameter is at its maximum. The perturbation is unlikely to be observed in any given sample, making the problem challenging.

For the KSD test we use the $J$-location modification neighborhood (6) with $J = \infty$; i.e., we allow edits anywhere in the sequence. As baselines we use an MMD test that draws 100 samples from the model, a likelihood-ratio test where the alternative consists of all first-order Markov chains on $S$, and an oracle likelihood-ratio test where the alternative is the (generally unknown) ground truth distribution. Both kernel tests are built using the CSK kernel. We use parametric bootstrap for all four tests. Tests are evaluated on $n = 30$ i.i.d. sample points from the perturbed distribution.
+
+In Figure 1 we can see that the KSD test outperforms the two non-oracle baselines. At low degrees of perturbation, the problem is challenging because perturbations are rarely observed. The KSD test is able to overcome this difficulty by exploiting information provided by the model.
+
+# 4.2. Choice of Neighborhood Graph
+
+Our test in the preceding experiment performed well, in part due to a good choice of neighborhood. We now further investigate the choice of neighborhood for the Zanella-Stein operator. In order to compare the performance of different choices of Stein kernel, we constructed 12 synthetic testing scenarios with various properties, ensuring that our conclusions are broadly applicable and not specific to the properties of a particular model. The experiments in this section evaluate the average performance of each test across these scenarios. We use discrete-time Markov chains for the model $P$ and ground truth $Q$. The testing scenarios are fully specified in Appendix A.3 (see Appendix A.4 for individual results).
+
+More densely connected neighborhood graphs should yield more powerful tests, but such tests are slower to evaluate. To understand the tradeoff between compute budget and test power, we construct a family of KSD tests parameterized by the size of the neighborhood of each point. We consider three classes of neighborhood: (a) the $J$-location modification neighborhood $\partial_{x_{(1:\ell)}}^{J}$, (b) the substitution neighborhood $\cup_{j\leq J}\mathcal{R}_{x_{(1:\ell)},j}$, and (c) the neighborhood $\cup_{j\leq J}(\mathcal{I}_{x_{(1:\ell)},j}\cup \mathcal{D}_{x_{(1:\ell)},j})$ of insertions and deletions. In each case, the symbol neighborhoods $\mathcal{N}_{\mathrm{ins}}$ and $\mathcal{N}_{\mathrm{rep}}$ are set to the entire alphabet $S$. We denote the resulting Stein operators by ZS, $\mathrm{ZS}'$, and $\mathrm{ZS}''$, respectively. We construct KSDs by applying these operators to the two kernels specified earlier.
To estimate the critical value, we use parametric bootstrap. We also include a pair of MMD tests, one for each kernel, as a baseline.
+
+Results are in Figure 2a. Larger neighborhoods lead to more powerful tests, as expected. However, Figure 2b shows that we face diminishing returns when trading off compute budget against test power. The CSK kernel consistently outperforms the Hamming kernel. Interestingly, for the CSK kernel, the simpler $\mathrm{ZS^{\prime}}$ test performs slightly better than the ZS test: in this instance, the larger neighborhood of the latter does not yield significant additional information, while entailing a small increase in the variance of the test statistic. Both of these Stein tests outperform the CSK MMD for $J = \infty$.
+
+Another approach to controlling the size and connectivity of the neighborhood is to control the symbol neighborhoods $\mathcal{N}_{\mathrm{ins}}$ and $\mathcal{N}_{\mathrm{rep}}$. We consider the operator $\mathrm{ZS}'$ (a neighborhood consisting of substitutions) with a cyclic structure over the alphabet $S$ and the symbol neighborhood $\mathcal{N}_{\mathrm{rep}}(s)$ limited by the distance in the cyclic alphabet. In Figure 3 we can see no notable impact of the symbol neighborhood on test power. This implies that we may construct tests using a sparse neighborhood. The experiment also shows that changes to the size of the neighborhood are not sufficient to fully explain our previous results. Test power is evidently also impacted by the location of edits, not just their number.
+
+# 4.3. Use of Evidence in Long Sequences
+
+Edits across the entire sequence lead to higher test power.
One explanation for this effect is that editing the entire sequence, instead of just editing near the end, allows the test to make better use of the evidence available from longer + +![](images/94851bebad948800f1ed054ec4eb3cf38b59fde75da85fbdb4c6a965ccc19523.jpg) +(a) Rejection rate against the neighborhood size $J$ + +![](images/8f954a08fdc5628cd022245497aab2aa3ab756f90ec9459afddb2eaad72e5434.jpg) +(b) Rejection rate against median computation time + +![](images/30e22ad6443954eae8efe6a844502fd579138a73c566011f7813a68de29be2ca.jpg) +Figure 2: Estimated rejection rate of three different families of Zanella-Stein test with varying neighborhood size and underlying kernel. We evaluate the tests on 12 different synthetic testing scenarios, evaluating each test multiple times on independently sampled synthetic datasets. To estimate the overall rejection rate, we report a simple average across all test evaluations. Test power increases as we increase the neighborhood size. We see diminishing returns particularly when the Hamming kernel is used to construct the test. + +sequences. Where the data are distributed according to a Markov chain or some other model that mixes quickly, longer sequences provide more information about the distribution. We now show that the proposed test can indeed make use of such information. + +In order to illustrate this feature, we construct the following problem: The model distribution and ground truth are second-order Markov chains over a finite alphabet with $|S| = 8$ . We select two transition kernels randomly from a Dirichlet distribution with concentration parameter $\alpha = 1$ . The model distribution uses one of these two kernels, whereas the ground truth uses an even mixture of the two kernels, so that at each step each of the kernels is used with equal probability. The two chains both stop with a fixed probability $1 / \lambda$ after each step. 
We fix the sample size $n$ at 8 and vary the stopping probability in order to control + +![](images/35665c6cbbb3bf17f90f924f59ac2d4a844589cad65a09b22ba61eb69a124b8e.jpg) +Figure 3: Estimated rejection rate of several Zanella-Stein tests based on the substitution neighborhood. We control the size of the symbol neighborhood $\mathcal{N}_{\mathrm{rep}}$ and estimate rejection rate as in Figure 2. +Figure 4: Estimated rejection rate of the proposed test, where we control the expected length of the samples, while holding the sample count fixed at 8. As the expected length increases, test power grows. + +the length of the samples. + +The KSD test for this experiment is constructed using a large, strongly connected neighborhood: specifically, the $J$ -location modification neighborhood with $J = \infty$ , which allows insertion, deletion, and substitution anywhere in the sequence. As a baseline, we compare against an MMD test using 100 samples from the model distribution, as well as two likelihood-ratio tests: an oracle test, and a test where the alternative consists of all second-order Markov chains over $S$ . The two kernel tests are constructed using the CSK kernel with subsequence length $t = 3$ , and we use parametric bootstrap for all tests. + +The results are shown in Figure 4. We can see that the test is able to reject more often on longer sequences, without the need for more samples, and outperforms the MMD test. + +![](images/5ba75d53dd071dc3bf551ce0a3fe272c2222b3a2d8bfa27c6c3d5fb8fc31f69e.jpg) +Figure 5: Estimated rejection rate of the proposed KSD test. We use a randomly generated Markov chain as the model distribution and perturb the distribution of lengths for the data. The proposed KSD test fails to reject in this setting. + +# 4.4. Sensitivity to Changes in Length + +There is a caveat with the family of neighborhoods we have described: these neighborhoods are not particularly sensitive to changes in the distribution over sequence lengths. 
We demonstrate this point by choosing a randomly generated Markov chain over a finite alphabet with $|S| = 10$ as the model distribution, with the transition kernel again chosen from a Dirichlet distribution as above. Here, the ground truth and model use the same transition kernel, but we vary the stopping probability. The stopping probability is selected such that the expected sequence length is 8 under the model distribution, whereas we vary the expected length between 8 and 20 under the ground truth.
+
+We use the same tests as in the previous experiment, except that we use a subsequence length $t$ of 2 for the CSK kernel, and the alternative for the LR baseline now consists of first- rather than second-order Markov chains.
+
+In Figure 5, we can see that the KSD test fails to reject entirely, outperformed by both types of baseline. This experimental result highlights the need for bespoke tests when we require that a test be sensitive to a specific aspect of the distribution. A test user who is particularly interested in the length distribution would likely benefit from directly testing the distribution over lengths using a classical test.
+
+# 4.5. Testing Intractable Models
+
+A major benefit of the proposed test is that it does not require us to sample from the model distribution, nor to compute a normalized density. Here, we showcase our test using a Markov Random Field (MRF) model, a class of intractable models. An MRF model $P$ is defined by normalizing an exponentiated potential function $h$; i.e., we have $p(x) \propto \exp(h(x))$. The normalization constant is often intractable, and hence so is the probability mass function $p$.
+
+![](images/762866e369efd18f840f352ee3226eac704c2d2a6994e94415f8eca9fe30b753.jpg)
+Figure 6: Performance comparison using a simple MRF model with the concentration parameter $\theta$ varied; the parameter $\theta$ is fixed at $\theta = 1$ for the model, and is chosen from 0.75 to 1.25 for the sample distribution.
Our proposed test outperforms both non-oracle baselines in one region and outperforms MMD in the other region.
+
+We design an MRF model by specifying a single potential $h$ across the entire sequence space $\mathcal{X}$:
+
+$$
+h\left(x_{(1:\ell)}\right) := \begin{cases} C \cdot \ell + \theta \sum_{i = 1}^{\ell - 1} \delta_{x_{i}}\left(x_{i + 1}\right), & \text{if } \ell \leq M, \\ -\infty, & \text{else}. \end{cases}
+$$
+
+In our experiment below, we hold the length parameters $C$ and $M$ fixed, while perturbing the distribution by varying $\theta$. The parameter $\theta$ is a concentration parameter controlling the correlation between successive elements of a sequence; symbol repetitions become more common as $\theta$ grows.
+
+Our problem is given as follows. We specify $\theta = 1$ for the model distribution and vary $\theta \in [0.75, 1.25]$ for the data distribution; this setup tests sensitivity to the perturbation. We specify $M = 20$ in all cases, and vary $C$ with $\theta$ so that the mean length is fixed at 10 and the length distribution does not depend on $\theta$. The alphabet size is 3. We compare the proposed test against the MMD and LR baselines. Note that although it is generally challenging to sample from MRF models or to compute their normalized densities, the above special model allows us to perform both operations, and hence to run MMD and LR tests. For the LR baseline we use an oracle test (with access to the data distribution), as well as a test with $H_{1}$ consisting of first-order Markov chains; the latter baseline does not contain the above MRF family in the alternative. The MMD and KSD tests use the CSK kernel with subsequence length $t = 2$.
+
+The proposed test performs competitively with the non-oracle baselines. In Figure 6 we can see that the proposed test outperforms the MMD baseline across the full range of perturbations.
In particular, it outperforms the non-oracle LR baseline in one region; the LR baseline fails as $\theta$ grows, due to lack of consistency. We emphasize that our proposed test applies more generally than the LR test, since we do + +![](images/06ca6972802bc6963029a464916476ab12ece55c6a15242c10376aec9dcf1374.jpg) +Figure 7: Performance of the proposed KSD test on a simple MRF model. We perform the same experiment as shown in Figure 6, in this case holding fixed $\theta = 0.9$ and varying the number of samples. In this region, our proposed test outperforms the MMD baseline but not the LR baseline. + +not require normalized densities. Moreover, using the wild bootstrap procedure, we do not need to sample from the model, unlike the MMD test. + +We also show how test power improves with the data size in Figure 7. We choose the region where our test outperforms the MMD baseline, but not the LR baseline. The parameter $\theta$ is fixed to 0.9, while we vary the sample size $n$ . We can see that the MMD baseline has a low sample efficiency for this problem, confirming again the MMD's inability to use the model structure. Both the LR baseline and our proposed KSD test make use of model information. Only the KSD test can work in the absence of a normalized density, however. + +# 4.6. Inspecting Output-Constrained Generative Models + +In practical applications, we are often interested in the ability of a model to output a particular set of patterns or values. For example, in language modeling, one might be interested in the ability to generate a subset of a language or sentences of a particular form (e.g., questions). Mathematically, we can state this objective as follows: for a model $P$ , we aim to evaluate the goodness-of-fit of its conditional distribution $P_A$ , derived by restricting the model to a set $A \subset \mathcal{X}$ (see below for an example of $A$ ). 
The probability mass function (pmf) $p_A$ of the conditional $P_A$ is given by
+
+$$
+p_{A}(x) = \frac{p(x)}{\sum_{\tilde{x} \in A} p(\tilde{x})},
+$$
+
+where $p$ is the pmf of $P$. The normalization constant appearing in $p_A$ is typically unknown (even when $p$ is normalized), and thus, as in the MRF experiment, this task is challenging for both MMD and likelihood-based diagnostics.
+
+Following the above setup, we conduct an additional experiment. Specifically, we consider evaluating the ability of a language model to generate question sentences (i.e., $A$ is
+
+![](images/d71f6dfa32e10de94e0eb9b4a9b9f156b1aff17e88c6b9b67c8dfb2f171f76a8.jpg)
+Figure 8: Performance comparison in the language model experiment. The two KSD tests outperform the MMD test. In particular, the ZS variant beats the $\mathrm{ZS^{\prime}}$ one when the mixture proportion $\pi$ is small, showing a benefit of the richer neighborhood structure.
+
+the set of sequences ending with the symbol '?'). Our task is to detect the departure of one model configuration from its mixture with another (with the mixture parameter $\pi$ varied). We use the pretrained model gpt2 (Radford et al., 2019) to obtain two configurations; these arise from inputting two different prompts: How are and Where is. To sample from the language model, we generate a sequence of up to 5 additional tokens following the prompt, stopping when we encounter the token '?'. The sample is rejected if this token is not encountered within 5 steps following the prompt. For each prompt, this gives a different distribution over question sentences. The model distribution is specified as the output with the prompt How are, while the ground truth is a mixture of outputs obtained by randomly selecting between the two prompts. We draw $n = 50$ samples.
+
+We compare the KSD against the MMD test. For the MMD, we generate samples using rejection sampling.
This is tractable in this experiment because the prompts are designed to yield a low rejection probability (questions are likely to be generated). For the KSD, we use the ZS and $\mathrm{ZS^{\prime}}$ configurations as in Section 4.2 with $J = 1$. We choose sparse, point-dependent insertion and substitution neighborhoods; we use 16 high-probability tokens for a given context, and thus do not use the sparsification approach from Section 4.2. Both MMD and KSD tests use wild bootstrap procedures. Here, we do not include likelihood-based tests, as they are intractable in this setting. For further experimental details, see Appendix A.2. Figure 8 shows the result, showcasing the KSD's utility in a challenging task.
+
+# Acknowledgements
+
+We thank the three anonymous referees for their helpful feedback. HK and AG acknowledge financial support from the Gatsby Charitable Foundation.
+
+# References
+
+Anastasiou, A., Barp, A., Briol, F.-X., Ebner, B., Gaunt, R. E., Ghaderinezhad, F., Gorham, J., Gretton, A., Ley, C., Liu, Q., Mackey, L., Oates, C. J., Reinert, G., and Swan, Y. Stein's method meets computational statistics: A review of some recent developments. Statistical Science, 38(1), 2021. doi: 10.1214/22-sts863.
+Aronszajn, N. Theory of reproducing kernels. Transactions of the American Mathematical Society, 68(3):337-404, 1950.
+Barbour, A. D. Stein's method and Poisson process convergence. Journal of Applied Probability, 25:175-184, 1988.
+Barker, A. Monte Carlo calculations of the radial distribution functions for a proton-electron plasma. Australian Journal of Physics, 18(2):119, 1965. doi: 10.1071/ph650119.
+Barp, A., Briol, F.-X., Duncan, A., Girolami, M., and Mackey, L. Minimum Stein discrepancy estimators. In Advances in Neural Information Processing Systems, volume 32, 2019.
+Blei, D., Ng, A., and Jordan, M. Latent Dirichlet allocation. Journal of Machine Learning Research, 3:993-1022, 2003.
+Bresler, G.
and Nagaraj, D. Stein's method for stationary distributions of Markov chains and application to Ising models. Annals of Applied Probability, 29, 2019.
+Chwialkowski, K., Strathmann, H., and Gretton, A. A kernel test of goodness of fit. In International Conference on Machine Learning, pp. 2606-2615. PMLR, 2016.
+Cuturi, M. Fast global alignment kernels. In Proceedings of the 28th International Conference on Machine Learning, ICML'11, pp. 929-936, Madison, WI, USA, 2011. Omnipress.
+Cuturi, M., Vert, J.-P., Birkenes, O., and Matsui, T. A kernel for time series based on global alignments. In IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2007. doi: 10.1109/icassp.2007.366260.
+Dehling, H. and Mikosch, T. Random quadratic forms and the bootstrap for U-statistics. Journal of Multivariate Analysis, 51(2):392-413, 1994.
+Fernandez, T., Rivera, N., Xu, W., and Gretton, A. Kernelized Stein discrepancy tests of goodness-of-fit for time-to-event data. In Proceedings of the 37th International Conference on Machine Learning, pp. 3112-3122, 2020.
+
+Gorham, J. and Mackey, L. Measuring sample quality with Stein's method. In Advances in Neural Information Processing Systems, volume 28, 2015.
+Gorham, J. and Mackey, L. Measuring sample quality with kernels. In Proceedings of the 34th International Conference on Machine Learning, pp. 1292-1301, 2017.
+Gretton, A., Borgwardt, K. M., Rasch, M. J., Schölkopf, B., and Smola, A. A kernel two-sample test. The Journal of Machine Learning Research, 13(1):723-773, 2012.
+Haussler, D. Convolution kernels on discrete structures, 1999.
+Hodgkinson, L., Salomone, R., and Roosta, F. The reproducing Stein kernel approach for post-hoc corrected sampling. 2020.
+Hoeffding, W. A class of statistics with asymptotically normal distribution. Annals of Mathematical Statistics, 19(3):293-325, September 1948.
+Kallenberg, O. Foundations of Modern Probability. Springer International Publishing, 2021. doi: 10.1007/978-3-030-61871-1.
+Kanagawa, H., Jitkrittum, W., Mackey, L., Fukumizu, K., and Gretton, A. A kernel Stein test for comparing latent variable models. To appear in Journal of the Royal Statistical Society, Series B (Statistical Methodology), 2023.
+Kiraly, F. J. and Oberhauser, H. Kernels for sequentially ordered data. Journal of Machine Learning Research, 20, 2019.
+Leslie, C. and Kuang, R. Fast string kernels using inexact matching for protein sequences. Journal of Machine Learning Research, 5:1435-1455, December 2004.
+Liu, Q., Lee, J., and Jordan, M. A kernelized Stein discrepancy for goodness-of-fit tests. In International Conference on Machine Learning, pp. 276-284. PMLR, 2016.
+Lloyd, J. and Ghahramani, Z. Statistical model criticism using kernel two-sample tests. In Advances in Neural Information Processing Systems, pp. 829-837, 2015.
+Lodhi, H., Saunders, C., Shawe-Taylor, J., Cristianini, N., and Watkins, C. Text classification using string kernels. Journal of Machine Learning Research, 2:419-444, 2002.
+Oates, C. J., Girolami, M., and Chopin, N. Control functionals for Monte Carlo integration. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 79(3):695-718, 2016. doi: 10.1111/rssb.12185.
+
+Power, S. and Goldman, J. V. Accelerated sampling on discrete spaces with non-reversible Markov processes. 2019.
+Radford, A., Wu, J., Child, R., Luan, D., Amodei, D., and Sutskever, I. Language models are unsupervised multitask learners. 2019.
+Reinert, G. and Ross, N. Approximating stationary distributions of fast mixing Glauber dynamics, with applications to exponential random graphs. Annals of Applied Probability, 29, 2019.
+Serfling, R. J. Approximation Theorems of Mathematical Statistics. John Wiley & Sons, 2009.
+Shi, J., Zhou, Y., Hwang, J., Titsias, M. K., and Mackey, L. W. Gradient estimation with discrete Stein operators. In Advances in Neural Information Processing Systems, volume 35, 2022.
+Sriperumbudur, B. K., Fukumizu, K., and Lanckriet, G. R. G.
Universality, characteristic kernels and RKHS embedding of measures. Journal of Machine Learning Research, 12:2389-2410, 2011.
+Stein, C. M. A bound for the error in the normal approximation to the distribution of a sum of dependent random variables. 1972.
+Steinwart, I. and Christmann, A. Support Vector Machines. Information Science and Statistics. Springer, 2008.
+Wynne, G., Kasprzak, M., and Duncan, A. B. A spectral representation of kernel Stein discrepancy with application to goodness-of-fit tests for measures on infinite-dimensional Hilbert spaces. 2022. doi: 10.48550/arXiv.2206.04552.
+Xu, W. and Matsuda, T. A Stein goodness-of-fit test for directional distributions. In Proceedings of the Twenty-Third International Conference on Artificial Intelligence and Statistics, pp. 320-330, 2020.
+Yang, J., Liu, Q., Rao, V. A., and Neville, J. Goodness-of-fit testing for discrete distributions via Stein discrepancy. In ICML, 2018.
+Yang, J., Rao, V., and Neville, J. A Stein-Papangelou goodness-of-fit test for point processes. In Proceedings of the Twenty-Second International Conference on Artificial Intelligence and Statistics, pp. 226-235, 2019.
+Zanella, G. Informed proposals for local MCMC in discrete spaces. Journal of the American Statistical Association, 115:852-865, 2019.
+
+# A. Experimental details
+
+We provide further details for the experiments that we conducted.
+
+# A.1. Likelihood Ratio Test
+
+The LR baselines use the likelihood ratio statistic:
+
+$$
+T_{m}^{\mathrm{LR}}\left(\left\{Y^{(l)}\right\}_{l = 1}^{m}\right) := 2\left(\sup_{p \in H_{0} \cup H_{1}} \log p\left(\left\{Y^{(l)}\right\}_{l = 1}^{m}\right) - \sup_{p \in H_{0}} \log p\left(\left\{Y^{(l)}\right\}_{l = 1}^{m}\right)\right), \tag{8}
+$$
+
+where $H_0$ and $H_1$ are the null and alternative hypotheses, respectively. In our experiments, we use a simple null hypothesis for the likelihood ratio statistic.
The alternative used in the oracle baseline is a simple hypothesis containing only the (generally unknown) ground truth. The alternative used in the MC baseline is a composite hypothesis consisting of all Markov chains of some order, with the order depending on the experiment. In experiments where we use an alternative consisting of first-order Markov chains, the alternative consists of all initial distributions over the alphabet together with all transition kernels through the alphabet, with an additional stopping state:
+
+$$
+H_{1} := \operatorname{Sim}_{\mathcal{S}} \times \left(\operatorname{Sim}_{\mathcal{S} \cup \{\mathrm{stop}\}}\right)^{\mathcal{S}}, \tag{9}
+$$
+
+$$
+\operatorname{Sim}_{\mathcal{S}} := \left\{(\theta_{i})_{i \in \mathcal{S}} : \sum_{i \in \mathcal{S}} \theta_{i} = 1 \text{ and } \theta_{i} \geq 0 \text{ for all } i \in \mathcal{S}\right\}. \tag{10}
+$$
+
+In experiments where we use an alternative consisting of second-order Markov chains, the alternative consists of the analogous collection of second-order kernels:
+
+$$
+H_{1} := \operatorname{Sim}_{\mathcal{S}} \times \left(\operatorname{Sim}_{\mathcal{S} \cup \{\mathrm{stop}\}}\right)^{\mathcal{S}} \times \left(\operatorname{Sim}_{\mathcal{S} \cup \{\mathrm{stop}\}}\right)^{\mathcal{S} \times \mathcal{S}}. \tag{11}
+$$
+
+# A.2. Further Details Regarding the Language Model Experiment in Section 4.6
+
+In Section 4.6, we consider the problem of evaluating a generative model with an output constraint. Although the support of the distribution is restricted to the set of question sentences, our setting still covers this scenario because we may construct a question sentence from any sequence by adding the suffix '?' (i.e., the set of all sequences is in bijection with the set of all question sequences).
In reality, however, the assumption that the model has positive probability everywhere is unlikely to hold; e.g., some tokens never appear in questions (our experiment limits possible sequences with prompts). This assumption is placed to ensure that the formalism is well-defined; in practice, to follow the formalism, we may assume that the model assigns extremely small probabilities to such sequences, which do not meaningfully affect the model's output. That said, it is more useful to consider a neighborhood structure that spans the model's support rather than the entire sequence space. As described below, our neighborhood design excludes any points that have zero probability under the model.
+
+The model is configured using top-$k$ filtering with $k = 16$. The sampling procedure is as follows: when the model samples the next token, $x_{n+1}$, for an already-sampled prefix $s = (x_1, \ldots, x_n)$, only the most likely $k$ tokens from the alphabet are considered. That is, we sort the alphabet $S$ using the conditional model density $\operatorname{Pr}(x_{n+1} = \bullet \mid x_1, \ldots, x_n)$, and keep only the $k$ elements with the largest probability under this ordering. Then, we renormalize the probabilities of these $k$ tokens and sample the next token from the resulting distribution.
+
+As mentioned in the main body, we use point-dependent insertion and substitution sets. Specifically, for a given sequence $s = (x_{1},\ldots ,x_{n},{}^{\prime}?^{\prime})$, the substitution neighborhood $\mathcal{R}_{x_{(1:n)},n}$ consists of all sequences of the form $s^{\prime} = (x_{1},\dots,x_{n}^{\prime},{}^{\prime}?^{\prime})$, where $x_{n}^{\prime}$ is chosen from any of the top-$k$ next tokens after the prefix $s_{(1:n - 1)}$. Similarly, the insertion neighborhood $\mathcal{I}_{x_{(1:n)},n}$ consists of all sequences $s^{\prime} = (x_{1},\dots,x_{n},x_{n + 1}^{\prime},{}^{\prime}?^{\prime})$, where $x_{n + 1}^{\prime}$ is any of the top-$k$ next tokens after the prefix $s_{(1:n)}$.
The deletion neighborhood consists of the single sequence $s^{\prime} = (x_{1},\dots,x_{n - 1},{}^{\prime}?^{\prime})$. Since an observed point must have non-zero probability (otherwise we can immediately reject the null hypothesis), $x_{n}$ must itself be one of the top-$k$ next tokens after the prefix $s_{(1:n - 1)}$. Hence, substitutions are symmetric. Insertions are similarly symmetric.
+
+This prefix-dependent neighborhood generalizes the approach described in Section 4.2, where we used a subset of the alphabet as a symbol neighborhood $\mathcal{N}_{\mathrm{rep}}(s) = \mathcal{N}_{\mathrm{rep}}(x_n)\subsetneq \mathcal{S}$. Our generalization is that the substitution neighborhood depends not on the symbol being replaced, but on the point as a whole via the prefix. Similarly, the insertion neighborhood also depends on the point as a whole.
+
+The neighborhood graph used in the $\mathrm{ZS^{\prime}}$ variant is not strongly connected, rendering the resulting test inconsistent. The neighborhood graph used in the ZS variant is not guaranteed to be strongly connected. This is because the neighborhood graph that we use satisfies two conditions: the edges consist only of transitions with a bounded edit distance (all transitions have edit distance 1), and transitions to zero-probability sequences are excluded from the graph. Depending on the model structure, it may then be the case that the neighborhood graph is disconnected. For example, the sequences Where is my pigeon? and Where is the cat? may be in the support of the model. A path between them in the neighborhood graph used in the ZS variant would have to pass through sequences such as Where is my? or Where is the?. We would not expect these rather unnatural sentences to be in the support set of the model when top-$k$ filtering is used, and in that case the necessary transitions would be excluded from the neighborhood graph. This would cause the graph to be disconnected.
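To make the construction concrete, the top-$k$ filtering step and the prefix-dependent substitution neighborhood can be sketched as follows (a simplified sketch: `cond_probs` is a hypothetical stand-in for the model's next-token distribution, and sequences are tuples ending in '?'):

```python
import heapq

def top_k_tokens(probs, k=16):
    # Keep the k most probable next tokens and renormalize (top-k filtering).
    top = heapq.nlargest(k, probs.items(), key=lambda kv: kv[1])
    z = sum(p for _, p in top)
    return {tok: p / z for tok, p in top}

def substitution_neighborhood(seq, cond_probs, k=16):
    # Prefix-dependent substitution set for a sequence (x_1, ..., x_n, '?'):
    # replace x_n with any other top-k next token given the prefix.
    prefix, last = seq[:-2], seq[-2]
    candidates = top_k_tokens(cond_probs(prefix), k)
    return [prefix + (tok, '?') for tok in candidates if tok != last]
```

Because an observed $x_n$ is itself in the top-$k$ set, every sequence in this neighborhood has the original sequence in its own neighborhood, which is the symmetry property noted above.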
+
+We use the pretrained model from Hugging Face (https://huggingface.co/gpt2) to conduct the experiment.
+
+# A.3. Details of Testing Scenarios
+
+In several experiments, we referred to a collection of 12 synthetic testing scenarios to evaluate the test power. We provide the details of these scenarios. The scenarios are intended to be dissimilar to each other in form and difficulty, although all model distributions and ground truths are Markov chains of some order.
+
+Some of the distributions described here do not have support on the entire sequence space. In our earlier discussion, we required support everywhere as a technical condition relating to the connectivity of the Zanella process. To ensure that the probability of all sequences is positive, we introduce low-probability "restart" events into all of our Markov chains, which resample the next symbol uniformly from the alphabet rather than from the specified transition kernel. The probability of these events is small, $\varepsilon \approx 10^{-3}$. We do not mention this further in order to simplify the discussion.
+
+Binary i.i.d. sequences We draw sequences of i.i.d. Bernoulli variables. The model and ground truth differ in the parameter of the Bernoulli distribution.
+
+Alphabet: $\{0,1\}$
+
+Model distribution: Length follows a Poisson distribution with mean 20. Contents of the sequence are i.i.d. Bernoulli with $p = 0.6$.
+
+Ground truth: Length follows a Poisson distribution with mean 20. Contents of the sequence are i.i.d. Bernoulli with $p = 0.4$.
+
+Sample size: 10.
+
+Binary sequences: misspecified Markov order We simulate the case where we are mistaken about the Markov order of the data. The model consists of i.i.d. Bernoulli variables, while the ground truth follows a simple first-order Markov chain.
+
+Alphabet: $\{0,1\}$
+
+Model distribution: Length follows a geometric distribution with mean 20. Contents of the sequence are i.i.d. Bernoulli with $p = 0.6$.
+ +Ground truth: First-order Markov chain. Initial distribution uniform on $\{0,1\}$ . The transition kernel is + +$$ +\Pr (\operatorname {s t o p} | X _ {t} = x) = 1 / \lambda , \tag {12} +$$ + +$$ +\Pr \left(X _ {t + 1} = y \mid X _ {t} = x\right) = (1 - 1 / \lambda) \delta_ {x} (y), \tag {13} +$$ + +with $\lambda = 20$ . This gives sequences alternating between $\{0,1\}$ . + +Sample size: 30. + +Random walk I We take a random walk through a cyclic alphabet and perturb it by letting the ground truth hold, or "skip a step", with low probability. + +Alphabet: $S = \{1,\dots ,8\}$ with a cyclic structure. + +Model distribution: Random walk through $S$ according to the cyclic structure. Start uniformly on $S$ . After each step, stop with probability $1 / \lambda$ , with $\lambda = 8$ . + +Ground truth: Random walk with holding. Start uniformly on $S$ . At each step, hold or "skip a step", with probability $p = 0.2$ , letting $X_{t+1} = X_t$ . Otherwise, sample a step from $\{-1, +1\}$ uniformly. After each step, stop with probability $1 / \lambda$ , with $\lambda = 8$ . + +Sample size: 30. + +Random walk II The same scenario, but with few long sequences instead of many short sequences. We only let the ground truth hold in a subset of states, which reduces the degree of perturbation and makes the problem more challenging. + +Alphabet: $S = \{1,\dots ,30\}$ with a cyclic structure. + +Model distribution: Same as in Random walk I with $\lambda = 30$ + +Ground truth: Same as in Random walk I but with $\lambda = 30$ and sampling the step from $\{-1,0, + 1\}$ according to: + +$$ +\Pr \left(X _ {t + 1} - X _ {t} = \Delta \mid X _ {t} = x\right) = \left\{ \begin{array}{l l} p, & \text {i f} \Delta = 0, x \leq i _ {\max }, \\ \frac {1 - p}{2}, & \text {i f} \Delta \in \{- 1, + 1 \}, x \leq i _ {\max }, \\ \frac {1}{2}, & \text {i f} \Delta \in \{- 1, + 1 \}, x > i _ {\max }, \\ 0, & \text {o t h e r w i s e ,} \end{array} \right. 
\tag{14}
$$

with $i_{\max} = 8$ and $p = 0.2$.

Sample size: 8.

Random walk III We introduce memory into the random walk by correlating or anti-correlating successive steps, which gives us a simple second-order MC.

Alphabet: $S = \{1,\dots,10\}$ with a cyclic structure.

Model distribution: Same as in Random walk I, but we modify the transition kernel to correlate successive steps. Specifically, letting $\delta = 0.95$, we sample the step from $\{-1, +1\}$ according to:

$$
\Pr \left(X_{t+2} - X_{t+1} = d \mid X_{t+1}, X_t\right) = \left\{ \begin{array}{ll} \delta, & \text{if } d = X_{t+1} - X_t,\ |d| = 1, \\ 1 - \delta, & \text{if } d = X_t - X_{t+1},\ |d| = 1, \\ \frac{1}{2}, & \text{if } |d| = 1 < |X_t - X_{t+1}|, \\ 0, & \text{otherwise.} \end{array} \right. \tag{16}
$$

We have to treat the case where $|X_t - X_{t+1}| > 1$ due to the low-probability restart event mentioned earlier.

Ground truth: Same as the model distribution but with $\delta = 0.05$.

Sample size: 30.

Random walk IV Same as in Random walk III but with $\lambda = 30$ and sample size 8. That is, few long sequences instead of many short ones.

Random second-order MC I We define a distribution over second-order MCs and sample two random ones. The stopping probability is constant so that we can control the length independently of the perturbation of the contents. We sample many short sequences.

Alphabet: $\mathcal{S} = \{1,\dots,10\}$

Model distribution: We sample a second-order Markov chain by sampling the initial distribution and transition kernels from Dirichlet distributions with concentration parameter $\alpha = 1$. At each step, the chain stops with probability $1/\lambda$ where $\lambda = 8$.

Ground truth: Same as the model distribution, but the Markov chain is sampled from a different random number generator (RNG) seed in order to give a different MC.

Sample size: 30.
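As a concrete illustration of the constructions above, the following sketch samples one sequence from a random second-order Markov chain with Dirichlet-sampled kernels, a constant stopping probability $1/\lambda$, and the low-probability restart event described at the start of this appendix. This is our own hedged sketch under stated assumptions, not the paper's code; all names are ours.

```python
import numpy as np

def sample_random_second_order_mc(n_states=10, lam=8, eps=1e-3, seed=0):
    """Sample one sequence from a random second-order Markov chain.

    The initial distribution and transition kernels are drawn from
    Dirichlet(alpha = 1) priors; after each step the chain stops with
    probability 1/lam, and with small probability eps a 'restart' event
    resamples the next symbol uniformly from the alphabet.
    """
    rng = np.random.default_rng(seed)
    init = rng.dirichlet(np.ones(n_states))                       # first symbol
    pair = rng.dirichlet(np.ones(n_states), size=n_states)        # second | first
    kernel = rng.dirichlet(np.ones(n_states), size=(n_states, n_states))  # next | (prev, cur)

    seq = [rng.choice(n_states, p=init)]
    seq.append(rng.choice(n_states, p=pair[seq[0]]))
    while rng.random() >= 1.0 / lam:       # continue with probability 1 - 1/lam
        if rng.random() < eps:             # rare restart: uniform resample
            seq.append(rng.choice(n_states))
        else:
            seq.append(rng.choice(n_states, p=kernel[seq[-2], seq[-1]]))
    return seq
```

Drawing the model and ground truth with different seeds yields two distinct chains, as in the Random second-order MC scenarios.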
Random second-order MC II Same as in Random second-order MC I but with $\lambda = 20$ and sample size 8. That is, we sample a small number of long sequences.

Random second-order MC III Same as in Random second-order MC I but with $\lambda = 8$ and sample size 8. That is, we sample a small number of short sequences.

Random MC with varied initial distribution I We sample a single random first-order MC for the model and ground truth, and set a fixed initial distribution which we perturb. This poses a hard problem, as most of the observed data relates to transitions that come from an unperturbed transition kernel.

Alphabet: $\mathcal{S} = \{1,\dots,10\}$

Model distribution: We sample a first-order Markov chain by sampling the transition kernel from a Dirichlet distribution with concentration parameter $\alpha = 1$. The initial distribution is uniform over $\mathcal{S}$. At each step, the chain stops with probability $1/\lambda$ where $\lambda = 8$.

Ground truth: Same as the model distribution, with the same RNG seed in order to give the same transition kernel. The initial distribution is an equal mixture of $\mathrm{Unif}(\mathcal{S})$ and $\mathrm{Unif}(\{1,2\})$.

Sample size: 30.

Random MC with varied initial distribution II Same as in Random MC with varied initial distribution I but with $\lambda = 20$ and sample size 8. This problem is significantly harder, as we observe 8 rather than 30 instances of the initial distribution, and the longer sequences do not provide any additional evidence.

Random MC with varied length distribution We vary the distribution over lengths while keeping the content distribution the same.

Alphabet: $\mathcal{S} = \{1,\dots,10\}$

Model distribution: We sample a first-order Markov chain by sampling the transition kernel from a Dirichlet distribution with concentration parameter $\alpha = 1$. The initial distribution is uniform over $\mathcal{S}$. At each step, the chain stops with probability $1/\lambda$ where $\lambda = 8$.
+ +Ground truth: We use the same initial distribution and transition kernel as in the model. At each step, the chain stops with probability $1 / \lambda$ where $\lambda = 20$ . + +Sample size: 30. + +# A.4. Results of Individual Testing Scenarios + +We report the individual per-scenario rejection rates for an earlier experiment, for which we reported the average rate across scenarios in Figure 2. The per-scenario rates are shown in Table 1. + +Two of the scenarios are designed to highlight the failure of the inconsistent test variants. These are Binary sequences: True distribution is not i.i.d. sequence, which shows the failure of the inconsistent CSK kernel; and Random MC w/ varied length dist, which shows the failure of the inconsistent variant $\mathrm{ZS^{\prime}}$ . + +
Test variants: CSK ZS, CSK ZS′, CSK ZS″, CSK MMD; neighborhood sizes $J = 1, 3, 5, 7, 9$.

| Scenario | Rejection rates |
| --- | --- |
| Binary sequences: Few long i.i.d. sequences | 0.237 0.415 0.715 0.870 0.932 1.000 · 0.242 0.507 0.792 0.902 0.960 1.000 · 0.225 0.375 0.623 0.690 0.810 |
| Binary sequences: True distribution is not i.i.d. sequence | 0.075 0.022 0.050 0.030 0.030 0.040 · 0.033 0.007 0.013 0.018 0.013 0.005 · 0.065 0.048 0.050 0.080 0.090 |
| Random 2nd-order MC: Few long sequences | 0.040 0.158 0.282 0.438 0.500 0.907 · 0.075 0.172 0.335 0.420 0.517 0.917 · 0.045 0.180 0.253 0.352 0.472 |
| Random 2nd-order MC: Few short sequences | 0.065 0.117 0.225 0.280 0.357 0.477 · 0.068 0.155 0.207 0.315 0.360 0.443 · 0.033 0.135 0.255 0.297 0.375 |
| Random 2nd-order MC: Many short sequences | 0.177 0.420 0.720 0.835 0.907 0.973 · 0.175 0.490 0.760 0.887 0.910 0.968 · 0.080 0.422 0.710 0.877 0.935 |
| Random MC w/ varied initial dist: Few long sequences | 0.052 0.065 0.060 0.068 0.048 0.043 · 0.043 0.043 0.055 0.028 0.048 0.040 · 0.070 0.065 0.060 0.055 0.062 |
| Random MC w/ varied initial dist: Many short sequences | 0.075 0.052 0.102 0.048 0.060 0.050 · 0.055 0.080 0.090 0.068 0.150 0.105 · 0.068 0.062 0.065 0.098 0.075 |
| Random MC w/ varied length dist | 0.033 0.018 0.013 0.018 0.025 0.035 · 0.025 0.007 0.028 0.045 0.030 0.025 · 0.013 0.000 0.010 0.018 0.010 |
| Random walk with memory: Few long sequences | 0.220 0.323 0.487 0.570 0.580 0.978 · 0.395 0.465 0.517 0.580 0.655 0.958 · 0.085 0.060 0.133 0.223 0.263 |
| Random walk with memory: Many short sequences | 0.525 0.710 0.892 0.943 0.968 0.995 · 0.693 0.762 0.907 0.943 0.980 0.998 · 0.135 0.207 0.393 0.500 0.733 |
| Random walk: Few long sequences | 0.130 0.383 0.740 0.810 0.880 1.000 · 0.147 0.405 0.603 0.755 0.800 1.000 · 0.030 0.417 0.652 0.838 0.925 |
| Random walk: Many short sequences | 0.438 0.912 0.978 0.993 1.000 1.000 · 0.647 0.818 0.963 0.995 0.998 0.998 · 0.072 0.810 0.970 0.993 1.000 |

Test (continued); neighborhood sizes $J = 1, 3, 5, 7, 9$.

| Scenario | Rejection rates |
| --- | --- |
| Binary sequences: Few long i.i.d. sequences | 0.030 0.065 0.070 0.043 0.040 0.100 · 0.100 0.092 0.133 0.220 0.215 0.195 · 0.033 0.037 0.040 0.052 0.037 |
| Binary sequences: True distribution is not i.i.d. sequence | 0.055 0.117 0.170 0.215 0.180 0.152 · 0.133 0.347 0.492 0.603 0.745 0.815 · 0.060 0.120 0.138 0.177 0.220 |
| Random 2nd-order MC: Few long sequences | 0.058 0.225 0.225 0.263 0.207 0.237 · 0.062 0.085 0.117 0.120 0.128 0.113 · 0.035 0.140 0.220 0.190 0.237 |
| Random 2nd-order MC: Few short sequences | 0.077 0.177 0.190 0.280 0.205 0.210 · 0.077 0.142 0.090 0.150 0.135 0.142 · 0.075 0.160 0.228 0.237 0.223 |
| Random 2nd-order MC: Many short sequences | 0.115 0.188 0.233 0.233 0.315 0.287 · 0.205 0.245 0.273 0.285 0.280 0.247 · 0.052 0.217 0.205 0.265 0.273 |
| Random MC w/ varied initial dist: Few long sequences | 0.065 0.022 0.060 0.070 0.062 0.035 · 0.037 0.058 0.045 0.077 0.062 0.090 · 0.048 0.062 0.070 0.052 0.040 |
| Random MC w/ varied initial dist: Many short sequences | 0.140 0.085 0.030 0.070 0.062 0.065 · 0.163 0.210 0.172 0.163 0.217 0.230 · 0.040 0.060 0.068 0.068 0.060 |
| Random MC w/ varied length dist | 0.005 0.015 0.025 0.037 0.033 0.225 · 0.015 0.013 0.005 0.010 0.003 0.000 · 0.013 0.007 0.010 0.030 0.030 |
| Random walk with memory: Few long sequences | 0.080 0.060 0.065 0.035 0.068 0.085 · 0.068 0.113 0.077 0.090 0.045 0.060 · 0.052 0.037 0.070 0.052 0.075 |
| Random walk with memory: Many short sequences | 0.030 0.048 0.125 0.075 0.080 0.077 · 0.033 0.065 0.062 0.083 0.050 0.075 · 0.083 0.087 0.102 0.080 0.110 |
| Random walk: Few long sequences | 0.043 0.072 0.113 0.138 0.175 0.225 · 0.068 0.055 0.087 0.043 0.065 0.075 · 0.080 0.102 0.105 0.147 0.122 |
| Random walk: Many short sequences | 0.050 0.152 0.155 0.182 0.150 0.215 · 0.043 0.110 0.115 0.102 0.128 0.077 · 0.080 0.170 0.160 0.170 0.245 |
Table 1: Estimated rejection rate of three different families of Zanella-Stein test with varying neighborhood size and underlying kernel. We evaluate the tests on 12 different synthetic testing scenarios, evaluating each test multiple times on independently sampled synthetic datasets. To estimate the per-scenario rejection rate, we report an average across the test evaluations for each scenario. The data shown here is averaged across scenarios in Figure 2 above.

# B. Additional Experiments

# B.1. Balancing Function

We conduct experiments to compare the performance of the Barker balancing function $t \mapsto t / (1 + t)$ against minimum probability flow (MPF), $t \mapsto \sqrt{t}$. Table 2 shows the performance of a KSD on the 12 testing scenarios detailed in Appendix A.3, comparing the two balancing functions. Figure 9 shows the performance of a KSD on the MRF experiment, comparing the two balancing functions against various baselines. We observe that the MPF operator tends to yield higher power than the Barker operator. A drawback of the MPF operator is that the ratio in Equation (1) can be numerically unstable when $p(x)$ is extremely small. The Barker operator circumvents this issue, as the ratio becomes $p(y) / \{p(x) + p(y)\}$ (Shi et al. (2022) also mention this point).
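To make the numerical point concrete, here is a minimal sketch (ours, not the paper's code) of the two balancing functions evaluated from the log-ratio $\log t = \log p(y) - \log p(x)$, showing why the Barker form stays bounded while MPF can overflow:

```python
import math

def barker(log_t):
    """Barker balancing g(t) = t / (1 + t), evaluated from log t.

    Since t / (1 + t) is the sigmoid of log t, this equals
    p(y) / (p(x) + p(y)) and always lies in (0, 1)."""
    if log_t >= 0:
        return 1.0 / (1.0 + math.exp(-log_t))
    e = math.exp(log_t)          # avoid overflow for very negative log t
    return e / (1.0 + e)

def mpf(log_t):
    """Minimum probability flow balancing g(t) = sqrt(t) = exp(log t / 2).

    This overflows when log t is very large, i.e. when p(x) is tiny."""
    return math.exp(0.5 * log_t)
```

For example, when $p(x)$ is astronomically small (say $\log t = 2000$), `mpf` overflows whereas `barker` returns a value indistinguishable from 1.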
| Scenario | Barker | MPF |
| --- | --- | --- |
| Binary i.i.d. sequences | 0.88 | 0.80 |
| Binary sequences: misspecified Markov order | 0.01 | 0.00 |
| Random walk I | 0.98 | 1.00 |
| Random walk II | 0.69 | 0.82 |
| Random walk III | 0.93 | 0.96 |
| Random walk IV | 0.54 | 0.66 |
| Random second-order MC I | 0.84 | 0.90 |
| Random second-order MC II | 0.38 | 0.53 |
| Random second-order MC III | 0.28 | 0.45 |
| Random MC with varied initial distribution I | 0.07 | 0.05 |
| Random MC with varied initial distribution II | 0.05 | 0.09 |
| Random MC with varied length distribution | 0.05 | 0.02 |
Table 2: Estimated test power of the Zanella-Stein kernel goodness-of-fit test, constructed from the CSK kernel using two different balancing functions. We choose either the Barker or the minimum probability flow (MPF) function.

![](images/bf7879ae33d4156ca2c1d0f9e49e3c7af186a17be255dcbfb3d969381688568c.jpg)
Figure 9: Performance comparison of two choices of balancing function in the construction of the KSD: Barker against minimum probability flow.

# B.2. Larger Neighborhoods

In Section 3, we considered a neighborhood structure in which we modified one element of the sequence. Here, we conduct an experiment to examine the effect of using a larger edit distance. We use the MRF experiment in Section 4.5 as a benchmark and compare the original KSD used in Section 4.5 against a KSD that allows for a larger edit distance. The neighborhood of the latter KSD is extended by adding double-substitutions, that is, substitutions in two locations. Figure 10 shows the result, with the oracle and MMD baselines included as a reference. Test sensitivity improves by at most a negligible amount in one region, while worsening by a similar amount in the other. For this problem, expanding the neighborhood in this way does not improve test sensitivity despite the higher computational cost.

![](images/37bd904589caa32a8de94da403419b8474b896faa720b804e63b885047cde803.jpg)
Figure 10: Performance comparison of the single-edit neighborhood test versus a variant which allows for double edits.
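For reference, the two neighborhood constructions compared in Figure 10 can be sketched as follows (our own illustration for sequences over a finite alphabet; function names are ours):

```python
from itertools import combinations, product

def single_substitutions(seq, alphabet):
    """All sequences at substitution distance 1 from seq (a tuple)."""
    out = []
    for i, s in enumerate(seq):
        for a in alphabet:
            if a != s:
                out.append(seq[:i] + (a,) + seq[i + 1:])
    return out

def double_substitutions(seq, alphabet):
    """Additional neighbors obtained by substituting in two distinct positions."""
    out = []
    for i, j in combinations(range(len(seq)), 2):
        for a, b in product(alphabet, alphabet):
            if a != seq[i] and b != seq[j]:
                t = list(seq)
                t[i], t[j] = a, b
                out.append(tuple(t))
    return out
```

The single-substitution neighborhood of a length-$n$ sequence over an alphabet of size $m$ has $n(m-1)$ elements, while the double-substitution extension adds $\binom{n}{2}(m-1)^2$ more, which is the source of the higher computational cost noted above.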
\ No newline at end of file diff --git a/akernelsteintestofgoodnessoffitforsequentialmodels/images.zip b/akernelsteintestofgoodnessoffitforsequentialmodels/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..b364454b07cb75fb376ee7f827b67800084a6b5b --- /dev/null +++ b/akernelsteintestofgoodnessoffitforsequentialmodels/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:be76dc03ffec5ccdb6955a9a78b5127a9b49d8df42f6ce6ccbcc552f7c607031 +size 720101 diff --git a/akernelsteintestofgoodnessoffitforsequentialmodels/layout.json b/akernelsteintestofgoodnessoffitforsequentialmodels/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..2977e7cbd2d5d86043aff955c86eb951d0f44af2 --- /dev/null +++ b/akernelsteintestofgoodnessoffitforsequentialmodels/layout.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:6a8b91e1bc7ee8dedd12518fd0d48e47cd659d1d987f424f1fa92e001c46dc1b +size 781398 diff --git a/alargescalestudyofprobabilisticcalibrationinneuralnetworkregression/f8706187-b68b-4f40-89c5-82612c43ad90_content_list.json b/alargescalestudyofprobabilisticcalibrationinneuralnetworkregression/f8706187-b68b-4f40-89c5-82612c43ad90_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..3b9174b12b5cd38a892cbadcc2f9793a4b4c94c3 --- /dev/null +++ b/alargescalestudyofprobabilisticcalibrationinneuralnetworkregression/f8706187-b68b-4f40-89c5-82612c43ad90_content_list.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:d7adc52dc050db33fdde196f0dda1eca45968a6279759c6ada68b5c0dd364da2 +size 138625 diff --git a/alargescalestudyofprobabilisticcalibrationinneuralnetworkregression/f8706187-b68b-4f40-89c5-82612c43ad90_model.json b/alargescalestudyofprobabilisticcalibrationinneuralnetworkregression/f8706187-b68b-4f40-89c5-82612c43ad90_model.json new file mode 100644 index 
0000000000000000000000000000000000000000..ae9055117d86d4465f365167a6c2445dec8d76b1 --- /dev/null +++ b/alargescalestudyofprobabilisticcalibrationinneuralnetworkregression/f8706187-b68b-4f40-89c5-82612c43ad90_model.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:fd74d814dcd5a9431ea196e053d9c7dec1e04642314db85d9e7431b751d286cf +size 167173 diff --git a/alargescalestudyofprobabilisticcalibrationinneuralnetworkregression/f8706187-b68b-4f40-89c5-82612c43ad90_origin.pdf b/alargescalestudyofprobabilisticcalibrationinneuralnetworkregression/f8706187-b68b-4f40-89c5-82612c43ad90_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..7fafb694b044215cc8116041f12ef6073ed7f555 --- /dev/null +++ b/alargescalestudyofprobabilisticcalibrationinneuralnetworkregression/f8706187-b68b-4f40-89c5-82612c43ad90_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:199abb46210c2270b65d6b0c24d8e3822a4154dd8cf9dc5729b1cb8db1339af7 +size 1822121 diff --git a/alargescalestudyofprobabilisticcalibrationinneuralnetworkregression/full.md b/alargescalestudyofprobabilisticcalibrationinneuralnetworkregression/full.md new file mode 100644 index 0000000000000000000000000000000000000000..150682938800e8ae0dde15aff238fcaee4c495d9 --- /dev/null +++ b/alargescalestudyofprobabilisticcalibrationinneuralnetworkregression/full.md @@ -0,0 +1,658 @@ +# A Large-Scale Study of Probabilistic Calibration in Neural Network Regression + +Victor Dheur1 Souhaib Ben Taieb1 + +# Abstract + +Accurate probabilistic predictions are essential for optimal decision making. While neural network miscalibration has been studied primarily in classification, we investigate this in the less-explored domain of regression. We conduct the largest empirical study to date to assess the probabilistic calibration of neural networks. We also analyze the performance of recalibration, conformal, and regularization methods to enhance probabilistic calibration. 
Additionally, we introduce novel differentiable recalibration and regularization methods, uncovering new insights into their effectiveness. Our findings reveal that regularization methods offer a favorable tradeoff between calibration and sharpness. Post-hoc methods exhibit superior probabilistic calibration, which we attribute to the finite-sample coverage guarantee of conformal prediction. Furthermore, we demonstrate that quantile recalibration can be considered as a specific case of conformal prediction. Our study is fully reproducible and implemented in a common code base for fair comparisons.

$^{1}$ Department of Computer Science, University of Mons, Mons, Belgium. Correspondence to: Victor Dheur.

Proceedings of the $40^{th}$ International Conference on Machine Learning, Honolulu, Hawaii, USA. PMLR 202, 2023. Copyright 2023 by the author(s).

# 1. Introduction

Neural network predictions affect critical decisions in many applications, including medical diagnostics and autonomous driving (Gulshan et al., 2016; Guizilini et al., 2020). However, effective decision making often requires accurate probabilistic predictions (Gawlikowski et al., 2021; Abdar et al., 2021). For example, consider a probabilistic regression model that produces $90\%$ prediction intervals. An important property would be that $90\%$ of these prediction intervals contain the realizations.

For models that output a predictive distribution, probabilistic calibration is an important property that states that all quantiles must be calibrated, i.e., the frequency of realizations below these quantiles must match the corresponding quantile level. Additionally, predictive distributions should be sufficiently sharp (i.e., concentrated around the realizations) and leverage the information in the inputs.

In the classification setting, Guo et al.
(2017) found that common neural architectures trained on image and text data were miscalibrated, sparking increased interest in neural network calibration. In a follow-up study, Minderer et al. (2021) showed that more recent neural architectures demonstrate improved calibration. However, there has been less research on calibration for neural probabilistic regression models compared to classification. Therefore, it remains uncertain whether the same results apply to the regression setting. This paper addresses this gap by conducting a comprehensive study on probabilistic calibration for regression using tabular data. We explore various calibration methods, including quantile recalibration (Kuleshov, Fenner, et al., 2018) and conformalized quantile regression (Romano, Patterson, et al., 2019). We also consider regularization methods, which have been shown to perform well in the classification setting (Karandikar et al., 2021; Popordanoska et al., 2022; Yoon et al., 2023).

We make the following main contributions:

1. We conduct the largest empirical study to date on probabilistic calibration of neural regression models using 57 tabular datasets (Sections 4 and 6). We consider multiple state-of-the-art calibration methods (Section 5), including post-hoc recalibration, conformal prediction, and regularization methods, with various scoring rules and predictive models.
2. Building on quantile recalibration, we propose a new differentiable calibration map using kernel density estimation, which provides improved negative log-likelihood compared to baselines. We also introduce two new regularization objectives based on the probabilistic calibration error (Section 5).
3. We show that quantile recalibration is a special case of conformal prediction, providing an explanation for their superior performance in terms of probabilistic calibration (Section 6).

# 2. Background

We consider a univariate regression problem where the target variable $Y \in \mathcal{Y}$ depends on an input variable $X \in \mathcal{X}$, with $\mathcal{Y} = \mathbb{R}$ representing the target space and $\mathcal{X}$ representing the input space. Our objective is to approximate the conditional distribution $P_{Y|X}$ using training data $\mathcal{D} = \{(X_i, Y_i)\}_{i=1}^N$ where $(X_i, Y_i) \stackrel{\text{i.i.d.}}{\sim} P \equiv P_X \times P_{Y|X}$.

A probabilistic predictor $F_{\theta}:\mathcal{X}\to \mathcal{F}$ is a function parametrized by $\theta \in \Theta$ that maps an input $x\in \mathcal{X}$ to a predictive cumulative distribution function (CDF) $F_{\theta}(\cdot \mid x)$ in the space $\mathcal{F}$ of distributions over $\mathbb{R}$. Additionally, given $x\in \mathcal{X}$, we denote the predictive quantile function (QF) by $Q_{\theta}(\cdot \mid x)$, and the probability density function (PDF) by $f_{\theta}(\cdot \mid x)$. Similarly, the marginal CDF, QF, or PDF of a random variable $R$ is denoted by $F_{R}$, $Q_{R}$, or $f_{R}$, respectively.

Probabilistic calibration. Given an input $x \in \mathcal{X}$, the model $F_{\theta}$ is ideal if it precisely matches the conditional distribution $P_{Y|X}$. However, learning the ideal model based on finite data is not possible without additional (strong) assumptions (Foygel Barber et al., 2021). To avoid additional assumptions, we can instead enforce certain desirable properties that are attainable in practice and that a good or ideal forecaster should exhibit. One such property is probabilistic calibration.

Let $Z = F_{\theta}(Y \mid X) \in [0,1]$ denote the probability integral transform (PIT) of $Y$ conditional on $X$. The model $F_{\theta}$ is probabilistically calibrated (also known as PIT-calibrated) if $\forall \alpha \in [0,1]$,

$$
F_Z(\alpha) \doteq \Pr (Z \leq \alpha) = \alpha . \tag{1}
$$

Let $U \in [0,1]$ be a uniform random variable independent of $Z$.
The left and right hand sides of (1) can be interpreted as the CDF of $Z$ and $U$, respectively, as a function of $\alpha$. This shows that the uniformity of the PIT is equivalent to probabilistic calibration (Dawid, 1984).

Since the ideal forecaster is probabilistically calibrated, we can require this property from any competent forecaster. However, probabilistic calibration, though necessary, is not sufficient for making accurate probabilistic predictions. Additionally, as discussed by Gneiting and Resin (2021), probabilistic calibration primarily addresses unconditional aspects of predictive performance and is implied by more robust conditional notions of calibration, such as auto-calibration.

Probabilistic calibration error. The most common approach for evaluating probabilistic calibration is to consider distances of the form $\int_0^1 |F_Z(\alpha) - F_U(\alpha)|^p \, d\alpha$ where $p > 0$.

The particular cases of $p = 1$ and $p = 2$ are known as the 1-Wasserstein distance and Cramér-von Mises distance, respectively. We denote the empirical CDF of the PIT as $\hat{F}_Z(\alpha) = \frac{1}{N}\sum_{i=1}^{N}\mathbb{1}(Z_i\leq \alpha)$, where $Z_i = F_\theta(Y_i\mid X_i)$ are PIT realizations. A common approach to assess probabilistic calibration using Monte Carlo estimation is to evaluate it at equidistant values $\alpha_1 < \dots < \alpha_M$ as follows:

$$
\mathrm{PCE}_p \left(F_{\theta}, \mathcal{D}\right) = \frac{1}{M} \sum_{j=1}^{M} \left| \alpha_j - \hat{F}_Z \left(\alpha_j\right) \right|^p . \tag{2}
$$

This metric has been previously employed in literature such as Zhao et al. (2020) and Zhou et al. (2021) with $p = 1$, and Kuleshov, Fenner, et al. (2018) and Utpala and Rai (2020) with $p = 2$. It is important to note that, unlike the classical definition of the $p$-norm, we do not raise the sum in (2) to the power $1/p$, to maintain consistency with prior literature.
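As a sketch (our own hedged illustration rather than the authors' implementation), the Monte Carlo estimate in (2) can be computed from PIT realizations as follows:

```python
import numpy as np

def pce(pit_values, M=100, p=1):
    """Estimate PCE_p from PIT realizations Z_i = F_theta(Y_i | X_i).

    Compares the empirical CDF of the PIT to the uniform CDF at M
    equidistant quantile levels, as in Equation (2)."""
    z = np.asarray(pit_values)
    alphas = np.linspace(0.0, 1.0, M + 2)[1:-1]            # alpha_1 < ... < alpha_M
    ecdf = (z[None, :] <= alphas[:, None]).mean(axis=1)    # empirical CDF of the PIT
    return float(np.mean(np.abs(alphas - ecdf) ** p))
```

A probabilistically calibrated model yields approximately uniform PITs and hence a PCE near zero, while a degenerate PIT distribution pushes the estimate toward 0.5.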
In the subsequent sections, we focus our analysis on $\mathrm{PCE}_1$ and use the abbreviation PCE for brevity. + +One limitation of scalar metrics like PCE is their inability to provide detailed information regarding calibration errors at individual quantile levels, $\alpha_{1},\ldots ,\alpha_{M}$ . Instead, PIT reliability diagrams offer a visual assessment of probabilistic calibration across all quantile levels by plotting the empirical CDF of the PIT $Z$ . These diagrams display the right side of (1) against its left side, with a perfectly calibrated model represented by a diagonal line (in the asymptotic case). Figure 2 provides examples of such reliability diagrams, which have been employed in studies by Pinson and Hagedorn (2012) and Kuleshov, Fenner, et al. (2018). + +# 3. Related Work + +Post-hoc calibration approaches involve adjusting the predictions of a trained model using a mapping learned from a separate calibration dataset. In the context of classification, temperature scaling (Guo et al., 2017) is a simple and effective method that adjusts predictive confidence while maintaining accuracy. For regression tasks, quantile recalibration (Kuleshov, Fenner, et al., 2018) aims to achieve probabilistic calibration. Conformal prediction (Vovk et al., 2020) is a general approach that provides prediction sets with a finite-sample coverage guarantee. Notable methods applied with deep learning include Conformal Quantile Regression (Romano, Patterson, et al., 2019) and Distributional Conformal Prediction (Izbicki et al., 2020; Chernozhukov et al., 2021). Furthermore, post-hoc approaches have also been proposed for conditional notions of calibration (Song et al., 2019; Kuleshov and Deshpande, 2022). + +Regularization approaches aim to improve calibration during training by incorporating regularization techniques. Some methods, proposed by Zhao et al. (2020) and Feldman et al. 
(2021), utilize regularization to target different conditional notions of calibration based on the inputs. Zhou et al. (2021) introduced an alternative loss function involving the simultaneous training of two neural networks, while Pearce et al. (2018), Chung et al. (2021), and Thiagarajan et al. (2020) proposed objectives that allow control over the tradeoff between coverage and sharpness of prediction intervals. To our knowledge, the only regularization objective specifically targeting probabilistic calibration is quantile regularization (Utpala and Rai, 2020). Other types of uncertainty quantification methods include ensembling (Lakshminarayanan, Pritzel, et al., 2017) and Bayesian methods (Jospin et al., 2022).

![](images/14debb840ef61ab784ae7f38974d770fe3884ab835b063944d3e10f303daa176.jpg)
Figure 1: Multiple regression benchmark datasets with references. Datasets inside parentheses have not been considered in this study. Full dataset names are available in Table 1.

# 4. Are Neural Regression Models Probabilistically Calibrated?

We conduct an extensive empirical study to evaluate the probabilistic calibration of neural regression models. To this end, we calculate the probabilistic calibration error defined in (2) for various state-of-the-art models across multiple benchmark datasets.

Benchmark datasets. We analyze a total of 57 datasets, including 27 from the OpenML curated benchmark (Grinsztajn et al., 2022), 18 from the AutoML Repository (Gijsbers et al., 2019), and 12 from the UCI Machine Learning Repository (Dua and Graff, 2017). These datasets are widely used in the evaluation of deep probabilistic models and uncertainty quantification, as evidenced by previous studies such as Fakoor et al. (2021), Chung et al. (2021), Zhou et al. (2021), Utpala and Rai (2020), and Gal and Ghahramani (2016). Figure 1 provides an overview of the utilization of these datasets in previous studies.
To the best of our knowledge, our study represents the most comprehensive assessment of probabilistic calibration for neural regression models published to date.

Neural probabilistic regression models. We consider three state-of-the-art neural probabilistic regression models. The first model predicts a parametric distribution, where the parameters are obtained as outputs of a hypernetwork. Previous studies have often focused on the Gaussian distribution (Lakshminarayanan, Pritzel, et al., 2017; Utpala and Rai, 2020; Zhao et al., 2020). To introduce more flexibility, we consider a mixture of $K$ Gaussian distributions. Given an input $x \in \mathcal{X}$, the hypernetwork parametrizes the means $\mu_k(x)$, standard deviations $\sigma_k(x)$, and weights $w_k(x)$ for each component $k = 1, \dots, K$. To ensure positive standard deviations and that the mixture weights form a discrete probability distribution, we use the Softplus and Softmax activations, respectively. We have two variants of this model depending on the scoring rule used for training: the negative log-likelihood (NLL) or the continuous ranked probability score (CRPS). These models are denoted as MIX-NLL and MIX-CRPS, respectively. It is worth noting that the CRPS of a mixture of Gaussians has a closed-form expression (Grimit et al., 2006).

The second model predicts quantiles of the distribution (Tagasovska and Lopez-Paz, 2019; Chung et al., 2021; Feldman et al., 2021). Specifically, given an input $x \in \mathcal{X}$ and a quantile level $\alpha \in [0,1]$, the model outputs a quantile $Q_{\theta}(\alpha \mid x)$. The full quantile function can be obtained by evaluating the model at multiple quantile levels. The model is trained by minimizing the quantile score at multiple levels, which is asymptotically equivalent to minimizing the CRPS (Bracher et al., 2021). We denote this model as SQR-CRPS, where SQR stands for simultaneous quantile regression (Tagasovska and Lopez-Paz, 2019).
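The parametrization of the mixture model just described can be sketched as follows. This is a minimal numpy illustration of the Softplus/Softmax constraints and the NLL objective; in the paper the raw values are hypernetwork outputs, and all names here are ours:

```python
import numpy as np

def softplus(x):
    return np.logaddexp(0.0, x)               # guarantees positive sigma_k

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)  # mixture weights sum to one

def mixture_nll(raw_mu, raw_sigma, raw_w, y):
    """Average NLL of targets y under a K-component Gaussian mixture.

    raw_mu, raw_sigma, raw_w have shape (N, K); y has shape (N,).
    The constraints on sigma_k and w_k are enforced by Softplus and Softmax."""
    mu, sigma, w = raw_mu, softplus(raw_sigma), softmax(raw_w)
    log_comp = (-0.5 * ((y[:, None] - mu) / sigma) ** 2
                - np.log(sigma) - 0.5 * np.log(2.0 * np.pi))
    return -np.mean(np.log((w * np.exp(log_comp)).sum(axis=1)))
```

With a single component ($K = 1$), this reduces to the usual Gaussian NLL; minimizing it over the raw parameters corresponds to training the MIX-NLL variant.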
Experimental setup. We adopt the large-sized regime introduced by Grinsztajn et al. (2022), which involves truncating the datasets to a maximum of 50,000 examples. Among the 57 datasets, the number of examples ranges from 135 to 50,000, and the number of features ranges from 3 to $3,611^{1}$. Each of the 57 datasets is divided into four sets: training $(65\%)$, validation $(10\%)$, calibration $(15\%)$, and test $(10\%)$. We normalize the input $X$ and target $Y$ using the mean and standard deviation from the training split. The final predictions are then transformed back to the original scale. For our neural network models, we employ the same fully-connected architecture as previous studies conducted by Kuleshov, Fenner, et al. (2018), Chung et al. (2021), and Fakoor et al. (2021). Further details regarding the model hyperparameters can be found in Appendix C.

![](images/2b92817b31bf56c3898e81e8ceb7e2472527ddc4c0f5300a0aad95f1d4d2b56b.jpg)
Figure 2: The top row shows the PCE for different datasets with one standard error (error bar). The bottom row gives examples of PIT reliability diagrams for five datasets.

Results. In Figure 2, the first row displays the PCE (averaged over five random train-validation-test splits) for MIX-NLL in blue on each of the 57 datasets. For comparison, the PCE of a perfectly calibrated model, i.e. with uniformly distributed PITs, computed using $5 \times 10^{4}$ simulated values, is shown in orange. The second row presents reliability diagrams for five datasets, with $90\%$ consistency bands as in Gneiting, Wolffram, et al. (2023). Similar information is provided for MIX-CRPS and SQR-CRPS in Figures 12 and 13 in Appendix B.4, respectively. Additionally, reliability diagrams for all datasets can be found in Figure 15 in Appendix B.6.

The analysis reveals that the (average) PCE is generally high across many datasets, although there are significant variations between datasets.
To test the statistical significance of these results, $10^{4}$ samples were generated from the sampling distribution of the average PCE under the null hypothesis of probabilistic calibration. The resulting sampling distribution for all datasets is presented in Appendix B.5. + +By computing the p-value associated with a one-sided test in the upper tail of the distribution (as illustrated in Appendix B.5), it was observed that most datasets have a p-value of zero. This indicates that the average PCE obtained + +for the considered model is higher than all the simulated average PCEs of the probabilistically calibrated model. Applying a threshold of 0.01 and a Holm correction for the 57 hypothesis tests, the null hypothesis is rejected for 11 datasets out of the 57. + +Overall, the results indicate that the neural models considered in this study are generally probabilistically miscalibrated on a significant number of benchmark tabular datasets. In Section 6, we will further explore how calibration methods can substantially improve the PCE of neural models. + +# 5. Calibration Methods + +We begin by discussing the three main approaches to calibration: quantile recalibration, conformal prediction, and regularization-based calibration. Following that, we introduce two novel variants of regularization-based calibration. + +Quantile recalibration and conformal prediction are post-hoc methods, meaning they are applied after model training. These approaches utilize a separate calibration dataset $\mathcal{D}' = \{(X_i',Y_i')\}_{i=1}^{N'}$ , where $(X_i',Y_i') \stackrel{\text{i.i.d.}}{\sim} P_{X,Y}$ . On the other hand, regularization-based calibration operates directly during training and relies solely on the training data $\mathcal{D}$ . + +# 5.1. 
Quantile Recalibration

Quantile recalibration aims to transform a potentially miscalibrated CDF $F_{\theta}$ into a probabilistically calibrated CDF $F_{\theta}' = F_Z \circ F_{\theta}$, using the calibration map $F_{Z}$, which represents the CDF of the PITs for $F_{\theta}$. For a given quantile level $\alpha \in [0,1]$, the recalibrated CDF $F_{\theta}^{\prime}$ satisfies:

$$
\begin{array}{rl} \Pr\left(F_{\theta}^{\prime}(Y \mid X) \leq \alpha\right) &= \Pr\left(F_{\theta}(Y \mid X) \leq Q_{Z}(\alpha)\right) \qquad (3) \\ &= F_{Z}\left(Q_{Z}(\alpha)\right) \qquad (4) \\ &= \alpha. \qquad (5) \end{array}
$$

In practice, $F_{Z}$ is not directly available and needs to be estimated from data. Kuleshov, Fenner, et al. (2018) proposed estimating it using isotonic regression, while Utpala and Rai (2020) showed that computing the empirical CDF is an equivalent and simpler method. Specifically, given a set of PIT values $Z_{i}^{\prime} = F_{\theta}(Y_{i}^{\prime} \mid X_{i}^{\prime})$, $i = 1, \dots, N^{\prime}$, the calibration map $\phi^{\mathrm{EMP}}$ is computed as:

$$
\phi^{\mathrm{EMP}}\left(\alpha; \left\{Z_{i}^{\prime}\right\}_{i=1}^{N^{\prime}}\right) = \frac{1}{N^{\prime}} \sum_{i=1}^{N^{\prime}} \mathbb{1}\left(Z_{i}^{\prime} \leq \alpha\right), \tag{6}
$$

where $\alpha \in [0,1]$.

Similarly to Utpala and Rai (2020), we also consider a linear calibration map $\phi^{\mathrm{LIN}}$, which is continuous and corresponds to a linear interpolation between the points $\left\{(0,0), (Z_{(1)}', 1/(N'+1)), \ldots, (Z_{(N')}', N'/(N'+1)), (1,1)\right\}$, where $Z_{(k)}^{\prime}$ is the $k$th order statistic of $Z_1^{\prime}, \ldots, Z_{N'}^{\prime}$.

In addition, we propose a calibration map based on kernel density estimation (KDE), denoted as $\phi^{\mathrm{KDE}}$. This calibration map offers the advantage of being differentiable and can lead to improved NLL performance. 
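The empirical map (6) and the linear interpolation just described can be sketched in NumPy as follows (the function names are ours; `np.interp` performs the piecewise-linear interpolation through the listed points):

```python
import numpy as np

def phi_emp(alpha, pit_cal):
    """Empirical calibration map (6): fraction of calibration PITs <= alpha."""
    return float(np.mean(np.asarray(pit_cal) <= alpha))

def phi_lin(alpha, pit_cal):
    """Linear calibration map: interpolates through (0, 0),
    (Z'_(k), k/(N'+1)) for k = 1..N', and (1, 1)."""
    z = np.sort(np.asarray(pit_cal))
    n = len(z)
    xs = np.concatenate(([0.0], z, [1.0]))
    ys = np.concatenate(([0.0], np.arange(1, n + 1) / (n + 1), [1.0]))
    return float(np.interp(alpha, xs, ys))

pits = [0.1, 0.2, 0.9, 0.95]  # hypothetical PITs from a calibration set
print(phi_emp(0.5, pits))  # -> 0.5
```

Unlike $\phi^{\mathrm{EMP}}$, the linear map is continuous in $\alpha$; neither, however, is differentiable with respect to the PIT values themselves, which motivates the KDE map described next.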
The key idea is to use a relaxed approximation of the indicator function, which allows us to make the PIT CDF (6) differentiable. Specifically, we compute

$$
\mathbb{1}_{\tau}(a \leq b) = \sigma(\tau(b - a)) \approx \mathbb{1}(a \leq b),
$$

where $\tau > 0$ is a hyperparameter and $\sigma(x) = \frac{1}{1 + e^{-x}}$ denotes the sigmoid function. The resulting smoothed empirical CDF is given by

$$
\phi^{\mathrm{KDE}}\left(\alpha; \left\{Z_{i}^{\prime}\right\}_{i=1}^{N^{\prime}}\right) = \frac{1}{N^{\prime}} \sum_{i=1}^{N^{\prime}} \mathbb{1}_{\tau}\left(Z_{i}^{\prime} \leq \alpha\right). \tag{7}
$$

This corresponds to estimating the CDF $F_{Z}$ using KDE based on the $N'$ realizations $\{Z_i'\}_{i=1}^{N'}$ of $Z$. Since $\sigma$ is the CDF of the logistic distribution, we use the PDF of the logistic distribution as the kernel in the KDE. Algorithm 1 summarizes this method.

# Algorithm 1 Quantile recalibration

Input: Predictive CDF $F_{\theta}$ and $\mathcal{D}' = \{(X_i',Y_i')\}_{i = 1}^{N'}$

Compute $Z_{i}^{\prime} = F_{\theta}(Y_{i}^{\prime}\mid X_{i}^{\prime})$ $(i = 1,\dots ,N^{\prime})$

Compute the calibration map $\phi$, either $\phi^{\mathrm{EMP}}$, $\phi^{\mathrm{LIN}}$, or $\phi^{\mathrm{KDE}}$

Return: Recalibrated CDF $F_{\theta}^{\prime} = \phi \circ F_{\theta}$.

# 5.2. Conformal Prediction

Let us assume the realizations of our calibration dataset $\mathcal{D}'$ are drawn exchangeably from $P_{X,Y}$. Given a predictive model $M_{\theta}$ and a coverage level $\alpha \in [0,1]$, (inductive) conformal prediction allows us to construct a prediction set $C_{\alpha}(X) \subseteq \mathcal{Y}$ for any input $X$, satisfying the property:

$$
\Pr(Y \in C_{\alpha}(X)) = \frac{\lceil (N^{\prime} + 1)\alpha \rceil}{N^{\prime} + 1} \tag{8}
$$

$$
\approx \alpha . 
\tag{9}
$$

Conformal prediction achieves this by utilizing a conformity score $s_{\theta}(Y \mid X)$, which intuitively quantifies the similarity between new samples and previously observed samples. When the conformity score increases with $Y$, an interval $C_{\alpha}(X) = (-\infty, s_{\theta}^{-1}(\alpha \mid X)]$ can be constructed, ensuring the conformal guarantee (8) at level $\alpha$.

Let $Q_{\theta}'(\alpha \mid X) = s_{\theta}^{-1}(\alpha \mid X)$ denote the (revised) model obtained through conformal prediction from $Q_{\theta}(\alpha \mid X)$. Under the assumption that $Q_{\theta}'$ is continuous and strictly increasing, the conformal guarantee implies that $\Pr(Y \leq Q_{\theta}'(\alpha \mid X)) \approx \alpha$, which indicates approximate probabilistic calibration at level $\alpha$.

Conformalized Quantile Regression (Romano, Patterson, et al., 2019) is an example of a conformal procedure, where the conformity score is defined as $s_{\theta}(Y \mid X) = Y - Q_{\theta}(\alpha \mid X)$, the quantile residual. Another example is Distributional Conformal Prediction (Izbicki et al., 2020; Chernozhukov et al., 2021), which employs the conformity score $s_{\theta}(Y \mid X) = F_{\theta}(Y \mid X)$, i.e. the PIT. Algorithm 2 summarizes how to compute calibrated quantiles using inductive conformal prediction.

# Algorithm 2 Calibrated quantiles with conformal prediction

Input: Trained model $M_{\theta}$, $\mathcal{D}' = \{(X_i',Y_i')\}_{i=1}^{N'}$, strictly increasing conformity score $s$, quantile level $\alpha \in [0,1]$, input $X$.

Compute $S_{i} = s_{\theta}(Y_{i}^{\prime}\mid X_{i}^{\prime})$ $(i = 1,\dots ,N^{\prime})$

Compute $\hat{q} = S_{(\lceil (N' + 1)\alpha \rceil)}$, where $S_{(k)}$ denotes the $k$th smallest value among $\{S_1,\dots ,S_{N'}, +\infty \}$

Return: Calibrated quantile $Q_{\theta}'(\alpha \mid X) = s_{\theta}^{-1}(\hat{q} \mid X)$

# 5.3. 
Regularization-based Calibration

Regularization-based calibration methods aim to enhance model calibration by incorporating a regularization term into the training objective. Although such methods are widely used in classification, relatively few are specifically designed for regression problems. In this section, we discuss two approaches: quantile regularization (Utpala and Rai, 2020) and the truncation method (Chung et al., 2021). The main steps of regularization-based calibration are summarized in Algorithm 3.

# Algorithm 3 Regularization-based calibration

Input: Model $M_{\theta}$, calibration regularizer $\mathcal{R}(\theta)$ and tuning parameter $\lambda \geq 0$

Compute $\theta^{*} = \arg \min_{\theta \in \Theta}\mathcal{L}^{\prime}(\theta ;\mathcal{D})$ where

$$
\mathcal{L}^{\prime}(\theta; \mathcal{D}) = \frac{1}{N} \sum_{i=1}^{N} \mathcal{L}\left(M_{\theta}(\cdot \mid X_{i}), Y_{i}\right) + \lambda \mathcal{R}(\theta; \mathcal{D})
$$

Return: Regularized model $M_{\theta^{*}}$

# 5.3.1. QUANTILE REGULARIZATION

The regularizer proposed by Utpala and Rai (2020) measures the deviation of the PIT variable $Z$ from a uniform distribution, which characterizes a probabilistically calibrated model. This regularization penalty encourages the selection of calibrated models during training.

The authors observed that the KL divergence between $Z$ and a uniform random variable equals the negative differential entropy of $Z$, denoted as $H(Z)$. 
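Before turning to the specific regularizers, Algorithm 3's composite objective $\mathcal{L}'$ can be made concrete with a toy NumPy sketch for a homoscedastic Gaussian model. The helper names and the simple PCE-style penalty standing in for $\mathcal{R}$ are our illustrative choices, not the paper's $\mathcal{R}_{\mathrm{QR}}$ or $\mathcal{R}_{\mathrm{Trunc}}$:

```python
import numpy as np
from math import erf

def gaussian_nll(y, mu, sigma):
    """Per-example negative log-likelihood of N(mu, sigma^2)."""
    return 0.5 * np.log(2 * np.pi * sigma**2) + (y - mu) ** 2 / (2 * sigma**2)

def gaussian_cdf(y, mu, sigma):
    """PIT values Z_i = F_theta(Y_i | X_i) for a Gaussian model."""
    z = (np.asarray(y) - mu) / (sigma * np.sqrt(2.0))
    return np.array([0.5 * (1.0 + erf(v)) for v in z])

def regularized_loss(y, mu, sigma, lam=0.2):
    """L'(theta; D) = (1/N) sum_i L(M_theta(.|X_i), Y_i) + lambda * R(theta; D)."""
    pit = gaussian_cdf(y, mu, sigma)
    alphas = np.linspace(0.05, 0.95, 19)
    ecdf = (pit[None, :] <= alphas[:, None]).mean(axis=1)
    penalty = np.mean(np.abs(alphas - ecdf))  # deviation of PITs from uniform
    return float(gaussian_nll(np.asarray(y), mu, sigma).mean() + lam * penalty)
```

Setting `lam=0` recovers the unregularized objective; larger values trade predictive accuracy for calibration, which is exactly the trade-off the paper's $\lambda$-selection procedure manages.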
To approximate $H(Z)$, they employed sample-spacing entropy estimation (Vasicek, 1976), resulting in the following regularizer:

$$
\begin{array}{l}
\mathcal{R}_{\mathrm{QR}}(\theta; \mathcal{D}) \qquad (10) \\[4pt]
= \dfrac{1}{N-k} \displaystyle\sum_{i=1}^{N-k} \log\left[\dfrac{N+1}{k}\left(Z_{(i+k)} - Z_{(i)}\right)\right] \qquad (11) \\[4pt]
\approx H(Z), \qquad (12)
\end{array}
$$

where $k$ is a hyperparameter satisfying $1 \leq k \leq N$, and $Z_{(i)}$ represents the $i$th order statistic of $Z$.

To ensure differentiability during optimization, the authors employed NeuralSort (Grover et al., 2019), a differentiable relaxation of the otherwise non-differentiable sorting operation.

# 5.3.2. TRUNCATION-BASED CALIBRATION

The regularization approach introduced by Chung et al. (2021), which we denote Trunc, involves truncating the predictive distribution based on the current level of calibration.

Given a quantile model $Q_{\theta}$, let $\hat{F}_{Z}(\alpha) = \frac{1}{N}\sum_{i=1}^{N}\mathbb{1}(Y_{i} \leq Q_{\theta}(\alpha \mid X_{i}))$ be the estimated PIT CDF evaluated at $\alpha$, and let $\rho(x,y) = (y - x)\mathbb{1}(x < y)$. The regularization objective for level $\alpha$ is defined as follows:

$$
\begin{array}{l}
\mathcal{R}_{\mathrm{Trunc}}(\theta; \mathcal{D}, \alpha) \qquad (13) \\[4pt]
= \begin{cases} \dfrac{1}{N} \displaystyle\sum_{i=1}^{N} \rho\left(Q_{\theta}(\alpha \mid X_{i}), Y_{i}\right) & \text{if } \hat{F}_{Z}(\alpha) < \alpha \\[6pt] \dfrac{1}{N} \displaystyle\sum_{i=1}^{N} \rho\left(Y_{i}, Q_{\theta}(\alpha \mid X_{i})\right) & \text{otherwise} \end{cases} \qquad (14)
\end{array}
$$

This regularization objective adjusts $\hat{F}_Z(\alpha)$ to match $\alpha$ by increasing it when $\hat{F}_Z(\alpha) < \alpha$, and decreasing it otherwise. 
The final regularization objective is computed by averaging $\mathcal{R}_{\mathrm{Trunc}}(\theta ;\mathcal{D},\alpha)$ over multiple quantile levels $\{\alpha_{j}\}_{j = 1}^{M}$:

$$
\mathcal{R}_{\mathrm{Trunc}}(\theta; \mathcal{D}) = \frac{1}{M} \sum_{j=1}^{M} \mathcal{R}_{\mathrm{Trunc}}(\theta; \mathcal{D}, \alpha_{j}). \tag{15}
$$

It is worth noting that Chung et al. (2021) combine the previous regularization objective with a sharpness objective that penalizes the width between the quantile predictions, given by $\frac{1}{M}\sum_{j=1}^{M}\frac{1}{N}\sum_{i=1}^{N}|Q_{\theta}(\alpha_j \mid X_i)-Q_{\theta}(1-\alpha_j \mid X_i)|$. Instead, we combine it with a strictly proper scoring rule.

# 5.4. New Regularization-based Calibration Methods

Building upon the quantile calibration method discussed in Section 5.3.1, we propose two new regularization objectives which compute a differentiable $\mathrm{PCE}_p$ using alternative statistical distances.

The first approach, named PCE-KDE, leverages the differentiable calibration map $\phi^{\mathrm{KDE}}$ (7) based on KDE. Given a set of quantile levels $\{\alpha_{j}\}_{j = 1}^{M}$, the regularization objective is given by

$$
\mathcal{R}_{\mathrm{PCE-KDE}}(\theta; \mathcal{D}) = \frac{1}{M} \sum_{j=1}^{M} \left| \alpha_{j} - \phi^{\mathrm{KDE}}\left(\alpha_{j}; \left\{Z_{i}\right\}_{i=1}^{N}\right) \right|^{p}, \tag{16}
$$

where $p > 0$. Note that $\mathcal{R}_{\mathrm{PCE-KDE}}$ reduces to $\mathrm{PCE}_p$ in (2) as $\tau$ in (7) goes to $\infty$.

The second approach considers distances of the form $\int_0^1 |Q_Z(\alpha) - Q_U(\alpha)|^p \, d\alpha$, where $Q_{Z}$ and $Q_{U}$ denote the quantile functions of the true and uniform distributions, respectively. 
When $p = 1$, this distance reduces to the 1-Wasserstein distance, which equals $\int_0^1 |F_Z(\alpha) - F_U(\alpha)| \, d\alpha$ and thus aligns with PCE (see Proposition 1 in Appendix A.1).

By exploiting the fact that $\mathbb{E}\big[F_Z(Z_{(i)})\big] = i/(N+1)$, we approximate $Q_{Z}(i/(N+1))$ using the $i$th order statistic $Z_{(i)}$. The regularization objective is given by

$$
\mathcal{R}_{\mathrm{PCE-Sort}}(\theta; \mathcal{D}) = \frac{1}{N} \sum_{i=1}^{N} \left| Z_{(i)} - \frac{i}{N+1} \right|^{p}, \tag{17}
$$

where $p > 0$. Differentiable relaxations of sorting, such as those proposed by Blondel et al. (2020) and Cuturi et al. (2019), can be employed to obtain the order statistics.

# 6. A Comparative Study of Probabilistic Calibration Methods

In continuation of the empirical study described in Section 4, we now evaluate the performance of the probabilistic calibration methods outlined in the previous section.

Specifically, we apply eight distinct calibration methods to the three neural regression models introduced in Section 4. These methods are evenly divided into two categories: post-hoc methods and regularization-based methods.

To assess the effectiveness of these calibration methods, we employ four different evaluation metrics. The evaluation is conducted on a set of 57 datasets, utilizing the same experimental setup detailed in Section 4. To ensure a fair and consistent comparison, all the methods have been implemented within a unified codebase.

# 6.1. Experimental Setup

Base probabilistic models and calibration methods. We consider the three probabilistic models presented in Section 4, namely MIX-NLL, MIX-CRPS, and SQR-CRPS. For the MIX models, when applying quantile recalibration, we transform the CDF using the empirical CDF estimator (Rec-EMP), the linear estimator (Rec-LIN), or the KDE estimator (Rec-KDE). 
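Rec-KDE applies the smoothed map (7) from Section 5.1, which PCE-KDE also builds on; a minimal NumPy sketch of that map (the function name is ours) is:

```python
import numpy as np

def phi_kde(alpha, pit_cal, tau=100.0):
    """Smoothed empirical CDF (7): a sigmoid relaxation of the indicator,
    i.e. a KDE of the PIT CDF with a logistic kernel, differentiable in the PITs."""
    z = np.asarray(pit_cal)
    return float(np.mean(1.0 / (1.0 + np.exp(-tau * (alpha - z)))))
```

As $\tau \to \infty$ the sigmoid approaches the hard indicator and `phi_kde` recovers the empirical map (6); the default `tau=100` mirrors the value the paper reports fixing for Rec-KDE and PCE-KDE.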
For SQR-CRPS, we transform multiple quantiles using conformalized quantile regression (CQR). For the three models, we consider the four regularization objectives presented in Sections 5.3 and 5.4 (with $p = 1$), namely $\mathcal{R}_{\mathrm{PCE-KDE}}$ (PCE-KDE), $\mathcal{R}_{\mathrm{PCE-Sort}}$ (PCE-Sort), $\mathcal{R}_{\mathrm{QR}}$ (QR), and $\mathcal{R}_{\mathrm{Trunc}}$ (Trunc). PCE-Sort is only shown in the Appendix because it performs similarly to PCE-KDE.

Metrics. We measure the accuracy of the probabilistic predictions using NLL and CRPS. For the SQR model, we estimate CRPS by averaging the quantile score at 64 equidistant quantile levels. Probabilistic calibration is measured using PCE, defined in (1). Finally, we measure sharpness using the mean standard deviation of the predictive distributions, denoted by STD.

Hyperparameters. In our experiments, MIX-NLL and MIX-CRPS output a mixture of 3 Gaussians, and SQR-CRPS outputs 64 quantiles. We justify the choice of these hyperparameters in Appendix C. The hyperparameter $\tau$ of Rec-KDE and PCE-KDE is fixed at 100, which was found to perform well empirically. For regularization methods, an important hyperparameter is the regularization factor $\lambda$. As previously observed in classification (Karandikar et al., 2021), we found that higher values of $\lambda$ tend to improve calibration but worsen NLL, CRPS, and STD. Karandikar et al. (2021) proposed to limit the loss in accuracy to a maximum of $1\%$. We adopt a similar strategy by selecting the $\lambda$ that minimizes PCE subject to a maximum increase in CRPS of $10\%$ on the validation set. For each dataset, we select $\lambda$ from the set $\{0, 0.01, 0.05, 0.2, 1, 5\}$, which corresponds to various degrees of calibration regularization.

Comparison of multiple models over many datasets. As in Karandikar et al. 
(2021), and since NLL, CRPS and STD have different scales across datasets, we report Cohen's d, which is a standardized effect size comparing the mean of one method (over 5 runs, in our case) against a baseline. Values of $-0.8$ and $-2$ are considered large and huge, respectively. Due to the heterogeneity of the datasets that we consider, the performance of our models can vary widely across datasets. To visualize the results, we show the distribution of Cohen's d using letter-value plots (Hofmann et al., 2011), which indicate the quantiles at levels $1/8$, $1/4$, $1/2$, $3/4$ and $7/8$, as well as outliers. A median value below zero indicates that the model improved the metric on more than half the datasets.

In order to assess whether significant differences exist between methods, we follow the recommendations of Ismail Fawaz et al. (2019), which are based on Demšar (2006). First, we test for a significant difference among model performances using the Friedman test (Friedman, 1940). Then, we use the pairwise post-hoc analysis recommended by Benavoli et al. (2016), namely a Wilcoxon signed-rank test (Wilcoxon, 1945) with Holm's alpha correction (Holm, 1979). The results of this procedure are shown using a critical difference diagram (Demšar, 2006). The lower the rank (further to the right), the better the performance of a model. A thick horizontal line indicates a group of models whose performance is not significantly different, at a significance level of 0.05.

# 6.2. Results

Figure 3 shows the letter-value plots for the Cohen's d of PCE (top panel) as well as the associated critical difference diagram (bottom panel), for all methods and datasets. The reference model is MIX-NLL. The results with other models as reference are available in Appendix B.1. Blue, green, and red colors are used for the post-hoc methods, the regularization-based methods, and the base models, respectively. 
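Cohen's d as used here can be sketched as follows, using the standard pooled-standard-deviation definition (the exact variant the paper uses is not spelled out in this excerpt):

```python
import numpy as np

def cohens_d(method_scores, baseline_scores):
    """Standardized effect size comparing a method's runs to a baseline's.

    Negative values mean the method achieved lower (better) metric values
    than the baseline; |d| >= 0.8 is considered large, |d| >= 2 huge."""
    m, b = np.asarray(method_scores, float), np.asarray(baseline_scores, float)
    # Pooled standard deviation over the two groups of runs
    pooled = np.sqrt(((len(m) - 1) * m.var(ddof=1) + (len(b) - 1) * b.var(ddof=1))
                     / (len(m) + len(b) - 2))
    return float((m.mean() - b.mean()) / pooled)
```

Because the mean difference is divided by the run-to-run spread, the resulting effect sizes are comparable across datasets even when the raw NLL, CRPS, or STD scales differ by orders of magnitude.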
The same information is given in Figures 4 and 5 for the CRPS and the NLL, respectively.

Comparison of PCE. As expected, Figure 3 shows that the PCE of calibration methods is improved compared to the base models. Furthermore, independently of the base model, we can see that post-hoc methods achieve significantly better PCE than regularization methods. When comparing PCE-KDE with QR, we can see that there is a significantly larger decrease in PCE with the MIX-CRPS base model compared to MIX-NLL. Finally, both PCE-KDE and Trunc decrease PCE for SQR-CRPS, without a significant difference between them.

![](images/d8fa5d7f9609667f9d71959d3f9da37f13bc5c0cabe9942046f990377437e3f6.jpg)
(a) Cohen's d of PCE with respect to the MIX-NLL model

![](images/c7454eda88719a9bd93259527a3f5e2c670312d4a7ee5162f66b1173db24df5e.jpg)
(b) Critical difference diagram
Figure 3: Comparison of PCE with multiple base losses and calibration methods.

![](images/cbe34fa4e99d98684592b3f90f9ffccdd0cd460d937d53e1bd039f3b165fb1f4.jpg)
(a) Cohen's d of CRPS with respect to the MIX-NLL model

![](images/7b25701a3772f4418542e2d9e62fc7348b07cee44ec53dbb374e740c38010167.jpg)
(b) Critical difference diagram
Figure 4: Comparison of CRPS with multiple base losses and calibration methods.

Comparison of CRPS. While post-hoc methods outperform regularization methods in terms of PCE, Figure 4 shows they have a higher CRPS (except for the SQR base model). This can be explained by the fact that regularization methods prevent the CRPS from increasing excessively, due to the selection criterion for $\lambda$.

Comparison of NLL. Figure 5 shows the importance of the calibration map. In fact, quantile recalibration with a linear map significantly increases the NLL, while smooth interpolation decreases PCE without a large increase in NLL. Note that we only consider MIX models since we cannot compute the NLL for SQR.

On the choice of a calibration method. If probabilistic calibration is critical to the application, our experiments suggest that post-hoc methods such as quantile recalibration and conformal prediction should be preferred. However, when we also want to control the CRPS or the NLL, regularization methods can offer a better trade-off in terms of calibration and sharpness. In fact, as shown in Figure 6 in Appendix B.1, when the base model is MIX-NLL, all regularization methods provide a significant improvement in probabilistic calibration without deteriorating the CRPS, NLL or STD. For the MIX-CRPS model, Figure 7 shows that QR has limited impact on CRPS and NLL, while providing better calibration. For the SQR-CRPS base model, Figure 8 shows that the SQR-CRPS + CQR conformal method significantly outperforms the Trunc and PCE-KDE regularization methods both in terms of PCE and CRPS. Overall, Appendix B.1 suggests that MIX-NLL + PCE-KDE, MIX-CRPS + QR and SQR-CRPS + CQR are good choices for practitioners aiming to improve PCE without significantly impacting other aspects of the conditional distribution. Finally, since both regularization and post-hoc methods are able to improve calibration, we investigate whether a combination of the two can lead to better performance. Figure 9 in Appendix B.2 shows that such a combination does not significantly improve probabilistic calibration, while increasing CRPS and NLL. This indicates that practitioners should exercise caution when applying regularization to a model that is already well-calibrated.

# 6.3. Link between Quantile Recalibration and Conformal Prediction

Conformal prediction methods are well-known for their finite-sample coverage guarantee. Interestingly, a specific implementation of quantile recalibration can be considered a special case of conformal prediction. 
This implies that quantile recalibration can also provide a finite-sample coverage guarantee. This observation could potentially explain why both methods, conformal prediction and quantile recalibration, are effective in improving probabilistic calibration.

![](images/d7e1b36881cfae25c66dfb0fadebe5ca9a52ce2dda150f85b0385eb8485a3f77.jpg)
(a) Cohen's d of NLL with respect to the MIX-NLL model

![](images/b59bdb95c240b8765a94688167c5c96ec32f600be68833469fc201c4fbc4a7d6.jpg)
(b) Critical difference diagram
Figure 5: Comparison of NLL with multiple base losses and calibration methods.

Theorem 1. Quantile recalibration is equivalent to Distributional Conformal Prediction (DCP) of left intervals at each coverage level $\alpha \in [0,1]$. The equivalence is obtained when the estimator of the calibration map is defined by a slightly different estimator than the conventional one in (6), namely $\phi_{\mathrm{DCP}}(\alpha) = \frac{1}{N' + 1}\sum_{i = 1}^{N'}\mathbb{1}(Z_i'\leq \alpha)$.

Proof. Given a predictive distribution $F_{\theta}$ learned from a training dataset $\mathcal{D} = \{(X_i,Y_i)\}_{i = 1}^N$ where $(X_{i},Y_{i})\stackrel{\mathrm{i.i.d.}}{\sim}P_{X,Y}$, let $Z_{i}^{\prime} = F_{\theta}(Y_{i}^{\prime}\mid X_{i}^{\prime})$ represent the PIT values computed on a separate calibration dataset $\mathcal{D}' = \{(X_i',Y_i')\}_{i = 1}^{N'}$, where $(X_{i}',Y_{i}')\stackrel{\mathrm{i.i.d.}}{\sim}P_{X,Y}$.

In the DCP approach, as outlined in Algorithm 2, the conformal scores are given by the PIT values $Z_{i}^{\prime}$. DCP first computes the $\alpha$ empirical quantile of the scores as $\hat{q} = Z_{(\lceil (N' + 1)\alpha \rceil)}^{\prime}$, where $Z_{(k)}^{\prime}$ represents the $k$th smallest value among $\{Z_1',\dots,Z_{N'}', +\infty \}$. 
Then, the conformalized quantile is computed as $Q_{\theta}'(\alpha \mid X) = Q_{\theta}(\hat{q}\mid X)$, which corresponds to conformal prediction with coverage $\alpha$ for the left interval $(-\infty ,Q_{\theta}'(\alpha \mid X)]$.

Let us consider quantile recalibration with the calibration map $\phi$ in Algorithm 1 given by $\phi_{\mathrm{DCP}}(\alpha) = \frac{1}{N' + 1}\sum_{i = 1}^{N'}\mathbb{1}(Z_i'\leq \alpha)$. It computes a recalibrated CDF $F_{\theta}'$ by composing the original CDF $F_{\theta}$ with $\phi_{\mathrm{DCP}}$, yielding $F_{\theta}'(y\mid X) = \phi_{\mathrm{DCP}}(F_{\theta}(y\mid X))$.

We observe that $\phi_{\mathrm{DCP}}$ is the CDF of a discrete random variable, with $\phi_{\mathrm{DCP}}^{-1}(\alpha) = Z_{(\lceil (N' + 1)\alpha \rceil)}^{\prime}$ representing its empirical quantile function. Furthermore, the composition $\phi_{\mathrm{DCP}}\circ F_{\theta}(\cdot \mid X)$ acts as the inverse function of $Q_{\theta}(\cdot \mid X)\circ \phi_{\mathrm{DCP}}^{-1}$. As a result, both the DCP approach and quantile recalibration yield QFs and CDFs that correspond to the same underlying distribution.

Quantile recalibration with other recalibration maps (e.g., $\phi^{\mathrm{EMP}}$, $\phi^{\mathrm{LIN}}$, or $\phi^{\mathrm{KDE}}$) would correspond to DCP where the empirical quantile $\hat{q}$ is selected using other strategies, which do not provide the exact conformal guarantee (8).

# 7. Conclusion

The observation that neural network classifiers tend to be miscalibrated (Guo et al., 2017) has prompted the development of various approaches for calibrating these models. In this paper, we present the largest empirical study conducted to date on the probabilistic calibration of neural regression models. Our study provides valuable insights into their performance and the selection of calibration methods. 
Notably, we introduce a novel differentiable calibration map based on kernel density estimation for quantile recalibration, as well as two novel regularization objectives derived from the PCE.

Our study reveals that regularization methods can provide a favorable tradeoff between calibration and sharpness. However, post-hoc methods demonstrate superior performance in terms of PCE. We attribute this finding to the finite-sample coverage guarantee offered by conformal prediction and demonstrate that quantile recalibration can be viewed as a specific case of conformal prediction.

Future investigations may extend the study of probabilistic calibration to other models, such as tree-based models, and explore alternative notions of calibration (Gneiting and Resin, 2021). Notably, distribution calibration represents a promising direction, as it has inspired the development of calibration methods (Song et al., 2019; Kuleshov and Deshpande, 2022).

# Acknowledgement

This work was supported by the Fonds de la Recherche Scientifique - FNRS under Grants T.0011.21 and J.0011.20.

# References

[1] Moloud Abdar et al. “A review of uncertainty quantification in deep learning: Techniques, applications and challenges”. Information Fusion 76 (Dec. 2021), pp. 243–297.
[2] Alessio Benavoli, Giorgio Corani, and Francesca Mangili. “Should We Really Use Post-Hoc Tests Based on Mean-Ranks?” Journal of Machine Learning Research 17.5 (2016), pp. 1-10.
[3] Mathieu Blondel et al. "Fast Differentiable Sorting and Ranking". In: Proceedings of the 37th International Conference on Machine Learning. Ed. by Hal Daumé III and Aarti Singh. Vol. 119. Proceedings of Machine Learning Research. PMLR, 2020, pp. 950-959.
[4] Johannes Bracher et al. "Evaluating epidemic forecasts in an interval format". PLoS Computational Biology 17.2 (Feb. 2021), e1008618.
[5] Victor Chernozhukov, Kaspar Wüthrich, and Yinchu Zhu. 
"Distributional conformal prediction". en. Proceedings of the National Academy of Sciences of the United States of America 118.48 (Nov. 2021). +[6] Youngseog Chung et al. “Beyond Pinball Loss: Quantile Methods for Calibrated Uncertainty Quantification”. In: Advances in Neural Information Processing Systems. Ed. by M Ranzato et al. Vol. 34. Curran Associates, Inc., 2021, pp. 10971–10984. +[7] Marco Cuturi, Olivier Teboul, and Jean-Philippe Vert. "Differentiable Ranks and Sorting using Optimal Transport" (28 5 2019). arXiv: 1905.11885 [cs.LG]. +[8] A P Dawid. “Present Position and Potential Developments: Some Personal Views: Statistical Theory: The Prequential Approach”. Journal of the Royal Statistical Society. Series A 147.2 (1984), pp. 278–292. +[9] Janez Demšar. "Statistical Comparisons of Classifiers over Multiple Data Sets". Journal of machine learning research: JMLR 7 (Dec. 2006), pp. 1-30. +[10] Dheeru Dua and Casey Graff. UCI Machine Learning Repository. 2017. +[11] Rasool Fakoor et al. "Flexible Model Aggregation for Quantile Regression" (Feb. 2021). arXiv: 2103.00083 [stat.ML]. +[12] Feldman, Bates, and Romano. "Improving conditional coverage via orthogonal quantile regression". Advances in neural information processing systems (2021). +[13] Rina Foygel Barber et al. “The limits of distribution-free conditional predictive inference”. Information and Inference: A Journal of the IMA 10.2 (156 2021), pp. 455–482. +[14] Milton Friedman. “A Comparison of Alternative Tests of Significance for the Problem of $m$ Rankings”. en. The Annals of Mathematical Statistics 11.1 (Mar. 1940), pp. 86-92. + +[15] Yarin Gal and Zoubin Ghahramani. "Dropout as a Bayesian Approximation: Representing Model Uncertainty in Deep Learning". In: Proceedings of The 33rd International Conference on Machine Learning. Ed. by Maria Florina Balcan and Kilian Q Weinberger. Vol. 48. Proceedings of Machine Learning Research. New York, USA: PMLR, 2016, pp. 1050-1059. 
+[16] Jakob Gawlikowski et al. “A Survey of Uncertainty in Deep Neural Networks” (July 2021). arXiv: 2107.03342 [cs.LG]. +[17] Pieter Gijsbers et al. “An Open Source AutoML Benchmark” (July 2019). arXiv: 1907.00909 [cs.LG]. +[18] Tilmann Gneiting and Johannes Resin. "Regression Diagnostics meets Forecast Evaluation: Conditional Calibration, Reliability Diagrams, and Coefficient of Determination" (Aug. 2021). arXiv: 2108.03210 [stat.ME]. +[19] Tilmann Gneiting, Daniel Wolffram, et al. "Model Diagnostics and Forecast Evaluation for Quantiles". Annual Review of Statistics and Its Application (July 2023). +[20] E P Grimit et al. “The continuous ranked probability score for circular variables and its application to mesoscale forecast ensemble verification”. en. Quarterly Journal of the Royal Meteorological Society 132.621C (Oct. 2006), pp. 2925–2942. +[21] Léo Grinsztajn, Edouard Oyallon, and Gáel Varoquaux. “Why do tree-based models still outperform deep learning on tabular data?” (July 2022). arXiv: 2207.08815 [cs.LG]. +[22] Aditya Grover et al. "Stochastic Optimization of Sorting Networks via Continuous Relaxations". In: International Conference on Learning Representations. 2019. +[23] Vitor Guizilini et al. "3D Packing for Self-Supervised Monocular Depth Estimation". In: IEEE Conference on Computer Vision and Pattern Recognition (CVPR). 2020. +[24] Varun Gulshan et al. "Development and Validation of a Deep Learning Algorithm for Detection of Diabetic Retinopathy in Retinal Fundus Photographs". en. JAMA: the journal of the American Medical Association 316.22 (13 12 2016), pp. 2402-2410. +[25] Chuan Guo et al. "On Calibration of Modern Neural Networks". In: Proceedings of the 34th International Conference on Machine Learning. Ed. by Doina Precup and Yee Whye Teh. Vol. 70. Proceedings of Machine Learning Research. PMLR, 2017, pp. 1321-1330. + +[26] Heike Hofmann, Karen Kafadar, and Hadley Wickham. Letter-value plots: Boxplots for large data. Tech. rep. had.co.nz, 2011. 
[27] Sture Holm. “A Simple Sequentially Rejective Multiple Test Procedure”. Scandinavian journal of statistics, theory and applications 6.2 (1979), pp. 65–70.
[28] Hassan Ismail Fawaz et al. “Deep learning for time series classification: a review”. Data mining and knowledge discovery 33.4 (Jan. 2019), pp. 917–963.
[29] Rafael Izbicki, Gilson Shimizu, and Rafael Stern. "Flexible distribution-free conditional predictive bands using density estimators". In: Proceedings of the Twenty Third International Conference on Artificial Intelligence and Statistics. Ed. by Silvia Chiappa and Roberto Calandra. Vol. 108. Proceedings of Machine Learning Research. PMLR, 2020, pp. 3068-3077.
[30] Laurent Valentin Jospin et al. "Hands-On Bayesian Neural Networks—A Tutorial for Deep Learning Users". IEEE Computational Intelligence Magazine 17.2 (May 2022), pp. 29-48.
[31] Archit Karandikar et al. “Soft Calibration Objectives for Neural Networks” (July 2021). arXiv: 2108.00106 [cs.LG].
[32] Volodymyr Kuleshov and Shachi Deshpande. "Calibrated and Sharp Uncertainties in Deep Learning via Density Estimation". In: Proceedings of the 39th International Conference on Machine Learning. Ed. by Kamalika Chaudhuri et al. Vol. 162. Proceedings of Machine Learning Research. PMLR, 2022, pp. 11683-11693.
[33] Volodymyr Kuleshov, Nathan Fenner, and Stefano Ermon. "Accurate Uncertainties for Deep Learning Using Calibrated Regression". In: Proceedings of the 35th International Conference on Machine Learning. Ed. by Jennifer Dy and Andreas Krause. Vol. 80. Proceedings of Machine Learning Research. PMLR, 2018, pp. 2796-2804.
[34] Lakshminarayanan, Pritzel, et al. "Simple and scalable predictive uncertainty estimation using deep ensembles". Advances in neural information processing systems (2017).
[35] Matthias Minderer et al. “Revisiting the Calibration of Modern Neural Networks”. In: Advances in Neural Information Processing Systems. Ed. by M Ranzato et al. Vol. 34. 
Curran Associates, Inc., 2021, pp. 15682–15694.
+
+[36] Tim Pearce et al. “High-Quality Prediction Intervals for Deep Learning: A Distribution-Free, Ensembled Approach”. In: Proceedings of the 35th International Conference on Machine Learning. Ed. by Jennifer Dy and Andreas Krause. Vol. 80. Proceedings of Machine Learning Research. PMLR, 2018, pp. 4075–4084.
+[37] Pierre Pinson and Renate Hagedorn. “Verification of the ECMWF ensemble forecasts of wind speed against analyses and observations”. Meteorological Applications 19.4 (Dec. 2012), pp. 484–500.
+[38] Teodora Popordanoska, Raphael Sayer, and Matthew B Blaschko. “A Consistent and Differentiable Lp Canonical Calibration Error Estimator”. Oct. 2022.
+[39] Yaniv Romano, Evan Patterson, and Emmanuel Candès. “Conformalized quantile regression”. Advances in neural information processing systems (2019).
+[40] Hao Song et al. “Distribution calibration for regression”. In: Proceedings of the 36th International Conference on Machine Learning. Ed. by Kamalika Chaudhuri and Ruslan Salakhutdinov. Vol. 97. Proceedings of Machine Learning Research. PMLR, 2019, pp. 5897–5906.
+[41] Natasa Tagasovska and David Lopez-Paz. “Single-model uncertainties for deep learning”. Advances in neural information processing systems (2019).
+[42] Jayaraman J Thiagarajan et al. “Designing accurate emulators for scientific processes using calibration-driven deep models”. Nature communications 11.1 (June 2020), p. 5622.
+[43] Saiteja Utpala and Piyush Rai. “Quantile Regularization: Towards Implicit Calibration of Regression Models” (Feb. 2020). arXiv: 2002.12860 [cs.LG].
+[44] Oldrich Vasicek. “A Test for Normality Based on Sample Entropy”. Journal of the Royal Statistical Society. Series B, Statistical methodology 38.1 (1976), pp. 54–59.
+[45] Vladimir Vovk et al. “Conformal calibrators”. In: Proceedings of the Ninth Symposium on Conformal and Probabilistic Prediction and Applications. Ed. by Alexander Gammerman et al. Vol. 128. Proceedings of Machine Learning Research.
PMLR, 2020, pp. 84–99.
+[46] Frank Wilcoxon. “Individual Comparisons by Ranking Methods”. Biometrics Bulletin 1.6 (1945), pp. 80–83.
+[47] Hee Suk Yoon et al. “ESD: Expected Squared Difference as a Tuning-Free Trainable Calibration Measure”. In: The Eleventh International Conference on Learning Representations. 2023.
+
+[48] Shengjia Zhao, Tengyu Ma, and Stefano Ermon. “Individual Calibration with Randomized Forecasting”. In: Proceedings of the 37th International Conference on Machine Learning. Ed. by Hal Daumé III and Aarti Singh. Vol. 119. Proceedings of Machine Learning Research. PMLR, 2020, pp. 11387–11397.
+[49] Tianhui Zhou et al. “Estimating Uncertainty Intervals from Collaborating Networks”. Journal of Machine Learning Research 22 (Jan. 2021).
+
+# A. Proofs
+
+# A.1. Integral of the Absolute Difference between CDFs or QFs
+
+Proposition 1. Let $F_A, F_B : [0,1] \to [0,1]$ denote two strictly increasing CDFs of random variables defined on $[0,1]$ with corresponding QFs $Q_A$ and $Q_B$. Then,
+
+$$
+\int_{0}^{1} \left| F_{A}(q) - F_{B}(q) \right| dq = \int_{0}^{1} \left| Q_{A}(p) - Q_{B}(p) \right| dp. \tag{18}
+$$
+
+Proof. We define two functions $r, s: [0,1] \times [0,1] \to \{0,1\}$ where
+
+$$
+r(q, p) = \begin{cases} 1 & \text{if } F_{A}(q) \leq p \leq F_{B}(q) \text{ or } F_{B}(q) \leq p \leq F_{A}(q) \\ 0 & \text{otherwise,} \end{cases} \tag{19}
+$$
+
+and
+
+$$
+s(q, p) = \begin{cases} 1 & \text{if } Q_{A}(p) \leq q \leq Q_{B}(p) \text{ or } Q_{B}(p) \leq q \leq Q_{A}(p) \\ 0 & \text{otherwise.} \end{cases} \tag{20}
+$$
+
+Let us show that $r$ and $s$ are equal.
Considering $q \in [0,1]$ and $p \in [0,1]$, we can write
+
+$$
+F_{A}(q) \leq p \leq F_{B}(q) \tag{21}
+$$
+
+$$
+\Longleftrightarrow \left(F_{A}(q) \leq p\right) \wedge \left(p \leq F_{B}(q)\right) \tag{22}
+$$
+
+$$
+\Longleftrightarrow \left(q \leq Q_{A}(p)\right) \wedge \left(Q_{B}(p) \leq q\right) \tag{23}
+$$
+
+$$
+\Longleftrightarrow Q_{B}(p) \leq q \leq Q_{A}(p), \tag{24}
+$$
+
+where (23) holds since both $F_{A}$ and $F_{B}$ are strictly increasing.
+
+Similarly, $F_B(q) \leq p \leq F_A(q) \iff Q_A(p) \leq q \leq Q_B(p)$. Hence $r(q,p) = 1 \iff s(q,p) = 1$ and $r$ and $s$ are equal.
+
+By Fubini's theorem, we have
+
+$$
+\int_{0}^{1} \int_{0}^{1} r(q, p)\, dp\, dq = \int_{0}^{1} \int_{0}^{1} s(q, p)\, dq\, dp. \tag{25}
+$$
+
+Furthermore, upon evaluating the inner integrals, we obtain
+
+$$
+\int_{0}^{1} r(q, p)\, dp = \begin{cases} \int_{F_{A}(q)}^{F_{B}(q)} 1\, dp & \text{if } F_{A}(q) \leq F_{B}(q) \\ \int_{F_{B}(q)}^{F_{A}(q)} 1\, dp & \text{otherwise} \end{cases} \tag{26}
+$$
+
+$$
+= \left| F_{A}(q) - F_{B}(q) \right|. \tag{27}
+$$
+
+Similarly, we have $\int_0^1 s(q,p)\, dq = |Q_A(p) - Q_B(p)|$. Finally, by substituting these results in (25), we prove (18).
+
+# B. Detailed Results
+
+This section presents additional experimental results.
+
+# B.1. Comparison between Recalibration, Conformal Prediction and Regularization Approaches per Base Model
+
+First, we present the results of our experiments comparing recalibration, conformal prediction, and regularization approaches. Our objective is to determine which metrics are improved by these methods compared to a vanilla model. We divide our comparisons based on the three base models considered: MIX-NLL (Figure 6), MIX-CRPS (Figure 7) and SQR-CRPS (Figure 8).
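As an aside, the identity (18) from Proposition 1 in Appendix A.1 is easy to confirm numerically for any concrete pair of CDFs; a minimal sketch (the CDF pair below is an illustrative choice, not one from the paper):

```python
import numpy as np

# Numerical check of the identity (18) for an illustrative pair of strictly
# increasing CDFs on [0, 1] (this specific pair is not from the paper):
#   F_A(q) = q**2            with quantile function Q_A(p) = sqrt(p)
#   F_B(q) = 1 - (1 - q)**2  with quantile function Q_B(p) = 1 - sqrt(1 - p)
grid = np.linspace(0.0, 1.0, 200_001)

lhs = np.abs(grid**2 - (1.0 - (1.0 - grid)**2)).mean()            # approx of the integral of |F_A(q) - F_B(q)| dq
rhs = np.abs(np.sqrt(grid) - (1.0 - np.sqrt(1.0 - grid))).mean()  # approx of the integral of |Q_A(p) - Q_B(p)| dp

print(float(lhs), float(rhs))  # both ≈ 1/3
```

The two Riemann sums agree (both equal 1/3 analytically for this pair), as Proposition 1 predicts.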
+
+Since NLL, CRPS, and standard deviation cannot be directly compared across different datasets, we use Cohen's d as an effect size measure, with the baseline being a vanilla model of the same base model. For instance, the baseline for MIX-CRPS + Rec-EMP is MIX-CRPS. Additionally, we provide critical difference diagrams to assess the significance of differences.
+
+Overall, recalibration and conformal prediction demonstrate significantly improved PCE compared to the baseline, although there is a trade-off with other metrics. For both MIX-NLL and MIX-CRPS, Rec-EMP yields an infinite NLL, Rec-LIN substantially increases the NLL, while Rec-KDE has a lesser impact on the NLL. However, Rec-KDE results in a significant degradation of CRPS compared to other recalibration methods when the base model is MIX-CRPS. In the case of quantile predictions, CQR significantly improves PCE.
+
+While regularization methods generally lead to improved PCE, they are still outperformed by recalibration and conformal prediction in this regard. However, we observe that with the MIX-NLL base model, regularization methods (PCE-KDE, PCE-Sort and QR) have minimal impact on CRPS, NLL, and STD compared to recalibration methods. With the MIX-CRPS base model, the difference in CRPS between recalibration and regularization is less pronounced. Nevertheless, the regularization methods PCE-KDE and PCE-Sort, which rely on PCE, clearly yield less sharp predictions than the recalibration methods.
+
+Regarding quantile predictions, the case is reversed: conformal prediction (SQR-CRPS + CQR) yields less sharp predictions, while regularization with SQR-CRPS + Trunc leads to sharper predictions.
+
+![](images/97ad3332964a8fca33965768f7d8e70891afd401f9c2b5ec23d20fa78dc7b9a0.jpg)
+(a) Boxplots of Cohen's d of different metrics on all datasets, with respect to MIX-NLL.
+ +![](images/2d45d3985be7040195c55e499bcea72804b53d43c9f6343d7162d5f83a51ef79.jpg) + +![](images/67c4da20e5a8b7f902643e5d6f5184411bd9b5627d3d8849b090c2de18b4c96c.jpg) +Figure 6: Comparison of different metrics where the base model is MIX-NLL. + +![](images/d6242dadb9b0c5cd34fdd67c54edefd1d8b0408c49287453f071ddf002649487.jpg) + +![](images/a733ba241b08f0a9d926b88505dc8001c039e018c08eaa4bf7b5ba3db6d0417c.jpg) + +![](images/1371d93433fdf802b2fb5dd8026a7a363462d95750baf20509c565427b015384.jpg) + +![](images/12ac48fce30f2c187b0c0ab36534c7174992aafe5d14fd6044fde4d7622f7159.jpg) +(a) Boxplots of Cohen's d of different metrics on all datasets, with respect to MIX-CRPS. + +![](images/95a24646354db8f29df5ebe561fd00d7df0769b0212d39a678c83e0306e9818f.jpg) + +![](images/04abe1e5d6a39087f860866e7433b7b7680f97435fa81c028d92acd26dba9519.jpg) +(b) PCE + +![](images/c25d9888761de42fc3caed018fe70705188f6ba188b7203640b3245c84a0f5a8.jpg) +(c) CRPS + +![](images/09c5d655c1e8c9565814337efcf3b370ad89b9e2797658c3b8f6a73ef3f453cf.jpg) +(d) NLL + +![](images/3c7452e6e9ec2213b912215e634ea960c67c912fb3edbaeac30a5c260027c151.jpg) +(e) STD +Figure 7: Comparison of different metrics where the base model is MIX-CRPS. + +![](images/820887803dccf237f185b569465ce085fdcf347ae763f9c5799011d3578721c5.jpg) + +![](images/1e223250be6e2f23335480769ed237449cbe589c8f426170bcee8224b17b3c72.jpg) + +![](images/b13c4972904acc80bdae380f12d9c09735fd43bf61226b15669e7c821cd9d74c.jpg) +(a) Boxplots of Cohen's d of different metrics on all datasets, with respect to SQR-CRPS. + +![](images/ea4e011dc681d82f28e73e786da4c44812ec68cb57cc90232f7a74a809a08e1a.jpg) +(b) PCE + +![](images/771664480b21ca6b849af86357c8a2ca671369a22800b6fa34f5261d4cd93267.jpg) +(c) CRPS + +![](images/37ac8756e0637f2567a35330226ba837531f7b496dab9d9a8103a08adeb54969.jpg) +(d) STD +Figure 8: Comparison of different metrics where the base model is SQR-CRPS. + +# B.2. 
Combining Regularization and Post-hoc Methods + +In this paper, we have established that post-hoc methods are generally more favorable than regularization methods when the primary objective is to enhance probabilistic calibration. Since regularization methods operate during training and do not alter the form of predictions (e.g., Gaussian mixture predictions), they can be easily combined with post-hoc methods. In this section, we address the question: "Which metrics do regularization methods improve when combined with a post-hoc method compared to the same model without regularization?" + +To ensure clarity, we focus our presentation on a selection of paired regularization and post-hoc methods. Figure 9 illustrates the impact of regularization on various metrics for these pairs. In Figure 9(a), the baseline corresponds to the same post-hoc method without regularization, enabling a direct measurement of the effect of adding regularization to a post-hoc method. It is important to note that the boxplots in this figure cannot be directly compared due to the different baselines. + +The critical difference diagrams provide a comparison of all methods, with and without regularization. Overall, when combined with post-hoc methods, regularization has a negative impact: no regularization method significantly improves probabilistic calibration, and they tend to negatively affect CRPS, NLL, and STD metrics. + +![](images/e27c805cab7ea488215f98c7f93db22afedc7076daed2016abde346ec786b273.jpg) + +![](images/6b3037bb9084abb2449332c1879889ba88bed1460985b22fee06254d7e7ac061.jpg) + +![](images/82c707861bd86073a66ec6bd6c3c7b858841a11b6fcb74f4a42d30e8581497ca.jpg) +(a) Boxplots of Cohen's d of different metrics on all datasets, with respect to the same model except that regularization is not applied. 
+
+![](images/319c88f1da5358728ed9fb9ac8ff2d45180275ef0d074f1f2276592078ecec07.jpg)
+
+![](images/b3807b56198654d13c82f6587cbfac6be1a2a39e49c6181bba0ef461a5c1dc05.jpg)
+(b) PCE
+
+![](images/0d41e55fc5c900de5540d492b3e95e28ba8a6e9ed13350e3549f5e0715217d70.jpg)
+(c) CRPS
+
+![](images/cf376283f7078e894ead4ddc7e1b62fc295293c5373337b58a834edc9699cad3.jpg)
+(d) NLL
+
+![](images/5218a2914415bf1b3274aeeb140b31d1472bb5dccfa4a0ef5feaa732d841d96c.jpg)
+(e) STD
+Figure 9: Comparison of different metrics showing the effect of regularization when combined with a post-hoc method, compared to the same model without regularization.
+
+# B.3. Post-hoc Calibration based on the Training Dataset
+
+In this paper, the calibration map or conformity scores have been computed on a separate calibration dataset, following common practice in the literature. However, holding out data for post-hoc calibration reduces the quantity of training data. For the sake of clarity, we focus our analysis on the MIX-NLL and SQR-CRPS base losses.
+
+In this section, we compare post-hoc calibration based on the training dataset to post-hoc calibration based on the calibration dataset. We aim to answer the question: "Can it be beneficial to use post-hoc calibration based on the training dataset, and should it be preferred over regularization methods when there is no calibration dataset available?" One advantage of regularization methods and post-hoc calibration methods based on the training dataset is that the base model can be trained on more data (80% in our experiments, compared to 65% when holding out the calibration dataset).
+
+Figure 10 presents a comparison of different methods, with post-hoc methods trained on the calibration dataset indicated by (calib) and those trained on the training dataset indicated by (train).
We observe that post-hoc methods based on the calibration dataset tend to significantly outperform their counterparts based on the training dataset in terms of probabilistic calibration. Specifically, MIX-NLL + Rec-LIN and MIX-NLL + Rec-KDE achieve significantly better calibration when the calibration map is learned on the calibration dataset. Similarly, SQR-CRPS + CQR tends to improve calibration when conformal prediction is based on the calibration dataset. It is worth noting that even without a calibration dataset, post-hoc methods tend to be better calibrated than regularization methods. + +Finally, we observe that post-hoc methods based on the training dataset tend to achieve better CRPS and NLL scores, although not significantly. Additionally, they are also significantly sharper. This may be attributed to the larger training dataset available to the base model when there is no held-out dataset. + +![](images/5d56cc270b7d63c57db64505bc78b7ac31f0f8418168731a775b9480dccca84e.jpg) + +![](images/9b0af993775041b3a1aa6b3c2335c52cf5631001aed685affcea15189e62bb09.jpg) + +![](images/060130efaa5132cd5ee4bf343f8b6259420dcd1191889bec77c4a095b7db409f.jpg) +(a) Boxplots of Cohen's d of different metrics on all datasets, with respect to MIX-NLL. + +![](images/d2763b5ad8693a43866131fc35d642ae886578b7f88534ab2a754341b0e1f09b.jpg) + +![](images/292a77b8c2fb989c2f4c673ceffd14065788ee17e2240d34a186203f1c6cf2a9.jpg) +(b) PCE + +![](images/01a791266a15ed22564405023b285cbe8a63e5e6c455bfb8a7bb2aaa15591e6d.jpg) +(c) CRPS + +![](images/ddb1b51cb9e720ee0191ae33072f7e42ac20f63c110c9d228c21e1281ee329a4.jpg) +(d) NLL + +![](images/f41f5c6d78c99f55dc6c7666e54e4ea90523dbf1e7c724d5bddd596cc3bcbbb4.jpg) +(e) STD +Figure 10: Comparison of different metrics. + +# B.4. Calibration of Vanilla Models + +Figure 12 and Figure 13 provide additional results from our empirical study in Section 4, specifically focusing on the PCE obtained with MIX-CRPS and SQR-CRPS. 
The datasets are ordered in the same manner as in Figure 2 for comparison. We observe that SQR-CRPS is less calibrated than MIX-NLL.
+
+![](images/0b60ea88dafff27a06379bc4f15b238d8d795d3e47a28ac9d4db54f37084e20a.jpg)
+Figure 11: PCE obtained on different datasets, with examples of reliability diagrams. The height of each bar is the mean PCE of 5 runs with different dataset splits, while the error bar represents the standard error of the mean. For 5 datasets, the PIT reliability diagrams of 5 runs are displayed in the bottom row.
+Figure 12: PCE of SQR-CRPS, on all datasets.
+
+![](images/72c38ca38db487283862c023d5e07afd0857f08fae49437a2094156841d10ba9.jpg)
+Figure 13: PCE of MIX-CRPS, on all datasets.
+
+# B.5. Distribution of the Test Statistic
+
+Figure 14 shows the distribution of the test statistic, as described in Section 4. We observe that, in many cases, the average PCE of the compared models is larger than all of the $10^{4}$ samples of the average PCE from a probabilistically calibrated model. Among the different calibration methods, post-hoc calibration with MIX-NLL + Rec-EMP achieves the highest level of calibration performance in the majority of cases.
+
+![](images/0cb69cc4efd7ee8e5368374f6886c0f56c0ef5a025b5d1a665d4b015ee65f653.jpg)
+Figure 14: Distribution of the test statistic on all datasets for different models.
+
+# B.6. Reliability Diagrams
+
+Figure 15 and Figure 16 compare reliability diagrams obtained on models without and with post-hoc calibration, respectively. With only a few exceptions, the post-hoc calibrated models stay visually close to the diagonal line.
+
+![](images/d2cca9d89d7ef9cf3c8c0bc2d12d36af3bd2e77a443ea9be5dbec9ef01a472ce.jpg)
+Figure 15: Reliability diagrams on all datasets for different models.
+
+![](images/8801e81ddbd7058ac512c86c191bf6bcb042971e71eba93a29074c1314c99b57.jpg)
+Figure 16: Reliability diagrams on all datasets for different models with post-hoc calibration.
+
+# C.
Hyperparameters
+
+In our experiments, we adopt a fixed architecture consisting of 3 hidden layers with 100 units per layer, ReLU nonlinearities, and a dropout rate of 0.2 on the last hidden layer. Early stopping with a patience of 30 is applied to select the epoch with the lowest base loss on the validation dataset.
+
+In this section, we examine how different model parameters affect performance, including the number of components in Gaussian mixture predictions, the number of quantiles in quantile predictions, and the number of hidden layers in the underlying models.
+
+Figure 17 compares models that predict mixtures with varying numbers of components against the reference of 3 components. Notably, when there is only 1 component (yielding a single Gaussian prediction), the model's performance significantly deteriorates in terms of CRPS, NLL, and sharpness. However, as the number of components increases beyond 3, the differences become less pronounced.
+
+Figure 18 compares models with different numbers of quantiles against a reference of 64 quantiles. The results reveal a consistent pattern: predicting more quantiles consistently enhances performance in terms of probabilistic calibration, CRPS, and sharpness.
+
+Figure 19 compares models with different numbers of layers relative to a 3-layer model. It highlights that models with 2, 3, or 5 layers tend to yield superior performance in terms of CRPS and NLL.
+
+![](images/7b1ea687106446fa1d60d78c8f1592c86127af6072d974925b2246c56426ec91.jpg)
+Figure 17: Comparison of models whose predictions are Gaussian mixtures with different numbers of components. All models are trained with the NLL loss, without regularization or post-hoc method. The box plots show Cohen's d of different metrics on all datasets. Cohen's d is computed with respect to a model whose predictions are Gaussian mixtures with 3 components.
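The effect sizes shown in these box plots can be obtained with the standard pooled-standard-deviation form of Cohen's d; a minimal sketch (the pooled-SD estimator and the numbers below are illustrative assumptions, since the appendix does not spell out the exact variant used):

```python
import numpy as np

def cohens_d(method_scores, baseline_scores):
    """Cohen's d: standardized mean difference between two groups of metric
    values (e.g., 5 runs of a method vs. 5 runs of its baseline), using the
    pooled sample standard deviation."""
    a = np.asarray(method_scores, dtype=float)
    b = np.asarray(baseline_scores, dtype=float)
    n1, n2 = len(a), len(b)
    pooled_var = ((n1 - 1) * a.var(ddof=1) + (n2 - 1) * b.var(ddof=1)) / (n1 + n2 - 2)
    return (a.mean() - b.mean()) / np.sqrt(pooled_var)

# Hypothetical CRPS values over 5 runs: a model variant vs. the reference model.
d = cohens_d([0.41, 0.43, 0.40, 0.42, 0.44], [0.50, 0.52, 0.49, 0.51, 0.53])
print(round(float(d), 3))  # ≈ -5.692: the variant's CRPS sits far below the reference's
```

A negative d means the variant lowers the metric relative to the reference, matching the sign convention of the box plots.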
+
+![](images/a10352a9bf805dd4255c77560e3cc45af3538574a9f9d4fdabbb3bb74d87ee02.jpg)
+
+![](images/eeeeb80455f742e2015eebc573768f90f5737f8c9956d789017bde6a9aacfc2e.jpg)
+
+![](images/2ec16523b49452b0196d560fe7cc5198f12f374ff6460a3404d59ef413257c7f.jpg)
+
+![](images/dec6e036b20aa88fcae907c924e1b710c7aa21f7739bcaeca128b596d6278c7b.jpg)
+Figure 18: Comparison of models that predict different numbers of quantiles. All models are trained with the CRPS loss, without regularization or post-hoc method. The box plots show Cohen's d of different metrics on all datasets. Cohen's d is computed with respect to a model whose predictions are 64 quantiles.
+
+![](images/08ddeca13322a127c7d759c4f882d1af141ff49131fd30a2c9026f6b8d534254.jpg)
+
+![](images/25f6d0fe8248f6183bafd6362f7cfbf7993c2f155232074ea45a1bbc5e67d69e.jpg)
+
+![](images/57e2e886664d8f18057046616b7e9bf72bf955ebe8d33431f516be9831353c72.jpg)
+Figure 19: Comparison of models with different numbers of layers. All models predict Gaussian mixtures and are trained with the NLL loss, without regularization or post-hoc method. The box plots show Cohen's d of different metrics on all datasets. Cohen's d is computed with respect to a model with 3 hidden layers.
+
+![](images/3874052a29d195baa856ceb1470bb100e85950502dfbf4a0be49ea8c12bb36a1.jpg)
+
+# D. Tabular Regression Datasets
+
+Table 1 presents the datasets considered in our experiments. To ensure consistency, when datasets are available from multiple sources, we select one specific source per dataset, as indicated in Figure 1. Our selection prioritizes the suites 297, 299, and 269 of OpenML, followed by UCI datasets.
+
+In the OpenML suite 297, we discovered that the datasets houses and california are identical, and thus we only included the california dataset in our analysis. Moreover, the UCI archive for the dataset wine_quality contains two separate datasets for red and white wine.
As there was no indication regarding the specific dataset(s) used in previous studies, we followed the approach of Grinsztajn et al. (2022) and only considered the dataset for white wine. Other studies listed in Figure 1 may have employed the alternative dataset or a combination of both datasets.
+
+Table 1: Datasets
+
| Group | Dataset | Abbrev. | Nb of training instances | Nb of features |
| --- | --- | --- | --- | --- |
| UCI | CPU | CP1 | 135 | 7 |
| | Yacht | YAC | 200 | 6 |
| | MPG | MPG | 254 | 7 |
| | Energy | ENE | 499 | 9 |
| | Crime | CRI | 531 | 104 |
| | Fish | FIS | 590 | 6 |
| | Concrete | CON | 669 | 8 |
| | Airfoil | AI1 | 976 | 5 |
| | Kin8nm | KIN | 5324 | 8 |
| | Power | POW | 6219 | 4 |
| | Naval | NAV | 7757 | 17 |
| | Protein | PRO | 29724 | 9 |
| OpenML 297 | wine_quality | WIN | 4223 | 11 |
| | isolet | ISO | 5068 | 613 |
| | cpu_ACT | CP2 | 5324 | 21 |
| | sulfur | SUL | 6552 | 6 |
| | Brazilian_houses | BRA | 6949 | 8 |
| | Ailerons | AIL | 8937 | 33 |
| | MiamiHousing2016 | MIA | 9055 | 13 |
| | pol | POL | 9750 | 26 |
| | elevators | ELE | 10789 | 16 |
| | Bike_Sharing_Demand | BIK | 11296 | 6 |
| | fifa | FIF | 11740 | 5 |
| | california | CAL | 13416 | 8 |
| | superconduct | SUP | 13820 | 79 |
| | house_sales | HO3 | 14048 | 15 |
| | house_16H | HO1 | 14809 | 16 |
| | diamonds | DIA | 35061 | 6 |
| | medical_charges | MED | 50000 | 3 |
| | year | YEA | 50000 | 90 |
| | nyc-taxi-green-dec-2016 | NYC | 50000 | 9 |
| OpenML 299 | analcatdata_supreme | ANA | 2633 | 12 |
| | Mercedes_Benz_Greener_Manufacturing | MER | 2735 | 735 |
| | visualizing-soil | VIS | 5616 | 5 |
| | yprop_4_1 | YPR | 5775 | 82 |
| | OnlineNewsPopularity | ONL | 25768 | 73 |
| | black_friday | BLA | 50000 | 23 |
| | SGEMM_GPU_kernel_performance | SGE | 50000 | 15 |
| | particulate-matter-ukair-2017 | PAR | 50000 | 26 |
| OpenML 269 | tecator | TEC | 156 | 124 |
| | boston | BOS | 328 | 22 |
| | MIP-2016-regression | MIP | 708 | 111 |
| | socmob | SOC | 751 | 39 |
| | Moneyball | MON | 800 | 18 |
| | house_price_nominal | HO2 | 711 | 234 |
| | us_crime | US_ | 1295 | 101 |
| | quake | QUA | 1415 | 3 |
| | space_ga | SPA | 2019 | 6 |
| | abalone | ABA | 2715 | 10 |
| | SAT11-HAND-runtime-regression | SAT | 2886 | 118 |
| | Santander_transaction_value | SAN | 2898 | 3611 |
| | colleges | COL | 4351 | 34 |
| | topo_2_1 | TOP | 5775 | 252 |
| | Allstate_Claims_Severity | ALL | 50000 | 477 |
| | Yolanda | YOL | 50000 | 100 |
| | Buzzinsocialmedia_Twitter | BUZ | 50000 | 70 |
| | Airlines_DepDelay_10M | AI2 | 50000 | 5 |
\ No newline at end of file diff --git a/alargescalestudyofprobabilisticcalibrationinneuralnetworkregression/images.zip b/alargescalestudyofprobabilisticcalibrationinneuralnetworkregression/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..87ceb72ecc6d486c399cebc1ef71391193fafe31 --- /dev/null +++ b/alargescalestudyofprobabilisticcalibrationinneuralnetworkregression/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:9f3fa912b2d20e6e5ced521ccaf83ba6ceaa56bfe76e723031120fec4b3c1c81 +size 2141256 diff --git a/alargescalestudyofprobabilisticcalibrationinneuralnetworkregression/layout.json b/alargescalestudyofprobabilisticcalibrationinneuralnetworkregression/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..e39759ab059ecce5a279294f5220483f64ab2859 --- /dev/null +++ b/alargescalestudyofprobabilisticcalibrationinneuralnetworkregression/layout.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:bc0a010702e6820f6acdabfa339c17dbf4ad1d229fbc2fccd96933d6ea36937d +size 810733 diff --git a/alawofrobustnessbeyondisoperimetry/ab43e12f-b7ba-48bb-b757-f044bd5722ee_content_list.json b/alawofrobustnessbeyondisoperimetry/ab43e12f-b7ba-48bb-b757-f044bd5722ee_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..900bf42dd045c389b3ba054fa2eb30a0dfab8bae --- /dev/null +++ b/alawofrobustnessbeyondisoperimetry/ab43e12f-b7ba-48bb-b757-f044bd5722ee_content_list.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:f57e3bb16e733f3c5449bb46e3fe3da606e4345c674b57dde9af6ef2d759b79f +size 147091 diff --git a/alawofrobustnessbeyondisoperimetry/ab43e12f-b7ba-48bb-b757-f044bd5722ee_model.json b/alawofrobustnessbeyondisoperimetry/ab43e12f-b7ba-48bb-b757-f044bd5722ee_model.json new file mode 100644 index 0000000000000000000000000000000000000000..43190659bd49d7fe37a0cead13f7acab5161d0d7 --- /dev/null +++ 
b/alawofrobustnessbeyondisoperimetry/ab43e12f-b7ba-48bb-b757-f044bd5722ee_model.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:53029c883512bdf84c29bbda7ee1ba7c734e25057ca8caa5def16dd54e4bd448 +size 171103 diff --git a/alawofrobustnessbeyondisoperimetry/ab43e12f-b7ba-48bb-b757-f044bd5722ee_origin.pdf b/alawofrobustnessbeyondisoperimetry/ab43e12f-b7ba-48bb-b757-f044bd5722ee_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..83d33a837adb0323c5e874eef1019b65aea64d1f --- /dev/null +++ b/alawofrobustnessbeyondisoperimetry/ab43e12f-b7ba-48bb-b757-f044bd5722ee_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:48a1e05c2b96e72e43f0389b3daea7dad06dbd0487c968429ed8c074eb40bf7e +size 388837 diff --git a/alawofrobustnessbeyondisoperimetry/full.md b/alawofrobustnessbeyondisoperimetry/full.md new file mode 100644 index 0000000000000000000000000000000000000000..320700936cb381ea72444dd54bb99abc0d510a83 --- /dev/null +++ b/alawofrobustnessbeyondisoperimetry/full.md @@ -0,0 +1,762 @@ +# A Law of Robustness beyond Isoperimetry + +Yihan Wu $^{1}$ Heng Huang $^{1}$ Hongyang Zhang $^{2}$ + +# Abstract + +We study the robust interpolation problem of arbitrary data distributions supported on a bounded space and propose a two-fold law of robustness. Robust interpolation refers to the problem of interpolating $n$ noisy training data points in $\mathbb{R}^d$ by a Lipschitz function. Although this problem has been well understood when the samples are drawn from an isoperimetry distribution, much remains unknown concerning its performance under generic or even the worst-case distributions. We prove a Lipschitzness lower bound $\Omega(\sqrt{n/p})$ of the interpolating neural network with $p$ parameters on arbitrary data distributions. With this result, we validate the law of robustness conjecture in prior work by Bubeck, Li, and Nagaraj on two-layer neural networks with polynomial weights. 
We then extend our result to arbitrary interpolating approximators and prove a Lipschitzness lower bound $\Omega(n^{1/d})$ for robust interpolation. Our results demonstrate a two-fold law of robustness: i) we show the potential benefit of overparametrization for smooth data interpolation when $n = \mathrm{poly}(d)$, and ii) we disprove the potential existence of an $\mathcal{O}(1)$ -Lipschitz robust interpolating function when $n = \exp(\omega(d))$.
+
+# 1. Introduction
+
+Robustness has been a central research topic in machine learning (Szegedy et al., 2014; Goodfellow et al., 2014), statistics (Huber, 2004), operations research (Ben-Tal et al., 2009), and many other domains. In machine learning, the study of adversarial robustness has led to significant advances in defending against adversarial attacks, where test inputs with slight modifications can lead to problematic prediction results.
+
+$^{1}$ Department of Computer Science, University of Maryland at College Park $^{2}$ School of Computer Science, University of Waterloo. Correspondence to: Yihan Wu, Heng Huang, Hongyang Zhang.
+
+Proceedings of the $40^{th}$ International Conference on Machine Learning, Honolulu, Hawaii, USA. PMLR 202, 2023. Copyright 2023 by the author(s).
+
+In statistics and operations research, robustness is a desirable property for optimization problems under uncertainty, which can be represented as deterministic or random variability in the values of optimization parameters. This is known as robust statistics or robust optimization. In both cases, the problem can be stated as follows: given a deterministic labeling function $g: \mathbb{R}^d \to [-1, 1]$, (approximately) interpolate the training data $\{(x_i, g(x_i))\}_{i=1}^n$ or its noisy counterpart by a function with a small Lipschitz constant. The focus of this paper is on the latter setting, known as the robust interpolation problem (Bubeck & Sellke, 2023).
That is, given noisy training data $\{(x_i, g(x_i) + z_i)\}_{i=1}^n$ of size $n$, where $x_1, \dots, x_n$ are restricted to a unit ball and $z_1, \dots, z_n$ have variance $> 0$, how many network parameters and training samples are needed for robust interpolation, provided that the functions in the class can (approximately) interpolate the noisy training data with Lipschitz constant $L$?
+
+There are several reasons to study the noisy setting (Bubeck & Sellke, 2023): 1) Real-world data are noisy. For example, it has been shown that around $3.3\%$ of the data in the most-cited datasets was inaccurate or mislabeled (Northcutt et al., 2021). 2) This noise assumption is necessary from a theoretical point of view, as otherwise there could exist a Lipschitz function which perfectly fits the training data for any large $n$. Despite progress on the robust interpolation problem (Bubeck & Sellke, 2023; Bubeck et al., 2021), many fundamental questions remain unresolved. In modern learning theory, it was commonly believed that 1) big data (Schmidt et al., 2018), 2) low dimensionality of the input (Blum et al., 2020; Yang et al., 2020a; Kumar et al., 2020), and 3) overparametrization (Bubeck & Sellke, 2023; Bubeck et al., 2021) improve robustness. We view the robustness problem from the perspective of Lipschitzness and ask the following question:
+
+Are big data and large models a remedy for robustness?
+
+In fact, there is significant empirical evidence indicating that enlarging the model size (overparametrization) improves robustness when $n$ is moderately large (e.g., when $n = \mathrm{poly}(d)$; see Madry et al., 2017; Schmidt et al., 2018). Our work verifies the benefit of overparametrization for fitting a neural network with $p$ parameters below the noise level by proving that such neural networks must have a Lipschitzness lower bound of $\Omega(\sqrt{n/p})$. On the other hand, big
We show that for any approximator, no matter how many parameters it contains, its Lipschitz-ness is of order $\Omega(n^{1/d})$ . In particular, our result disproves the existence of learning an $\mathcal{O}(1)$ -Lipschitz function with $n = \exp(\omega(d))$ . Besides, by showing that for any learning algorithm, there exists a joint data distribution such that one needs at least $n = \exp(\Omega(d))$ samples to learn an $\mathcal{O}(1)$ -Lipschitz function with good population error, we demonstrate that big data are also necessary for robust interpolation in some special cases. + +The robust interpolation problem becomes more challenging when no assumptions are made on the distribution of covariates. Due to the well-separated nature of data, most positive results for obtaining good Lipschitzness lower bound have focused on the isoperimetry distribution (Bubeck & Sellke, 2023). A probability measure $\mu$ on $\mathbb{R}^d$ satisfies $c$ -isoperimetry if for any bounded $L$ -Lipschitz $f: \mathbb{R}^d \to \mathbb{R}$ , and any $t \geq 0$ , + +$$ +\Pr (| f (x) - \mathbb {E} [ f (x) ] | \geq t) \leq 2 \exp (- \frac {d t ^ {2}}{2 c L ^ {2}}). +$$ + +Isoperimetry states that the output of any Lipschitz function is $\mathcal{O}(1)$ -subgaussian under suitable rescaling. Special cases of isoperimetry include high-dimensional Gaussians $\mathcal{N}(0, \frac{I_d}{d})$ , uniform distributions on spheres and hypercubes of diameter 1. However, real-world data might not follow the isoperimetry assumption. In contrast, our results of Theorem 3.4 go beyond isoperimetry and providing a lower bound of robustness for functions with $p$ parameters under arbitrary distributions in the bounded space. Our results of Theorem 3.9 go even further by providing a universal lower bound of robustness for any model class, including the class of neural networks with arbitrary architecture. + +Notations. 
We will use $\mathcal{X}$ to represent the instance space, $\mathcal{F} = \{f:\mathcal{X}\to [-1,1]\}$ to represent the hypothesis/function space, $x\in \mathcal{X}$ to represent the sample instance, $y\in [-1,1]$ to represent the target, and $z$ to represent the target noise. For errors, denote by $l(f(x),y)$ the loss function of $f$ on instance $x$ and target $y$; in our work we use the mean squared error as in Bubeck & Sellke (2023), i.e., $l(f(x),y) = (f(x) - y)^2$. Let $\mathcal{L}_D(f)\coloneqq \mathbb{E}_{(x,y)\sim \mathcal{D}}[l(f(x),y)]$ be the population error, and let $\mathcal{L}_S(f)\coloneqq \frac{1}{|S|}\sum_{(x,y)\in S}l(f(x),y)$ be the empirical error. Denote by $f:\mathcal{X}\rightarrow [-1,1]$ the prediction function which maps an instance to its predicted target. It can be parameterized, e.g., by deep neural networks. For norms, we denote by $\| x\|$ a generic norm. Examples of norms include $\| x\|_{\infty}$, the infinity norm, and $\| x\|_2$, the $\ell_2$ norm. We will frequently use $(\mathcal{X},\| \cdot \|)$ to represent the normed linear space of $\mathcal{X}$ with norm $\| \cdot \|$. Define $\mathrm{diam}(\mathcal{X})$ as the diameter of $\mathcal{X}$ w.r.t. the norm $\| \cdot \|$. For a given score function $f$, we denote by $\mathrm{Lip}_{\| \cdot \|}(f)$ (or sometimes $\mathrm{Lip}(f)$ for simplicity) the Lipschitz constant of $f$ w.r.t. the norm $\| \cdot \|$. Let $\lceil \cdot \rceil$ represent the ceiling operator. We will use $\mathcal{O}(\cdot), \Theta (\cdot), o(\cdot)$, and $\Omega (\cdot)$ to express sample complexity and Lipschitzness. + +# 1.1. Our results + +Our law of robustness is two-fold: a) overparametrization can potentially help robust interpolation when $n = \mathrm{poly}(d)$ (Section 3.1), and b) there exists no robust interpolation when $n = \exp(\omega(d))$ (Section 3.2).
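Before turning to the results, the notion of "fitting below the noise level" in the notation above can be grounded numerically. The following is a minimal sketch; the setup (the function `g`, the noise law, and all variable names) is illustrative and not from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative noisy regression setup: covariates in the unit l2 ball,
# targets y = g(x) + z with bounded noise so that y stays in [-1, 1].
d, n = 10, 5000
x = rng.normal(size=(n, d))
x /= np.maximum(np.linalg.norm(x, axis=1, keepdims=True), 1.0)

def g(x):
    # Hypothetical ground-truth function, bounded in [-0.4, 0.4].
    return 0.4 * np.tanh(x[:, 0])

z = rng.uniform(-0.6, 0.6, size=n)   # label noise, Var[z] = 0.6^2 / 3 = 0.12
sigma2 = 0.6**2 / 3
y = g(x) + z

def empirical_error(f, x, y):
    """L_S(f): empirical error under the squared loss."""
    return np.mean((f(x) - y) ** 2)

# Even the ground truth g cannot beat the noise level sigma^2 = 0.12;
# "fitting below the noise level" means L_S(f) <= sigma^2 - eps.
print(empirical_error(g, x, y))      # concentrates around sigma2 = 0.12
eps = 0.05
fits_below_noise = empirical_error(g, x, y) <= sigma2 - eps
```

A model that achieves $\mathcal{L}_S(f) \leq \sigma^2 - \epsilon$ must therefore fit part of the noise itself, which is exactly the regime the theorems below constrain.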
+ +Lipschitzness (or local Lipschitzness) is an important characterization of adversarial robustness for learning algorithms (Yang et al., 2020b; Zhang et al., 2019; Wu et al., 2022b;c). The popular randomized smoothing approaches (Cohen et al., 2019; Li et al., 2019; Wu et al., 2022d) can provide robustness guarantees through Lipschitzness but suffer from the curse of dimensionality (Wu et al., 2021). Thus, studying Lipschitzness is crucial for understanding robustness. For a given score function $f$, we denote by $\mathrm{Lip}_{\| \cdot \|}(f)$ the Lipschitz constant of $f$ w.r.t. the norm $\| \cdot \|$. That is, for any $x_{1}, x_{2}$ in the input space, $|f(x_{1}) - f(x_{2})| \leq \mathrm{Lip}_{\| \cdot \|}(f)\| x_{1} - x_{2}\|$. Our results show lower bounds on the Lipschitzness of learned functions when the training error is slightly smaller than the noise level (i.e., in the case of overfitting), but without assumptions on the distribution of covariates except that they are restricted to the bounded space $\mathcal{X} := \{x : \| x \| \leq 1\}$. We are interested in the assumption of bounded space because: 1) most applications of machine learning focus on the case where the data lie in a bounded space. For example, images and videos are considered to be in $[-1,1]^d$. 2) The discussion of Lipschitzness is closely related to how large the input space is. For example, for images restricted to $[-1,1]^d$, special attention is paid to the $\ell_{\infty}$ robust radii of 0.031 and 0.062 (Zhang et al., 2019; Madry et al., 2017), which correspond to a (local) Lipschitz constant of $\mathcal{O}(1)$ for the classifier. + +The universal law of robustness by Bubeck & Sellke (2023) provides an $\Omega(\sqrt{nd / p})$ Lipschitzness lower bound for interpolating functions when the underlying distribution satisfies isoperimetry (see Theorem 2.1).
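The definition of the Lipschitz constant also suggests a simple empirical check: any sampled pair gives a lower bound $|f(x_1) - f(x_2)| / \|x_1 - x_2\|$ on $\mathrm{Lip}_{\|\cdot\|}(f)$. A minimal sketch (the test function and all names are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)

def lipschitz_lower_bound(f, d, n_pairs=100_000, ord=2):
    """Empirical lower bound on Lip(f): the largest ratio
    |f(x1) - f(x2)| / ||x1 - x2|| over randomly sampled pairs."""
    x1 = rng.uniform(-1, 1, size=(n_pairs, d))
    x2 = rng.uniform(-1, 1, size=(n_pairs, d))
    num = np.abs(f(x1) - f(x2))
    den = np.linalg.norm(x1 - x2, ord=ord, axis=1)
    return float(np.max(num / den))

# For f(x) = sin(3 x_0) on [-1, 1]^2, the exact l2 Lipschitz constant
# is 3; sampled ratios can only approach it from below.
f = lambda x: np.sin(3 * x[:, 0])
lb = lipschitz_lower_bound(f, d=2)
print(lb)   # a value in (0, 3], approaching 3 with more pairs
```

Such a sampled maximum is always a valid lower bound, never an upper bound, which mirrors the direction of the guarantees in this paper.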
Our first result goes beyond the isoperimetry assumption, and provides an $\Omega(\sqrt{n / p})$ Lipschitzness lower bound for the interpolating functions under arbitrary distributions. We note that the $\sqrt{d}$ difference between the two Lipschitzness lower bounds is due to the special property of the isoperimetry assumption (see Remark 3.5). Our result predicts the potential existence of an $\mathcal{O}(1)$-Lipschitz function that fits the data below the noise level when $p = \Omega(n)$. The following informal theorem illustrates the results (the detailed theorems are introduced in later sections): + +Theorem A (informal version of Theorem 3.4). Let $\mathcal{F}$ be any class of functions from $\mathbb{R}^d \to [-1, 1]$ and let $\{(x_i, y_i)\}_{i=1}^n$ be i.i.d. input-output pairs in $\{x : \|x\| \leq 1\} \times [-1, 1]$ for any given norm $\|\cdot\|$. Assume that: + +1. The expected conditional variance of the output (i.e., the "noise level") is strictly positive, denoted by $\sigma^2 \coloneqq \mathbb{E}[\mathrm{Var}[y|x]] > 0$. +2. $\mathcal{F}$ admits a $J$-Lipschitz parametrization by $p$ real parameters, each of size at most poly$(n,d)$. + +Then, with high probability over the sampling of the data, one has simultaneously for all $f \in \mathcal{F}$: + +$$ +\frac {1}{n} \sum_ {i = 1} ^ {n} (y _ {i} - f (x _ {i})) ^ {2} \leq \sigma^ {2} - \epsilon \Rightarrow \mathrm {L i p} _ {\| \cdot \|} (f) \geq \Omega \bigg (\epsilon \sqrt {\frac {n}{p}} \bigg). +$$ + +Remark 1.1. Our theorem takes a further step toward proving Conjecture 1 of Bubeck et al. (2021), where it is conjectured that for generic data sets, with high probability, any $f$ in the collection of two-layer networks with $p$ parameters fitting the data must also satisfy $\mathrm{Lip}_{\| \cdot \|}(f) \geq \Omega(\sqrt{n/p})$.
We validate the conjecture under the polynomial weights assumption alone, whereas Bubeck & Sellke (2023) validate Conjecture 1 under both the polynomial weights assumption and the isoperimetry assumption. + +Remark 1.2 (Strong overparametrization is not necessary for robust interpolation). The Lipschitzness lower bound of Bubeck & Sellke (2023) suggests that strong overparametrization, i.e., $p = \Omega(nd)$, is required for robust interpolation under the isoperimetry assumption. Our theorem shows that strong overparametrization may not be a necessary condition for robust interpolation on a general distribution. Moderate overparametrization with $p = \Omega(n)$ may also be enough for robust interpolation. Our results are consistent with the empirical observations that CIFAR10 (50000 images) can be robustly fitted by a model with $p = 10^6$ parameters, and ImageNet ($10^7$ images) can be robustly fitted by a model with $p = 10^7 \sim 10^8$ parameters. + +Big data hurts robust interpolation. Under the assumptions of an isoperimetry distribution and $J$-Lipschitz parameterized functions, the universal law of robustness by Bubeck & Sellke (2023) predicts the potential existence of an $\mathcal{O}(1)$-Lipschitz function that fits the data below the noise level when $p = \Omega(nd)$. Our result goes beyond these two assumptions and disproves the existence of such $\mathcal{O}(1)$-Lipschitz functions in the big data scenario when $n = \exp(\omega(d))$ for arbitrary distributions: + +Theorem B (informal version of Theorem 3.9). Let $\mathcal{F}$ be any class of functions from $\mathbb{R}^d\to [-1,1]$ and let $\{(x_i,y_i)\}_{i = 1}^n$ be i.i.d. input-output pairs in $\{x:\| x\| \leq 1\} \times [-1,1]$ for any given norm $\| \cdot \|$. Assume that: + +1. The expected conditional variance of the output (i.e., the "noise level") is strictly positive, denoted by $\sigma^2 \coloneqq \mathbb{E}[\mathrm{Var}[y|x]] > 0$.
+ +Then, with high probability over the sampling of the data, one has simultaneously for all $f \in \mathcal{F}$: + +$$ +\frac {1}{n} \sum_ {i = 1} ^ {n} \left(y _ {i} - f \left(x _ {i}\right)\right) ^ {2} \leq \sigma^ {2} - \epsilon \Rightarrow \operatorname {L i p} _ {\| \cdot \|} (f) \geq \Omega \left(\epsilon n ^ {1 / d}\right). +$$ + +Difference between our results and Bubeck & Sellke (2023). Bubeck & Sellke (2023) proposed a universal law of robustness for a general class of functions (see Theorem 2.1). Our results, Theorem 3.4 and Theorem 3.9, share the same setting as Theorem 2.1 while making much weaker assumptions: 1) Neither Theorem 3.4 nor Theorem 3.9 requires an isoperimetry assumption on the input distribution. 2) Theorem 3.9 makes no assumption on the Lipschitzness or size of the model parametrization. Moreover, while Theorem 2.1 predicts the potential existence of an $\mathcal{O}(1)$-Lipschitz robust interpolating function when $p = \Omega(nd)$, Theorem 3.9 disproves this hypothesis in the big data scenario when $n = \exp(\omega(d))$ for arbitrary distributions in the bounded space. Besides, our bounds work for all $\ell_p$ norms ($p \geq 1$), while the bound in Bubeck & Sellke (2023) only covers the $\ell_2$ norm. + +Practical implications. Our analysis provides important implications for practical settings. When selecting a model for learning on a certain dataset, the number of parameters in the selected model should ideally be on the same (or a slightly larger) scale as the size of the dataset in order to obtain good robust performance. When the size of the dataset is too large compared to its dimension, in order to achieve good robustness, it may be beneficial to either reduce the size of the training data or scatter the data in a higher-dimensional space by padding special covariates.
This approach can help to mitigate the negative effects of the curse of big data and improve model robustness, particularly when dealing with large datasets in practical applications (Wu et al., 2022a; Sun et al., 2021; 2022). + +# 2. Related Work + +Robust interpolation problem. Bubeck et al. (2021) provided the first guarantee on the law of robustness for two-layer neural networks, which was later extended by Bubeck & Sellke (2023) to a universal law of robustness for a general class of functions under isoperimetry distributions. A probability measure $\mu$ on $\mathbb{R}^d$ satisfies $c$-isoperimetry if for any bounded $L$-Lipschitz $f: \mathbb{R}^d \to \mathbb{R}$, and any $t \geq 0$, $\operatorname*{Pr}(|f(x) - \mathbb{E}[f(x)]| \geq t) \leq 2\exp\left(-\frac{dt^2}{2cL^2}\right)$. + +Theorem 2.1 (Theorem 1 of Bubeck & Sellke (2023)). Let $\mathcal{F}$ be a class of functions from $\mathbb{R}^d\to [-1,1]$ and let $\{(x_i,y_i)\}_{i = 1}^n$ be i.i.d. input-output pairs in $\mathbb{R}^d\times [-1,1]$. Assume that: + +1. The expected conditional variance of the output (i.e., the "noise level") is strictly positive, denoted by $\sigma^2 \coloneqq \mathbb{E}[\mathrm{Var}[y|x]] > 0$. + +2. $\mathcal{F}$ admits a $J$-Lipschitz parametrization by $p$ real parameters, each of size at most poly$(n,d)$. +3. The distribution $\mu$ of the input $x_{i}$ satisfies isoperimetry (or a mixture thereof). + +Then, with high probability over the sampling of the data, one has simultaneously for all $f \in \mathcal{F}$: + +$$ +\frac {1}{n} \sum_ {i = 1} ^ {n} (y _ {i} - f (x _ {i})) ^ {2} \leq \sigma^ {2} - \epsilon \Rightarrow \mathrm {L i p} _ {\| \cdot \| _ {2}} (f) \geq \Omega \bigg (\epsilon \sqrt {\frac {n d}{p}} \bigg). +$$ + +Our work extends the result of Bubeck & Sellke (2023) by successively removing the third assumption (see Theorem 3.4) and the second assumption (see Theorem 3.9). + +Sample complexity of robust learning.
The sample complexity of robust learning for benign distributions and certain function classes has been extensively studied in recent years. In particular, Bhattacharjee et al. (2021) considered the sample complexity of robust linear classification on separated data. Yin et al. (2019) studied the adversarially robust generalization problem through the lens of Rademacher complexity. Cullina et al. (2018) extended the PAC-learning framework to account for the presence of an adversary. Montasser et al. (2019) showed that any hypothesis class with finite VC dimension is robustly PAC learnable with an improper learning rule. They also showed that the requirement of being improper is necessary. Schmidt et al. (2018) showed an $\Omega(\sqrt{d})$-factor gap between the standard and robust sample complexity for a mixture of Gaussian distributions in $\ell_{\infty}$ robustness, which was later extended to the case of $\ell_p$ robustness with a tight bound by Bhagoji et al. (2019); Dobriban et al. (2020); Dan et al. (2020). Different from the prior work, our work is the first to characterize the sample complexity of robust learning for arbitrary function classes and learning algorithms. + +# 3. A Two-fold Law of Robustness + +In this section, we present our main theoretical analysis, which contributes to our two-fold law of robustness. All missing proofs can be found in the appendix. + +Robust interpolation problem. We first introduce our problem settings. Given noisy training data $\{(x_i, y_i := g(x_i) + z_i)\}_{i=1}^n$ of size $n$ where $x_1, \ldots, x_n$ are the training samples, $g(x_1), \ldots, g(x_n)$ are the ground truth, and $z_1, \ldots, z_n$ have variance $\sigma^2 > 0$, we say a model $f$ robustly interpolates the training data (or fits the data below the noise level) if and only if + +$$ +\exists \epsilon > 0, \frac {1}{n} \sum_ {i = 1} ^ {n} (y _ {i} - f (x _ {i})) ^ {2} \leq \sigma^ {2} - \epsilon . +$$ + +Our two-fold law of robustness.
a) Overparametrization can potentially help robust interpolation when $n = \mathrm{poly}(d)$ (Section 3.1); b) There exists no robust interpolation when $n = \exp(\omega(d))$ (Section 3.2). + +# 3.1. A Lipschitz lower bound beyond the isoperimetry assumption. + +In this part, we show the first part of our two-fold law of robustness: overparametrization can potentially help robust interpolation when $n = \mathrm{poly}(d)$. Note that we say "potentially help" because overparametrization is only a necessary, not a sufficient, condition for robust interpolation. + +Motivation. We notice that the proof of Theorem 2.1 (Bubeck & Sellke, 2023) depends heavily on the definition of isoperimetry distributions, i.e., $\operatorname*{Pr}(|f(x) - \mathbb{E}[f(x)]| \geq t) \leq 2\exp(-\frac{dt^2}{2cL^2})$ for $L$-Lipschitz $f: \mathbb{R}^d \to \mathbb{R}$. This formula indicates the high-concentration property of isoperimetry distributions due to the $\exp(-d)$ dependency of $\operatorname*{Pr}(|f(x) - \mathbb{E}[f(x)]| \geq t)$. The $\exp(-d)$ dependency is also the reason that the Lipschitzness lower bound of Bubeck & Sellke (2023) is $\Omega(\sqrt{nd / p})$ instead of the $\Omega(\sqrt{n / p})$ lower bound we derive. + +Challenge. One may naturally come up with the idea of deriving a bound on $\operatorname*{Pr}(|f(x) - \mathbb{E}[f(x)]| \geq t)$ for arbitrary distributions to go beyond the isoperimetry distribution. However, the challenge is that unlike the regular concentration bound on $\operatorname*{Pr}(|x - \mathbb{E}[x]| \geq t)$, we are dealing with a more complicated case, where the random variable is $f(x)$ with an arbitrary $L$-Lipschitz $f$. To solve this problem, we apply Azuma's inequality below: + +Lemma 3.1 (Azuma's inequality (Azuma, 1967)). Suppose $\{X_{k} : k = 0,1,2,3,\ldots\}$ is a martingale and $|X_{k} - X_{k-1}| \leq c_{k}$ almost surely.
Then for all positive integers $N$ and $\epsilon > 0$, + +$$ +\Pr \left(\left| X _ {N} - X _ {0} \right| \geq \epsilon\right) \leq 2 \exp \left(- \frac {\epsilon^ {2}}{2 \sum_ {k = 1} ^ {N} c _ {k} ^ {2}}\right). +$$ + +Azuma's inequality gives a concentration bound for the values of martingales with bounded differences. With this lemma, we are able to derive the following concentration bound for arbitrary distributions on a bounded space. + +Lemma 3.2. Given an arbitrary probability measure $\mu$ on the bounded space $\mathcal{X} \subset \mathbb{R}^d$, for any $L$-Lipschitz $f: \mathbb{R}^d \to \mathbb{R}$, and any $t \geq 0$, + +$$ +\Pr (| f (x) - \mathbb {E} [ f (x) ] | \geq t) \leq 2 \exp (- \frac {t ^ {2}}{2 \operatorname {d i a m} (\mathcal {X}) ^ {2} L ^ {2}}). +$$ + +Compared with the $\exp \left(-\frac{dt^2}{2cL^2}\right)$ bound for isoperimetry distributions, our bound for arbitrary distributions differs only by a factor of $d$ in the numerator of the term inside the exponential. In order to achieve the same concentration bound as isoperimetry distributions, one needs $\mathrm{diam}(\mathcal{X}) = \Theta (1 / \sqrt{d})$, which means the inputs are located in a $\Theta(1/\sqrt{d})$-diameter space. As real-world datasets are usually supported on a $\Theta(1)$-diameter space, matching the isoperimetry bound for all distributions is empirically meaningless. + +With Lemma 3.2, we can start to calculate the Lipschitzness lower bound with the following lemma on finite function classes. + +Lemma 3.3. Let $\mathcal{F}$ be a finite class of $L$-Lipschitz functions from $\mathbb{R}^d \to [-1,1]$ and let $\{(x_i, y_i)\}_{i=1}^n$ be i.i.d. input-output pairs in $\{x : \|x\| \leq 1\} \times [-1,1]$ for any given norm $\|\cdot\|$.
Assume that the expected conditional variance of the output (i.e., the "noise level") is strictly positive, denoted by $\sigma^2 \coloneqq \mathbb{E}[\mathrm{Var}[y|x]] > 0$. Then we have + +$$ +\begin{array}{l} \Pr \left(\exists f \in \mathcal {F}: \frac {1}{n} \sum_ {i = 1} ^ {n} \left(y _ {i} - f \left(x _ {i}\right)\right) ^ {2} \leq \sigma^ {2} - \epsilon\right) \\ \leq 4 \exp \left(- \frac {n \epsilon^ {2}}{8 ^ {3}}\right) + | \mathcal {F} | \exp \left(- \frac {\epsilon^ {2} n}{2 ^ {1 0} L ^ {2}}\right). \\ \end{array} +$$ + +Lemma 3.3 shows the connection between the robust interpolation problem and the Lipschitzness of the underlying functions. Notice that the upper bound on the probability of $\exists f\in \mathcal{F}: \frac{1}{n}\sum_{i = 1}^{n}(y_i - f(x_i))^2\leq \sigma^2 -\epsilon$ vanishes as $L$ decreases, which indicates that we need a large enough $L$ to make sure that there exists an $f$ satisfying the condition of the robust interpolation problem. With this intuition, we can calculate the following Lipschitzness lower bound for the robust interpolation problem without the isoperimetry assumption. + +Theorem 3.4. Let $\mathcal{F}$ be any class of functions from $\mathbb{R}^d \to [-1,1]$ and let $\{(x_i, y_i)\}_{i=1}^n$ be i.i.d. input-output pairs in $\{x : \|x\| \leq 1\} \times [-1,1]$ for any given norm $\|\cdot\|$. Assume that: + +1. The expected conditional variance of the output (i.e., the "noise level") is strictly positive, denoted by $\sigma^2 \coloneqq \mathbb{E}[\mathrm{Var}[y|x]] > 0$. +2. $J$-Lipschitz parametrization: $\mathcal{F} = \{f_w, w \in \mathcal{W}\}$ with $\mathcal{W} \subset \mathbb{R}^p$, $\mathrm{diam}(\mathcal{W}) \leq W$ and for any $w_1, w_2 \in \mathcal{W}$, + +$$ +\left| \left| f _ {w _ {1}} - f _ {w _ {2}} \right| \right| _ {\mathcal {F}} \leq J \left| \left| w _ {1} - w _ {2} \right| \right|.
+$$ + +Then, with probability at least $1 - \delta$, one has simultaneously for all $f \in \mathcal{F}$: + +$$ +\begin{array}{l} \frac {1}{n} \sum_ {i = 1} ^ {n} \left(y _ {i} - f \left(x _ {i}\right)\right) ^ {2} \leq \sigma^ {2} - \epsilon \Rightarrow \\ \operatorname {L i p} _ {\| \cdot \|} (f) \geq \frac {\epsilon}{3 2} \sqrt {\frac {n}{p \ln (3 6 W J \epsilon^ {- 1}) + \ln (2 / \delta)}}. \\ \end{array} +$$ + +The crucial part of the proof is to find a finite $\epsilon / 6J$-covering of $\mathcal{F}$ using the $J$-Lipschitz parametrization assumption. Then we can apply Lemma 3.3 to this finite covering set and get the Lipschitzness lower bound. We will show in the next section that without the $J$-Lipschitz parametrization assumption, one can hardly use a similar proof technique to derive the Lipschitzness lower bound. + +Bubeck & Sellke (2023) showed that under neural network settings, $J$ is always of polynomial order in the diameter of the weight space. Thus, if the weights are only polynomially large w.r.t. $d$ and $n$, $\ln(36WJ\epsilon^{-1})$ does not affect the Lipschitzness bound too much and we may neglect it in the asymptotic approximation. Thus, we have a Lipschitzness lower bound of order $\Omega(\epsilon \sqrt{n/p})$ for the robust interpolation problem. Our theorem validates the first part of our law of robustness, i.e., the potential existence of robust interpolating functions under the overparametrization scenario when $n = \mathrm{poly}(d)$ (see Remark 3.10). + +Tightness of our bound. When $n = \mathrm{poly}(d)$, Theorem 4 of Bubeck et al. (2021) has already demonstrated the existence of an at most $\mathcal{O}(\sqrt{n / p})$-Lipschitz two-layer network which fits generic data below the noise level. Thus, our Lipschitzness lower bound is tight. + +Remark 3.5 (Difference in the $\sqrt{d}$-dependency between Theorems 2.1 and 3.4).
Compared to the $\Omega(\epsilon \sqrt{nd / p})$ Lipschitzness lower bound in Theorem 2.1, our bound does not depend on the dimension $d$. This difference, as discussed in Lemma 3.2, is due to the isoperimetry assumption. In Bubeck et al. (2021), it is also shown that the tight Lipschitzness lower bound for two-layer networks is of order $\Omega(\epsilon \sqrt{n / p})$, which is consistent with our results. + +# 3.2. A Lipschitz lower bound beyond the $J$ -Lipschitz parametrization assumption. + +In this part, we show the second part of our two-fold law of robustness. We demonstrate the intriguing observation that big data hurts robust interpolation. Our analysis leads to a universal lower bound on the Lipschitzness regarding the robust interpolation problem, which goes beyond the isoperimetry and $J$-Lipschitz parametrization assumptions. Our analysis is based on the relation between the Rademacher complexity and the generalization gap between the population error $\mathcal{L}_{\mathcal{D}}(f)$ and the training error $\mathcal{L}_S(f)$. + +Motivation. The $J$-Lipschitz parametrization assumption provides us with a simple way to find a covering of the function space $\mathcal{F}$. Although the Lipschitzness lower bound in Bubeck & Sellke (2023) has only logarithmic dependency on $J$, it may still affect the Lipschitzness lower bound when the weights of neural networks are exponentially large w.r.t. $d$, or the number of layers of neural networks is polynomial w.r.t. $d$. Thus, we seek to derive a Lipschitzness lower bound beyond the $J$-Lipschitz parametrization assumption. + +Challenge. Without the $J$-Lipschitz parametrization
In this case, calculating Lipschitz-ness lower bound with Lemma 3.6 and Lemma 3.3 requires one to solve an inequality like $L^{-d}\ln L + L^{-2}\geq C$ , which obviously has no closed-form solution when $d\geq 3$ . Thus, we need other techniques to deal with this case. Recall the objective of robust interpolation problem is $\frac{1}{n}\sum_{i = 1}^{n}(y_i - f(x_i))^2\leq \sigma^2 -\epsilon$ , one can immediately find that the left hand side formula is the train error with mean squared loss $\mathcal{L}_S(f)$ . Under the label noise settings, we have $\mathcal{L}_{\mathcal{D}}(f) = \mathbb{E}_{\mathcal{D}}[(f(x) - y)^2 ]\geq \mathbb{E}_x[\mathrm{Var}(y|x)] = \sigma^2$ which yields $\frac{1}{n}\sum_{i = 1}^{n}(y_i - f(x_i))^2\leq \sigma^2 -\epsilon \Rightarrow \mathcal{L}_S(f)\leq$ $\mathcal{L}_{\mathcal{D}}(f) - \epsilon$ . Therefore, if one can derive + +$$ +\mathcal {L} _ {S} (f) \leq \mathcal {L} _ {\mathcal {D}} (f) - \epsilon \Rightarrow \mathrm {L i p} _ {\| \cdot \|} (f) \geq \Omega (\epsilon n ^ {1 / d}), +$$ + +a natural corollary is that + +$$ +\begin{array}{l} \frac {1}{n} \sum_ {i = 1} ^ {n} \left(y _ {i} - f \left(x _ {i}\right)\right) ^ {2} \leq \sigma^ {2} - \epsilon \Rightarrow \mathcal {L} _ {S} (f) \leq \mathcal {L} _ {\mathcal {D}} (f) - \epsilon \\ \Rightarrow \operatorname {L i p} _ {\| \cdot \|} (f) \geq \Omega (\epsilon n ^ {1 / d}). \\ \end{array} +$$ + +In this way, we successfully convert the robust interpolation problem to a generalization problem between the empirical error and population error under the mean squared loss, which can be solved by the statistical learning techniques, e.g., VC dimension and Rademacher complexity. We focus on the Rademacher complexity in this part. + +Rademacher complexity. We start with the definition of Rademacher complexity, which measures the richness of a function class. 
For a set $\mathcal{A} \subset \mathbb{R}^n$, the Rademacher complexity is defined as + +$$ +R (\mathcal {A}) := \frac {1}{n} \mathbb {E} _ {\sigma_ {1}, \dots , \sigma_ {n} \in \{- 1, 1 \}} \left[ \sup _ {\mathbf {a} \in \mathcal {A}} \sum_ {i = 1} ^ {n} \sigma_ {i} a _ {i} \right]. +$$ + +Given a loss function $l$, a hypothesis class $\mathcal{F}$, and a training set $S = \{(x_1, y_1), \ldots, (x_n, y_n)\}$, denote by $l \circ \mathcal{F} := \{l(f(\cdot), \cdot) : f \in \mathcal{F}\}$ and $l \circ \mathcal{F} \circ S := \{(l(f(x_1), y_1), \ldots, l(f(x_n), y_n)) : f \in \mathcal{F}\}$. The Rademacher complexity of the set $l \circ \mathcal{F} \circ S$ is given by + +$$ +R (l \circ \mathcal {F} \circ S) := \frac {1}{n} \mathbb {E} _ {\sigma_ {1}, \dots , \sigma_ {n} \in \{- 1, 1 \}} \left[ \sup _ {f \in \mathcal {F}} \sum_ {i = 1} ^ {n} \sigma_ {i} l (f (x _ {i}), y _ {i}) \right]. +$$ + +For every function $f \in \mathcal{F}$, the generalization gap between $\mathcal{L}_{\mathcal{D}}(f)$ and $\mathcal{L}_S(f)$ is bounded by the Rademacher complexity of the function space $l \circ \mathcal{F} \circ S$. More formally, assume that $\forall f \in \mathcal{F}, \forall x \in \mathcal{X}, |l(f(x), y)| \leq a$. Then with probability at least $1 - \delta$, for all $f \in \mathcal{F}$, + +$$ +\mathcal {L} _ {\mathcal {D}} (f) - \mathcal {L} _ {S} (f) \leq 2 \mathbb {E} _ {S \in \mathcal {D} ^ {n}} [ R (l \circ \mathcal {F} \circ S) ] + a \sqrt {\frac {2 \ln (2 / \delta)}{n}}. \tag {1} +$$ + +From Equation 1, we can see that given a lower bound $\mathcal{L}_{\mathcal{D}}(f) - \mathcal{L}_S(f)\geq \epsilon$ on the generalization gap, one immediately has + +$$ +\mathbb {E} _ {S \in \mathcal {D} ^ {n}} [ R (l \circ \mathcal {F} \circ S) ] \geq \frac {\epsilon}{2} - \frac {a}{2} \sqrt {\frac {2 \ln (2 / \delta)}{n}}.
+$$ + +Therefore, if we can find the relation between the Rademacher complexity of $l \circ \mathcal{F}$ and the Lipschitzness of the functions in the class $\mathcal{F}$, we are able to derive a constraint on the Lipschitz constant for $\mathcal{F}$. The contraction lemma for Rademacher complexity (Lemma 26.9 of Shalev-Shwartz & Ben-David (2014)) states that for a given space $A$ and an $L$-Lipschitz function $h$ on $A$, we have $R(h \circ A) \leq L \cdot R(A)$. Thus, if the error function $l(f(x), y)$ is $C$-Lipschitz w.r.t. $f \in \mathcal{F}$ for arbitrary $y \in [-1, 1]$, + +$$ +R (l \circ \mathcal {F} \circ S) \leq C \cdot R (\mathcal {F} \circ S). \tag {2} +$$ + +It has been proved (von Luxburg & Bousquet, 2004) that the Rademacher complexity of a set is directly related to the $\epsilon$-covering number of the set. So the first step in calculating the Rademacher complexity of $\mathcal{F} \circ S$ is to find the covering number of this function space. + +Given a space $(\mathcal{X},||\cdot ||)$ and a covering radius $\eta$, let $N(\mathcal{X},\eta ,||\cdot ||)$, a.k.a. the $\eta$-covering number, be the minimum number of $\eta$-balls needed to cover $\mathcal{X}$. For a given function space $\mathcal{F}$, define + +$$ +| | f - f ^ {\prime} | | _ {\mathcal {F}} = \sup _ {x \in \mathcal {X}} | f (x) - f ^ {\prime} (x) |. +$$ + +We have the following upper bound on the covering number of $\mathcal{F}$: + +Lemma 3.6 (Covering number of the $L$-Lipschitz function space). For a bounded and connected space $(\mathcal{X},||\cdot ||)$, let $B_{L}$ be the set of functions $f$ such that $\mathrm{Lip}_{||\cdot ||}(f)\leq L$. If $\mathcal{X}$ is connected and centered, we have for every $\epsilon >0$, + +$$ +N (B _ {L}, \epsilon , | | \cdot | | _ {\mathcal {F}}) \leq \left\lceil \frac {2 L \cdot \operatorname {d i a m} (\mathcal {X})}{\epsilon} \right\rceil 2 ^ {N (\mathcal {X}, \frac {\epsilon}{2 L}, | | \cdot | |)}.
+$$ + +Dudley's integral provides the relation between the covering number of a function class and its Rademacher complexity. With Dudley's integral, von Luxburg & Bousquet (2004) showed that for every $\epsilon > 0$, + +$$ +\begin{array}{l} \mathbb {E} _ {S \in \mathcal {D} ^ {n}} \left[ R \left(B _ {L} \circ S\right) \right] \leq \\ 2 \epsilon + \frac {4 \sqrt {2}}{\sqrt {n}} \int_ {\epsilon / 4} ^ {\operatorname {d i a m} \left(B _ {L}\right)} \sqrt {\ln \left(N \left(B _ {L} , u , | | \cdot | | _ {\mathcal {F}}\right)\right)} d u. \tag {3} \\ \end{array} +$$ + +Notice that when $u > 2L\cdot \mathrm{diam}(\mathcal{X})$, the $u$-covering number is 1 and $\ln (N(B_L,u,||\cdot ||_{\mathcal{F}})) = 0$. Combining this with Lemma 3.6 yields the following lemma: + +Lemma 3.7. Let $(\mathcal{X},||\cdot ||)$ be a bounded and connected space and $B_{L}$ be the set of all functions $f\in \mathcal{F}$ with $\mathrm{Lip}_{||\cdot ||}(f)\leq L$. Let $n = |S|$. If $\mathcal{X}$ is connected and centered, for any $\epsilon >0$, + +$$ +\mathbb {E} _ {S \in \mathcal {D} ^ {n}} [ R (B _ {L} \circ S) ] \leq 2 \epsilon + \frac {4 \sqrt {2}}{\sqrt {n}} \times +$$ + +$$ +\int_ {\epsilon / 4} ^ {2 L \cdot \operatorname {d i a m} (\mathcal {X})} \sqrt {N \left(\mathcal {X} , \frac {u}{2 L} , | | \cdot | | \right) \ln 2 + \ln \left\lceil \frac {2 L \cdot \operatorname {d i a m} (\mathcal {X})}{u} \right\rceil} \, d u. +$$ + +As all the variables in Lemma 3.7 are known, by calculating the integral, one can derive an upper bound on the Rademacher complexity $\mathbb{E}_{S\in \mathcal{D}^n}[R(B_L\circ S)]$: + +Lemma 3.8. If $\mathrm{diam}(\mathcal{X}) = 2$ w.r.t. $||\cdot ||$ and $d\geq 3$, we have + +$$ +\mathbb {E} _ {S \in \mathcal {D} ^ {n}} [ R (B _ {L} \circ S) ] \leq +$$ + +$$ +9 6 \frac {L}{n ^ {1 / d}} + \frac {9 6 \sqrt {2 \ln 2}}{d - 2} \frac {L}{n ^ {1 / d}} + \frac {1 6 \sqrt {2} L}{\sqrt {n}} \sqrt {\ln \left(\frac {1}{3} n ^ {1 / d} + 1\right)}.
+$$ + +According to Equation 1 in (Mendelson & Vershynin, 2003), when $\frac{u}{2L} \leq \mathrm{diam}(\mathcal{X})$, $N(\mathcal{X}, \frac{u}{2L}, \| \cdot \|) \leq (\frac{6L \cdot \mathrm{diam}(\mathcal{X})}{u})^d$ if $\mathcal{X} \subseteq \mathbb{R}^d$. Then the integrand will be $\sqrt{\left(\frac{12L}{u}\right)^d \ln 2 + \ln \left\lceil\frac{2L \cdot \mathrm{diam}(\mathcal{X})}{u}\right\rceil}$, which is no more than $\sqrt{\left(\frac{12L}{u}\right)^d \ln 2} + \sqrt{\ln\left(\frac{4L}{u} + 1\right)}$. Taking $\epsilon = \Theta\left(\frac{L}{n^{1/d}}\right)$, the integral will be bounded by $\Theta(Ln^{1/2-1/d})$. Thus $\mathbb{E}_{S \in \mathcal{D}^n}[R(B_L \circ S)] \leq \Theta\left(\frac{L}{n^{1/d}}\right) + \frac{4\sqrt{2}}{\sqrt{n}} \Theta(Ln^{1/2-1/d}) = \Theta\left(\frac{L}{n^{1/d}}\right)$. + +In our setting, we are interested in the squared $\ell_2$ loss $l(f(x),y) = (f(x) - y)^2$. We have $|\nabla_{f(x)}l(f(x),y)| = |2(f(x) - y)|\leq 2(|f(x)| + |y|)\leq 4$, i.e., $l(f(x),y)$ is 4-Lipschitz w.r.t. $f(x)$ for arbitrary $y\in [-1,1]$. Thus, $\mathbb{E}_{S\in \mathcal{D}^n}[R(l\circ B_L\circ S)]\leq 4\mathbb{E}_{S\in \mathcal{D}^n}[R(B_L\circ S)] = \mathcal{O}\left(\frac{L}{n^{1 / d}}\right)$. Combining this result with Equation 1 yields the main theorem of our paper: + +Theorem 3.9 (Lipschitzness Lower Bound Beyond the $J$ -Lipschitz parametrization assumption). Let $\mathcal{F}$ be any class of functions from $\mathbb{R}^d \to [-1,1]$ and let $\{(x_i, y_i)\}_{i=1}^n$ be i.i.d. input-output pairs in $\{x : \|x\| \leq 1\} \times [-1,1]$ for any given norm $\|\cdot\|$. Assume that: + +1. The expected conditional variance of the output (i.e., the "noise level") is strictly positive, denoted by $\sigma^2 \coloneqq \mathbb{E}[\mathrm{Var}[y|x]] > 0$.
+ +Then with probability at least $1 - \delta$, for all $f \in \mathcal{F}$: + +$$ +\frac {1}{n} \sum_ {i = 1} ^ {n} (y _ {i} - f (x _ {i})) ^ {2} \leq \sigma^ {2} - \epsilon \Rightarrow +$$ + +$$ +\operatorname {L i p} _ {\| \cdot \|} (f) \geq \frac {n ^ {1 / d}}{K} \left(\frac {1}{8} \epsilon - \frac {1}{2} \sqrt {\frac {2 \ln (2 / \delta)}{n}}\right), +$$ + +where $K = 96 + \frac{96\sqrt{2\ln 2}}{d - 2} +\frac{16\sqrt{2}}{n^{1 / 2 - 1 / d}}\sqrt{\ln(\frac{1}{3}n^{1 / d} + 1)}.$ + +Theorem 3.9 states that, for every data distribution $\mathcal{D}$ with label noise of variance $\sigma^2$ and every function $f:\mathcal{X}\to [-1,1]$, overfitting, i.e., $\frac{1}{n}\sum_{i = 1}^{n}(y_i - f(x_i))^2\leq \sigma^2 -\epsilon$, implies $\mathrm{Lip}_{\| \cdot \|}(f)\geq \Omega (\epsilon n^{1 / d})$, which validates the second part of our law of robustness, i.e., achieving good robust interpolation is impossible when $n = \exp (\omega (d))$. + +Remark 3.10. Theorem 3.9 disproves the existence of robust interpolating functions when $n = \exp(\omega(d))$. Thus, the first part of our law of robustness holds only when $n = \operatorname{poly}(d)$. + +Tightness of our bound. Intuitively, the Lipschitzness of the interpolating function is inversely proportional to the distance between the closest training data pairs. Given $n$ training data in the $d$-dimensional bounded space, one can scatter the data evenly in the space, so that the distance between any training pair is as large as $\Theta(1/n^{1/d})$. Inspired by this, we complement Theorem 3.9 with a matching Lipschitzness upper bound of $\mathcal{O}(n^{1/d})$, which shows that the Lipschitzness lower bound in Theorem 3.9 is achievable by a certain function and training data: + +Theorem 3.11 (Tightness of our bound).
For any distribution $\mathcal{D}$ which is supported on $\{x\in \mathbb{R}^d:||x||\leq 1\}$, there exist $n$ training samples $\{x_{1},\ldots ,x_{n}\}$ such that $\forall i,j,i\neq j,||x_i - x_j||\geq \frac{1}{n^{1 / d}}$. Denote by $\{y_1,\dots,y_n\}$ the observed targets. We design a function $f^{*}$ which first perfectly fits the training samples, i.e., $f^{*}(x_{i}) = y_{i},\forall i\in [n]$, and then uses linear interpolation between neighbouring training points as the prediction for other samples. This function is at most $2n^{1 / d}$-Lipschitz.
+
+Theorem 3.11 shows that there exist $n$ samples such that the function which perfectly fits the training samples is $\mathcal{O}(n^{1/d})$-Lipschitz.
+
+# 3.2.1. OUR (COUNTER-INTUITIVE) IMPLICATIONS
+
+It was widely believed that 1) big data (Schmidt et al., 2018), 2) low dimensionality of input (Blum et al., 2020), and 3) overparametrization (Bubeck & Sellke, 2023; Bubeck et al., 2021; Gao et al., 2019) improve robustness. Our main result, Theorem 3.9, challenges these common beliefs and shows that these hypotheses may not be true in the robust interpolation problem. Our results shed light on the theoretical understanding of robustness beyond the isoperimetry assumption.
+
+The curse of big data. Our Lipschitzness lower bound in Theorem 3.9 is increasing w.r.t. the sample size $n$. The intuition is that as one has more training data, those data are squeezed into the bounded space with smaller margins. Thus, to fit the data well, the Lipschitz constant of the interpolating functions cannot be small. Perhaps surprisingly, our results contradict the common belief that more data always
+
+improve model robustness.
+
+The blessing of dimensionality. It is known that high dimensionality of the input space strengthens the power of the adversary. For example, in the $\ell_{\infty}$ threat model, an adversary can change every pixel of a given image by 8 or 16 intensity levels.
Admittedly, higher dimensionality means that the adversary can modify more pixels. However, we show that our Lipschitzness lower bound in Theorem 3.9 is decreasing w.r.t. $d$. The intuition is that a higher-dimensional input space has more room to scatter the data, so the data can be well separated, and thus the Lipschitz constant of the interpolating functions can be small.
+
+# 4. Small Data May Hurt Performance and Robustness
+
+In Section 3, we mainly focus on the robust interpolation problem on the training samples. The lower bound given by Theorem 3.9 implies that one can sample at most $\exp (\mathcal{O}(d))$ training samples in order to obtain an $\mathcal{O}(1)$-Lipschitz function in the robust interpolation problem. In this section, we show that $n = \exp (\Omega (d))$ is a necessary condition for obtaining a good population error by any $\mathcal{O}(1)$-Lipschitz learning algorithm.
+
+We now provide a result complementary to Section 3.2. We first prove that, for learning algorithms on binary classification tasks, if the number of training samples is less than half of the number of all samples, there exists a distribution with label noise such that the average error of every learning algorithm is at least a constant. As a distribution for binary classification is naturally also a distribution for regression, we can find such a distribution for regression tasks similarly.
+
+Lemma 4.1. Let $\mathcal{A}(S):\mathcal{X}\to \{-a,a\}$ be any learning algorithm with respect to the squared $\ell_2$ loss over a domain $\mathcal{X}$ and samples $S$. Assume the label noise satisfies $\mathbb{E}[\mathrm{Var}[y|x]] = \sigma^2$. Let $m$ be any number smaller than $|\mathcal{X}| / 2$, representing the size of a training set.
Then, for any $a > 0$ there exists a distribution $\mathcal{D}$ (with label noise) over $\mathcal{X}\times \{-a,a\}$ such that
+
+$$
+\mathbb{E}_{S \sim \mathcal{D}^m}[\mathcal{L}_{\mathcal{D}}(\mathcal{A}(S))] \geq \frac{1}{2}(a^2 + \sigma^2).
+$$
+
+In the next lemma, we show a no-free-lunch theorem for regression tasks and algorithms that output an $L$-Lipschitz function. The intuition is to consider the minimum distance between two points in the distribution $\mathcal{D}$. On one hand, if the minimum distance is less than $\epsilon$, we can assign the two samples that achieve the minimum distance the labels 1 and $-1$, respectively. As the algorithm $\mathcal{A}$ is $L$-Lipschitz, the maximum difference between the predicted labels of the two selected points is $L\epsilon$. Thus, the error of $\mathcal{A}$ will be larger than $1 - L\epsilon$. On the other hand, if the minimum distance is larger than $\epsilon$, the number of points in the distribution $\mathcal{D}$ is at most the $\epsilon$-packing number of the input space $\mathcal{X}$. By Lemma 4.1, there exists a distribution such that if the number of training samples is less than half of the $\epsilon$-packing number of the input space, the average error of every learning algorithm will be at least a constant. More formally, we have the following theorem:
+
+Lemma 4.2 (No-free-lunch theorem with $L$-Lipschitz algorithms). Let $\mathcal{A}(S): \mathcal{X} \to [-1,1]$ be any algorithm that returns an $L$-Lipschitz function (w.r.t. the norm $\|\cdot\|$) for the task of regression w.r.t. the squared $\ell_2$ loss over a domain $(\mathcal{X},\|\cdot\|)$ and samples $S$. Let $n$ be the size of the training set, i.e., $n = |S|$. Assume that the label noise has variance $\sigma^2 \coloneqq \mathbb{E}_{\mathcal{D}}[\mathrm{Var}(y|x)] \leq 1/2$.
Then, there exists a distribution $\mathcal{D}$ over $\mathcal{X} \times [-1,1]$ with noisy labels such that, for every $L$-Lipschitz (w.r.t. norm $\|\cdot\|$) learning algorithm and any $\epsilon \in [0,\frac{1}{2L}]$:
+
+$$
+n < M(\mathcal{X}, \epsilon, ||\cdot||) / 2 \Rightarrow
+$$
+
+$$
+\mathbb{E}_{S \sim \mathcal{D}^n}[\mathcal{L}_{\mathcal{D}}(\mathcal{A}(S))] \geq \min\left\{\frac{1}{4}, \frac{1}{2} - L\epsilon\right\} + \sigma^2,
+$$
+
+where $M(\mathcal{X},\epsilon ,||\cdot ||)$ is the $\epsilon$-packing number of $(\mathcal{X},||\cdot ||)$.
+
+Now we are ready to prove our main theorem.
+
+Theorem 4.3. Let $S = \{(x_i, y_i)\}_{i=1}^n$ be i.i.d. training pairs in $\{x : \| x \| \leq 1\} \times [-1, 1]$ for any given norm $\| \cdot \|$. Denote by $\mathcal{L}_{\mathcal{D}}(f) := \mathbb{E}_{\mathcal{D}}[(f(x) - y)^2]$ the squared $\ell_2$ loss. Assume that the expected conditional variance of the output (i.e., the "noise level") is strictly positive and bounded by $1/2$, denoted by $\sigma^2 := \mathbb{E}[\mathrm{Var}[y|x]]$. Let $\mathcal{A}(S) : \mathcal{X} \to \mathbb{R}$ be any $L$-Lipschitz learning algorithm over a training set $S$. Then there exists a distribution $\mathcal{D}'$ of $(x, y)$ such that
+
+$$
+n < \frac{1}{2}\left(\frac{2L}{1 - 2\epsilon}\right)^d \Rightarrow \mathbb{E}_S[\mathcal{L}_{\mathcal{D}'}(\mathcal{A}(S))] \geq \min\left\{\frac{1}{4}, \epsilon\right\} + \sigma^2.
+$$
+
+Proof. Consider $\mathcal{X} = \{x\in \mathbb{R}^d:||x||\leq 1\}$. We have $M(\mathcal{X},\eta ,||\cdot ||)\geq \left(\frac{1}{\eta}\right)^d$.
Thus by Lemma 4.2, there exists a distribution $\mathcal{D}$ such that, if $\sigma^2\leq 1/2$,
+
+$$
+n < \frac{1}{2}\left(\frac{1}{\eta}\right)^d \Rightarrow n < M(\mathcal{X}, \eta, ||\cdot||) / 2 \Rightarrow
+$$
+
+$$
+\mathbb{E}_{S \sim \mathcal{D}^n}[\mathcal{L}_{\mathcal{D}}(\mathcal{A}(S))] \geq \min\left\{\frac{1}{4}, \frac{1}{2} - L\eta\right\} + \sigma^2.
+$$
+
+Taking $\eta = \frac{1/2 - \epsilon}{L}$ where $\epsilon \in (0, 1/2)$, we have that $n < \frac{1}{2} \left( \frac{2L}{1 - 2\epsilon} \right)^d$ implies $\mathbb{E}_{S \sim \mathcal{D}^n}[\mathcal{L}_\mathcal{D}(\mathcal{A}(S))] \geq \min \left\{ \frac{1}{4}, \epsilon \right\} + \sigma^2$. Thus in the worst case, $n$ has to be at least $\exp(\Omega(d))$ if one wants to achieve good astuteness by any learning algorithm that returns an $\mathcal{O}(1)$-Lipschitz function. This completes the proof of Theorem 4.3.
+
+Theorem 4.3 states that for certain distributions, $n$ has to be at least $\exp(\Omega(d))$ if one wants to achieve good population error by any $\mathcal{O}(1)$-Lipschitz learning algorithm. This is not restricted to algorithms that perfectly fit the training data. The sample complexity lower bound matches the upper bound given in Theorem 3.9.
+
+# 5. Conclusions
+
+In this work, we study the robust interpolation problem beyond the isoperimetry assumption, and propose a twofold law of robustness. We show the potential benefit of overparametrization for smooth data interpolation when $n = \mathrm{poly}(d)$, and disprove the potential existence of an $\mathcal{O}(1)$-Lipschitz robust interpolating function when $n = \exp(\omega(d))$. We also prove that small data $(n = \exp(\mathcal{O}(d)))$ may hurt robustness on certain distributions. Perhaps surprisingly, the results shed light on the curse of big data and the blessing of dimensionality regarding robustness.
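The curse-of-big-data and blessing-of-dimensionality effects admit a quick empirical sanity check. The following is a minimal self-contained sketch (pure Python; the uniform sampling scheme and the helper `min_pairwise_dist` are our own illustration, not a construction from the paper): the minimum pairwise $\ell_\infty$ distance among $n$ random points in $[-1,1]^d$ shrinks as $n$ grows and grows as $d$ grows, tracking the $\Theta(n^{-1/d})$ separation that drives both results.

```python
# Minimum pairwise l_inf distance of n random points in [-1, 1]^d.
# Theorems 3.9 and 3.11 tie the achievable Lipschitz constant of an
# interpolating function to this separation, which scales like n^(-1/d).
import random

def min_pairwise_dist(n, d, seed=0):
    """Smallest l_inf distance among n uniform points in [-1, 1]^d."""
    rng = random.Random(seed)
    pts = [[rng.uniform(-1, 1) for _ in range(d)] for _ in range(n)]
    return min(max(abs(a - b) for a, b in zip(p, q))
               for i, p in enumerate(pts) for q in pts[i + 1:])

# Curse of big data: more samples squeeze the points together.
assert min_pairwise_dist(400, d=3) < min_pairwise_dist(50, d=3)
# Blessing of dimensionality: higher dimension leaves room to scatter them.
assert min_pairwise_dist(50, d=20) > min_pairwise_dist(50, d=3)
```

Interpolating labels in $[-1,1]$ over such data forces a Lipschitz constant of order (label gap)/(minimum distance), which is exactly the $2n^{1/d}$ behaviour in Theorem 3.11.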
+
+# Acknowledgement
+
+Hongyang Zhang is supported by NSERC Discovery Grant RGPIN-2022-03215, DGECR-2022-00357. Yihan Wu and Heng Huang were partially supported by NSF IIS 1838627, 1837956, 1956002, 2211492, CNS 2213701, CCF 2217003, DBI 2225775.
+
+# References
+
+Azuma, K. Weighted sums of certain dependent random variables. Tohoku Mathematical Journal, Second Series, 19(3):357-367, 1967.
+Ben-Tal, A., El Ghaoui, L., and Nemirovski, A. Robust optimization. Princeton University Press, 2009.
+Bhagoji, A. N., Cullina, D., and Mittal, P. Lower bounds on adversarial robustness from optimal transport. In Advances in Neural Information Processing Systems, 2019.
+Bhattacharjee, R., Jha, S., and Chaudhuri, K. Sample complexity of robust linear classification on separated data. In International Conference on Machine Learning, pp. 884-893, 2021.
+Blum, A., Dick, T., Manoj, N., and Zhang, H. Random smoothing might be unable to certify $\ell_{\infty}$ robustness for high-dimensional images. Journal of Machine Learning Research, 21:1-21, 2020.
+Bubeck, S. and Sellke, M. A universal law of robustness via isoperimetry. Journal of the ACM, 70(2):1-18, 2023.
+Bubeck, S., Li, Y., and Nagaraj, D. M. A law of robustness for two-layers neural networks. In Annual Conference on Learning Theory, volume 134, pp. 804-820, 2021.
+Case, B. M., Gallagher, C., and Gao, S. A note on subgaussian random variables. Cryptology ePrint Archive, 2019.
+Cohen, J. M., Rosenfeld, E., and Kolter, J. Z. Certified adversarial robustness via randomized smoothing. ICML, 2019.
+Cullina, D., Bhagoji, A. N., and Mittal, P. PAC-learning in the presence of evasion adversaries. In Advances in Neural Information Processing Systems, pp. 230-241, 2018.
+Dan, C., Wei, Y., and Ravikumar, P. Sharp statistical guarantees for adversarially robust Gaussian classification. In International Conference on Machine Learning, pp. 2345-2355, 2020.
+Dobriban, E., Hassani, H., Hong, D., and Robey, A.
Provable tradeoffs in adversarially robust classification. arXiv preprint arXiv:2006.05161, 2020.
+Gao, R., Cai, T., Li, H., Hsieh, C.-J., Wang, L., and Lee, J. D. Convergence of adversarial training in overparametrized neural networks. Advances in Neural Information Processing Systems, 32:13029-13040, 2019.
+Goodfellow, I. J., Shlens, J., and Szegedy, C. Explaining and harnessing adversarial examples. In International Conference on Learning Representations, 2014.
+Huber, P. J. Robust statistics, volume 523. John Wiley & Sons, 2004.
+Kumar, A., Levine, A., Goldstein, T., and Feizi, S. Curse of dimensionality on randomized smoothing for certifiable robustness. In International Conference on Machine Learning, pp. 5458-5467, 2020.
+Li, B., Chen, C., Wang, W., and Carin, L. Certified adversarial robustness with additive noise. In Advances in Neural Information Processing Systems, pp. 9464-9474, 2019.
+Madry, A., Makelov, A., Schmidt, L., Tsipras, D., and Vladu, A. Towards deep learning models resistant to adversarial attacks. In International Conference on Learning Representations, 2017.
+Mendelson, S. and Vershynin, R. Entropy and the combinatorial dimension. Inventiones mathematicae, 152(1): 37-55, 2003.
+Montasser, O., Hanneke, S., and Srebro, N. VC classes are adversarially robustly learnable, but only improperly. In Annual Conference on Learning Theory, pp. 2512-2530, 2019.
+
+Northcutt, C. G., Athalye, A., and Mueller, J. Pervasive label errors in test sets destabilize machine learning benchmarks. In NeurIPS 2021 Datasets and Benchmarks Track, 2021.
+Schmidt, L., Santurkar, S., Tsipras, D., Talwar, K., and Madry, A. Adversarially robust generalization requires more data. In Advances in Neural Information Processing Systems, 2018.
+Shalev-Shwartz, S. and Ben-David, S. Understanding machine learning: From theory to algorithms. Cambridge University Press, 2014.
+Sun, J., Yang, Y., Xun, G., and Zhang, A. A stagewise hyperparameter scheduler to improve generalization.
In Proceedings of the 27th ACM SIGKDD Conference on Knowledge Discovery & Data Mining, pp. 1530-1540, 2021.
+Sun, J., Huai, M., Jha, K., and Zhang, A. Demystify hyperparameters for stochastic optimization with transferable representations. In Proceedings of the 28th ACM SIGKDD Conference on Knowledge Discovery and Data Mining, pp. 1706-1716, 2022.
+Szegedy, C., Zaremba, W., Sutskever, I., Bruna, J., Erhan, D., Goodfellow, I., and Fergus, R. Intriguing properties of neural networks. In International Conference on Learning Representations, 2014.
+von Luxburg, U. and Bousquet, O. Distance-based classification with Lipschitz functions. Journal of Machine Learning Research, 5:669-695, 2004.
+Wu, X., Huang, F., Hu, Z., and Huang, H. Faster adaptive federated learning. arXiv preprint arXiv:2212.00974, 2022a.
+Wu, X., Hu, Z., and Huang, H. Decentralized Riemannian algorithm for nonconvex minimax problems. arXiv preprint arXiv:2302.03825, 2023.
+Wu, Y., Bojchevski, A., Kuvshinov, A., and Günnemann, S. Completing the picture: Randomized smoothing suffers from the curse of dimensionality for a large family of distributions. In International Conference on Artificial Intelligence and Statistics, pp. 3763-3771. PMLR, 2021.
+Wu, Y., Bojchevski, A., and Huang, H. Adversarial weight perturbation improves generalization in graph neural network. arXiv preprint arXiv:2212.04983, 2022b.
+Wu, Y., Li, X., Kerschbaum, F., Huang, H., and Zhang, H. Towards robust dataset learning. arXiv preprint arXiv:2211.10752, 2022c.
+
+Wu, Y., Zhang, H., and Huang, H. Retrievalguard: Provably robust 1-nearest neighbor image retrieval. In International Conference on Machine Learning, pp. 24266-24279. PMLR, 2022d.
+Yang, G., Duan, T., Hu, J. E., Salman, H., Razenshteyn, I., and Li, J. Randomized smoothing of all shapes and sizes. In International Conference on Machine Learning, pp. 10693-10705, 2020a.
+Yang, Y.-Y., Rashtchian, C., Zhang, H., Salakhutdinov, R., and Chaudhuri, K.
A closer look at accuracy vs. robustness. In Advances in Neural Information Processing Systems, 2020b.
+Yin, D., Kannan, R., and Bartlett, P. Rademacher complexity for adversarially robust generalization. In International Conference on Machine Learning, pp. 7085-7094, 2019.
+Zhang, H., Yu, Y., Jiao, J., Xing, E., El Ghaoui, L., and Jordan, M. Theoretically principled trade-off between robustness and accuracy. In International Conference on Machine Learning, pp. 7472-7482, 2019.
+
+# A. Missing proofs
+
+# A.1. Proof of Lemma 3.2
+
+Proof. Denote by $X$ the random variable with distribution $\mu$ on the bounded space $\mathcal{X}$. We consider $Z_{1} = f(X)$ and $Z_{0} = \mathbb{E}[f(X)]$. Since
+
+$$
+|Z_1 - Z_0| = |f(X) - \mathbb{E}[f(X)]| = |\mathbb{E}_{X'}[f(X) - f(X')]| \leq L \sup_{x, x' \in \mathcal{X}} ||x - x'|| = L\,\mathrm{diam}(\mathcal{X}),
+$$
+
+where $X'$ is an independent copy of $X$, and $\mathbb{E}[Z_1] = Z_0$, the pair $\{Z_0, Z_1\}$ is a martingale with bounded differences. Thus, by Azuma's inequality (Lemma 3.1), we have
+
+$$
+\operatorname{Pr}(|f(x) - \mathbb{E}[f(x)]| \geq t) = \operatorname{Pr}(|Z_1 - Z_0| \geq t) \leq 2\exp\left(-\frac{t^2}{2\,\mathrm{diam}(\mathcal{X})^2 L^2}\right).
+$$
+
+# A.2. Proof of Lemma 3.3
+
+Proof. We use a similar proof technique to Bubeck & Sellke (2023). Our proof depends on the following lemma.
+
+Lemma A.1 (Lemma 2.1 of Bubeck & Sellke (2023)). Let $\mathcal{F}$ be any class of functions from $\mathbb{R}^d\to [-1,1]$. Let $\{(x_i,y_i)\}_{i = 1}^n$ be i.i.d. input-output pairs in $\mathbb{R}^d\times [-1,1]$ for any given norm $\| \cdot \|$. Assume that the expected conditional variance of the output (i.e., the "noise level") is strictly positive, denoted by $\sigma^2\coloneqq \mathbb{E}[\mathrm{Var}[y|x]] > 0$.
+
+Then, writing $g(x) \coloneqq \mathbb{E}[y|x]$ and $z_i \coloneqq y_i - g(x_i)$,
+
+$$
+\operatorname{Pr}\left(\exists f \in \mathcal{F}: \frac{1}{n}\sum_{i=1}^{n}(y_i - f(x_i))^2 \leq \sigma^2 - \epsilon\right) \leq 2\exp\left(-\frac{n\epsilon^2}{8^3}\right) + \operatorname{Pr}\left(\exists f \in \mathcal{F}: \frac{1}{n}\sum_{i=1}^{n} f(x_i) z_i \geq \frac{\epsilon}{4}\right).
+$$
+
+We now bound the term $\operatorname{Pr}(\exists f\in \mathcal{F}:\frac{1}{n}\sum_{i = 1}^{n}f(x_i)z_i\geq \frac{\epsilon}{4})$. As $x_{i}$ is randomly sampled from the input distribution and $\mathrm{diam}(\mathcal{X}) = 2$, we have
+
+$$
+\operatorname{Pr}(|f(x_i) - \mathbb{E}[f(x)]| \geq t) \leq 2\exp\left(-\frac{t^2}{8L^2}\right),
+$$
+
+which indicates that $f(x_{i}) - \mathbb{E}[f(x)]$ is $8L^{2}$-subgaussian. Because $|z_{i}| = |y_{i} - g(x_{i})| \leq 2$, we know $(f(x_{i}) - \mathbb{E}[f(x)])z_{i}$ is $32L^{2}$-subgaussian. By Property 1 in Case et al. (2019), we know $\frac{1}{n}\sum_{i=1}^{n}(f(x_{i}) - \mathbb{E}[f(x)])z_{i}$ is $32L^{2} / n$-subgaussian.
Since $\mathbb{E}[(f(x_{i}) - \mathbb{E}[f(x)])z_{i}] = 0$, we have
+
+$$
+\operatorname{Pr}\left(\frac{1}{n}\sum_{i=1}^{n}(f(x_i) - \mathbb{E}[f(x)])z_i \geq \frac{\epsilon}{8}\right) \leq \exp\left(-\frac{n\epsilon^2}{2^{10}L^2}\right).
+$$
+
+Since the range of the functions is in $[-1, 1]$, we have $\mathbb{E}[f(x)] \in [-1, 1]$ and hence
+
+$$
+\Pr\left(\exists f: \frac{1}{n}\sum_{i=1}^{n}\mathbb{E}[f(x)] z_i \geq \frac{\epsilon}{8}\right) \leq \Pr\left(\left|\frac{1}{n}\sum_{i=1}^{n} z_i\right| \geq \frac{\epsilon}{8}\right).
+$$
+
+By Hoeffding's inequality, the latter quantity is smaller than $2\exp(-n\epsilon^2/8^3)$. Thus we obtain with a union bound:
+
+$$
+\begin{array}{l}
+\Pr\left(\exists f \in \mathcal{F}: \frac{1}{n}\sum_{i=1}^{n} f(x_i) z_i \geq \frac{\epsilon}{4}\right) \leq |\mathcal{F}| \Pr\left(\frac{1}{n}\sum_{i=1}^{n}(f(x_i) - \mathbb{E}[f(x)]) z_i \geq \frac{\epsilon}{8}\right) + \Pr\left(\left|\frac{1}{n}\sum_{i=1}^{n} z_i\right| \geq \frac{\epsilon}{8}\right) \\
+\leq |\mathcal{F}| \exp\left(-\frac{n\epsilon^2}{2^{10}L^2}\right) + 2\exp\left(-n\epsilon^2/8^3\right). \\
+\end{array}
+$$
+
+Together with Lemma A.1 we have
+
+$$
+\Pr\left(\exists f \in \mathcal{F}: \frac{1}{n}\sum_{i=1}^{n}(y_i - f(x_i))^2 \leq \sigma^2 - \epsilon\right) \leq 4\exp\left(-\frac{n\epsilon^2}{8^3}\right) + |\mathcal{F}|\exp\left(-\frac{n\epsilon^2}{2^{10}L^2}\right),
+$$
+
+which proves this lemma.
+
+# A.3. Proof of Theorem 3.4
+
+Proof. We use a similar proof technique to Bubeck & Sellke (2023).
+
+We argue that the $\eta$-covering number of the function space $\mathcal{F}$ is upper bounded by the $\eta / J$-covering number of the parameter space $\mathcal{W}$.
To see this, select the centers $\mathcal{W}^c = \{w_i^c\}$ of an $\eta / J$-covering of $\mathcal{W}$ and cover $\mathcal{F}$ with $\eta$-balls centered at $f_{w_i^c}$: for every $f_w \in \mathcal{F}$, we can find $w' \in \mathcal{W}^c$ such that $||w - w'|| \leq \eta / J$, and by the definition of $J$-Lipschitz parametrization we have $||f_w - f_{w'}||_{\mathcal{F}} \leq J||w - w'|| \leq \eta$. Thus $\mathcal{F}$ can be covered by $N(\mathcal{W}, \eta / J, ||\cdot||)$ balls, so we have
+
+$$
+N(\mathcal{F}, \eta, ||\cdot||_{\mathcal{F}}) \leq N(\mathcal{W}, \eta / J, ||\cdot||) \leq (6JW/\eta)^p.
+$$
+
+Take $\eta = \frac{\epsilon}{6}$ and denote by $\mathcal{W}_{\epsilon}$ the $\epsilon / 6J$-covering of $\mathcal{W}$. Applying Lemma 3.3 to $\mathcal{F}_w = \{f_w : w \in \mathcal{W}_{\epsilon}\}$ we have
+
+$$
+\operatorname{Pr}\left(\exists f \in \mathcal{F}_w: \frac{1}{n}\sum_{i=1}^{n}(y_i - f(x_i))^2 \leq \sigma^2 - \frac{\epsilon}{2} \ \mathrm{and} \ \operatorname{Lip}_{||\cdot||}(f) \leq L\right) \leq 4\exp\left(-\frac{n\epsilon^2}{8^3}\right) + \exp\left(p\ln(36JW\epsilon^{-1}) - \frac{n\epsilon^2}{2^{10}L^2}\right).
+$$
+
+For every $f \in \mathcal{F}$, we can find an $f_w \in \mathcal{F}_w$ such that $||f - f_w||_{\mathcal{F}} \leq \epsilon / 6$. Then, whenever $\frac{1}{n}\sum_{i=1}^{n}(y_i - f(x_i))^2 \leq \sigma^2 - \epsilon$, one can easily derive
+
+$$
+\frac{1}{n}\sum_{i=1}^{n}(y_i - f_w(x_i))^2 \leq \frac{1}{n}\sum_{i=1}^{n}(y_i - f(x_i))^2 + \epsilon/2 \leq \sigma^2 - \frac{\epsilon}{2}.
+$$
+
+Thus, if $n$ is large enough such that $\exp (-n\epsilon^2 /8^3)\leq \delta /8$ and $L\leq \frac{\epsilon}{32}\sqrt{\frac{n}{p\ln(36WJ\epsilon^{-1}) + \ln(2 / \delta)}},$ we have
+
+$$
+\operatorname{Pr}\left(\exists f \in \mathcal{F}: \frac{1}{n}\sum_{i=1}^{n}(y_i - f(x_i))^2 \leq \sigma^2 - \epsilon \ \mathrm{and} \ \operatorname{Lip}_{||\cdot||}(f) \leq L\right) \leq \delta,
+$$
+
+which yields with probability at least $1 - \delta$
+
+$$
+\frac{1}{n}\sum_{i=1}^{n}(y_i - f(x_i))^2 \leq \sigma^2 - \epsilon \Rightarrow \mathrm{Lip}_{||\cdot||}(f) \geq \frac{\epsilon}{32}\sqrt{\frac{n}{p\ln(36WJ\epsilon^{-1}) + \ln(2/\delta)}}.
+$$
+
+# A.4. Proof of Lemma 3.6
+
+Proof. We consider the Lipschitz function class $B_{L} \coloneqq \{f : \operatorname{Lip}_{\|\cdot\|}(f) \leq L\}$. In order to bound the covering number of $B_L$, we consider an $\frac{\epsilon}{2L}$-covering of the input space $\mathcal{X}$ consisting of $N = N_{\epsilon/(2L)}(\mathcal{X})$ plates $\mathcal{U}_1, \mathcal{U}_2, \ldots, \mathcal{U}_N$ centered at $s_1, s_2, \ldots, s_N$. The fact that $\mathcal{X}$ is connected enables one to join any two sets $\mathcal{U}_i$ and $\mathcal{U}_j$ by a chain of intersecting $\mathcal{U}_k$. For any function $f \in B_L$, we can construct its approximating function $\widetilde{f}$ by taking its value on $\mathcal{U}_1$ as an $\epsilon/2$-approximation of $f(s_1)$. As the range of $f$ has diameter at most $L \cdot \operatorname{diam}(\mathcal{X})$, there are at most $\lceil 2L \cdot \operatorname{diam}(\mathcal{X}) / \epsilon \rceil$ such approximations. On the other hand, note that the $N$ plates are chained. By Lipschitzness, the function values of $f$ on $s_1$ and $s_2$ differ by at most $\epsilon/2$, and so $f(s_2)$ differs by at most $\epsilon$ from $\widetilde{f}(s_1)$ by the triangle inequality.
It implies that, to construct an $\epsilon$-approximation of $f(s_2)$ on $\mathcal{U}_2$, it suffices to take either $\widetilde{f}(s_1) - \epsilon/2$ or $\widetilde{f}(s_1) + \epsilon/2$. Repeating the same argument $N$ times, we can bound the $\epsilon$-covering number of $B_L$ on $\mathcal{X}$ by $\lceil 2L \cdot \operatorname{diam}(\mathcal{X}) / \epsilon \rceil 2^N$.
+
+# A.5. Proof of Lemma 3.7
+
+Proof. The proof of this lemma is quite straightforward. Notice that when $u > 2L \cdot \mathrm{diam}(\mathcal{X})$, the $u$-covering number of $B_L$ is 1 and $\ln (N(B_L, u, ||\cdot||_{\mathcal{F}})) = 0$. Combining Equation 3 with Lemma 3.6 yields this lemma.
+
+# A.6. Proof of Lemma 3.8
+
+Proof. As $\frac{u}{2L} \leq \mathrm{diam}(\mathcal{X})$, we have $N(\mathcal{X}, \frac{u}{2L}, ||\cdot||) \leq \left(\frac{12L}{u}\right)^d$ and
+
+$$
+\begin{array}{l}
+\mathbb{E}_{S \in \mathcal{D}^n}[R(B_L \circ S)] \leq 2\epsilon + \frac{4\sqrt{2}}{\sqrt{n}} \int_{\epsilon/4}^{2L \cdot \operatorname{diam}(\mathcal{X})} \sqrt{N\left(\mathcal{X}, \frac{u}{2L}, ||\cdot||\right) \ln 2 + \ln\left(\left\lceil \frac{2L \cdot \operatorname{diam}(\mathcal{X})}{u} \right\rceil\right)}\, du \\
+\leq 2\epsilon + \frac{4\sqrt{2}}{\sqrt{n}} \int_{\epsilon/4}^{4L} \sqrt{\left(\frac{12L}{u}\right)^d \ln 2 + \ln\left(\left\lceil \frac{4L}{u} \right\rceil\right)}\, du \\
+\leq 2\epsilon + \frac{4\sqrt{2}}{\sqrt{n}} \int_{\epsilon/4}^{4L} \left[\sqrt{\left(\frac{12L}{u}\right)^d \ln 2} + \sqrt{\ln\left(\left\lceil \frac{4L}{u} \right\rceil\right)}\right] du \\
+\leq 2\epsilon + \frac{4\sqrt{2}}{\sqrt{n}} \int_{\epsilon/4}^{4L} \sqrt{\left(\frac{12L}{u}\right)^d \ln 2}\, du + \frac{16\sqrt{2}L}{\sqrt{n}} \sqrt{\ln(16L/\epsilon + 1)}.
\\ \end{array}
+$$
+
+Switching the integration variable from $u$ to $v = u / (12L)$ we have
+
+$$
+\begin{array}{l}
+\int_{\epsilon/4}^{4L} \sqrt{\left(\frac{12L}{u}\right)^d \ln 2}\, du = 12L \int_{\epsilon/(48L)}^{1/3} \sqrt{v^{-d} \ln 2}\, dv \\
+= 12L \left[\sqrt{\ln 2}\, \frac{1}{-d/2 + 1}\, v^{-d/2 + 1}\right]_{\epsilon/(48L)}^{1/3} \\
+< 12L\, \frac{2\sqrt{\ln 2}}{d - 2} \left(\frac{48L}{\epsilon}\right)^{d/2 - 1}. \\
+\end{array}
+$$
+
+Based on the calculation above we have
+
+$$
+\mathbb{E}_{S \in \mathcal{D}^n}[R(B_L \circ S)] \leq 2\epsilon + L\, \frac{96\sqrt{2\ln 2}}{\sqrt{n}(d - 2)} \left(\frac{48L}{\epsilon}\right)^{d/2 - 1} + \frac{16\sqrt{2}L}{\sqrt{n}} \sqrt{\ln(16L/\epsilon + 1)}.
+$$
+
+As this inequality holds for arbitrary $\epsilon > 0$, we can take $\epsilon = 48L / n^{1/d}$ and obtain
+
+$$
+\mathbb{E}_{S \in \mathcal{D}^n}[R(B_L \circ S)] \leq 96 \frac{L}{n^{1/d}} + \frac{96\sqrt{2\ln 2}}{d - 2} \frac{L}{n^{1/d}} + \frac{16\sqrt{2}L}{\sqrt{n}} \sqrt{\ln\left(\frac{1}{3} n^{1/d} + 1\right)} \sim \mathcal{O}\left(\frac{L}{n^{1/d}}\right).
+$$
+
+# A.7. Proof of Theorem 3.9
+
+Proof. According to Equation 1,
+
+$$
+\mathcal{L}_{\mathcal{D}}(f) - \mathcal{L}_S(f) \leq 2\,\mathbb{E}_{S \in \mathcal{D}^n}[R(l \circ \mathcal{F} \circ S)] + a\sqrt{\frac{2\ln(2/\delta)}{n}},
+$$
+
+where $a \coloneqq \max_{(x,y)} l(f(x),y) \leq 4$. According to Equation 2 and $|\nabla_{f(x)}l(f(x),y)| \leq 4$, we have $\mathbb{E}_{S \in \mathcal{D}^n}[R(l \circ \mathcal{F} \circ S)] \leq 4\mathbb{E}_{S \in \mathcal{D}^n}[R(\mathcal{F} \circ S)]$. Thus,
+
+$$
+\mathbb{E}_{S \in \mathcal{D}^n}[R(\mathcal{F} \circ S)] \geq \frac{1}{8}\left(\mathcal{L}_{\mathcal{D}}(f) - \mathcal{L}_S(f) - 4\sqrt{\frac{2\ln(2/\delta)}{n}}\right).
+$$
+
+Under the label noise setting, we have
+
+$$
+\begin{array}{l}
+\mathcal{L}_{\mathcal{D}}(f) = \mathbb{E}_{\mathcal{D}}[(f(x) - y)^2] \\
+= \mathbb{E}_{x,y}\left[(f(x) - \mathbb{E}_y[y|x])^2 + (y - \mathbb{E}_y[y|x])^2\right] \\
+\geq \mathbb{E}_x[\operatorname{Var}(y|x)] = \sigma^2. \\
+\end{array}
+$$
+
+So with the overfitting assumption $\mathcal{L}_S(f) \leq \sigma^2 - \epsilon$, we have
+
+$$
+\begin{array}{l}
+\mathbb{E}_{S \in \mathcal{D}^n}[R(\mathcal{F} \circ S)] \geq \frac{1}{8}\left(\mathcal{L}_{\mathcal{D}}(f) - \mathcal{L}_S(f) - 4\sqrt{\frac{2\ln(2/\delta)}{n}}\right) \tag{4} \\
+\geq \frac{\epsilon}{8} - \frac{1}{2}\sqrt{\frac{2\ln(2/\delta)}{n}}. \\
+\end{array}
+$$
+
+Consider $B_{L} = \{f\in \mathcal{F}:\mathrm{Lip}_{||\cdot ||}(f)\leq L\}$. According to Lemma 3.8, we have
+
+$$
+K\frac{L}{n^{1/d}} \geq \mathbb{E}_{S \in \mathcal{D}^n}[R(B_L \circ S)] \geq \frac{\epsilon}{8} - \frac{1}{2}\sqrt{\frac{2\ln(2/\delta)}{n}},
+$$
+
+where $K = 96 + \frac{96\sqrt{2\ln 2}}{d - 2} + \frac{16\sqrt{2}}{n^{1/2 - 1/d}}\sqrt{\ln(\frac{1}{3}n^{1/d} + 1)} \sim \Theta(1)$. Thus we have
+
+$$
+L \geq \frac{n^{1/d}}{K}\left(\frac{1}{8}\epsilon - \frac{1}{2}\sqrt{\frac{2\ln(2/\delta)}{n}}\right).
+$$
+
+Suppose there exists $f_0\in \mathcal{F}$ such that
+
+$$
+\mathcal{L}_S(f_0) \leq \sigma^2 - \epsilon \quad \mathrm{and} \quad \operatorname{Lip}_{\|\cdot\|}(f_0) < \frac{n^{1/d}}{K}\left(\frac{1}{8}\epsilon - \frac{1}{2}\sqrt{\frac{2\ln(2/\delta)}{n}}\right).
+$$
+
+Then we have
+
+$$
+\frac{\epsilon}{8} - \frac{1}{2}\sqrt{\frac{2\ln(2/\delta)}{n}} > K\frac{\operatorname{Lip}_{\|\cdot\|}(f_0)}{n^{1/d}} \geq
+$$
+
+$$
+\mathbb{E}_{S \in \mathcal{D}^n}\left[R\left(B_{\operatorname{Lip}_{\|\cdot\|}(f_0)} \circ S\right)\right] \geq \frac{\epsilon}{8} - \frac{1}{2}\sqrt{\frac{2\ln(2/\delta)}{n}},
+$$
+
+which yields a contradiction. Therefore, for all $f\in \mathcal{F}$,
+
+$$
+\mathcal{L}_S(f) \leq \sigma^2 - \epsilon \Rightarrow \operatorname{Lip}_{\|\cdot\|}(f) \geq \frac{n^{1/d}}{K}\left(\frac{1}{8}\epsilon - \frac{1}{2}\sqrt{\frac{2\ln(2/\delta)}{n}}\right).
+$$
+
+Taking $\mathcal{X} = \{x\in \mathbb{R}^d:||x||\leq 1\}$, we have $\mathrm{diam}(\mathcal{X}) = 2$, which yields Theorem 3.9.
+
+# A.8. Proof of Theorem 3.11
+
+Proof. First, we show that we can find $n$ training samples $\{x_1, \dots, x_n\}$ such that $\forall i, j, i \neq j, ||x_i - x_j|| \geq \frac{1}{n^{1/d}}$. Consider the $\frac{1}{n^{1/d}}$-packing of the space $\{x : ||x|| \leq 1\}$. The packing number is greater than the $\frac{1}{n^{1/d}}$-covering number of the same space, which is at least $(n^{1/d})^d = n$. We then choose $\{x_1, \dots, x_n\}$ from the $\frac{1}{n^{1/d}}$-packing, so the minimum pairwise distance is at least $\frac{1}{n^{1/d}}$. Next, we show $f^*$ is at most $2n^{1/d}$-Lipschitz: as $f^*$ is the linear interpolation between neighbouring training points and the labels lie in $[-1,1]$, the worst-case Lipschitz constant is $\frac{|y_i - y_j|}{||x_i - x_j||} \leq \frac{2}{1/n^{1/d}} = 2n^{1/d}$.
+
+# A.9. Proof of Lemma 4.1
+
+Proof.
Our proof is partly based on Theorem 5.1 of Shalev-Shwartz & Ben-David (2014). Let $\mathcal{C}$ be a subset of $\mathcal{X}$ of size $2m$. There exist $T = 2^{2m}$ possible labeling functions from $\mathcal{C}$ to $\{-a, a\}$. Denote these functions by $f_1, \dots, f_T$. We then define a distribution $\mathcal{D}_i$ w.r.t. $f_i$ by
+
+$$
+\mathcal{D}_i(\{(x, y)\}) = \begin{cases} p / |\mathcal{C}|, & \text{if } y = f_i(x); \\ (1 - p) / |\mathcal{C}|, & \text{if } y \neq f_i(x), \end{cases}
+$$
+
+where $p > 1/2$ satisfies $\operatorname{Var}(y|x) = \sigma^2 = 4a^2p(1 - p)$ (notice that as $f_i(x)$ can only be $a$ or $-a$, $p$ is the same for all $f_i(x)$'s). In this way, $\mathcal{D}_i$ satisfies the noisy label setting. We will show that for every algorithm $\mathcal{A}$ that receives a training set of size $m$ from $\mathcal{C} \times \{-a, a\}$ and returns a function $\mathcal{A}(S): \mathcal{C} \to \mathbb{R}$, it holds that
+
+$$
+\max_{i \in [T]} \mathbb{E}_{S \sim \mathcal{D}_i^m}\left[\mathcal{L}_{\mathcal{D}_i}(\mathcal{A}(S))\right] \geq \frac{a^2 + \sigma^2}{2}.
+$$
+
+There are $k = (2m)^{m}$ possible sequences of $m$ instances from $\mathcal{C}$. Denote these sequences by $S_{1},\ldots ,S_{k}$. Also, if $S_{j} = (x_{1},\dots,x_{m})$, we denote by $S_{j}^{i}$ the sequence containing the instances in $S_{j}$ with noisy labels generated from $f_{i}$, namely, $S_{j}^{i} = ((x_{1},a_{1}f_{i}(x_{1})),\dots,(x_{m},a_{m}f_{i}(x_{m})))$, where $\operatorname{Pr}(a_l = 1) = p$, $\operatorname{Pr}(a_l = -1) = 1 - p$, and $a_1,\dots,a_m$ are i.i.d. for all $S_{j}^{i}$, given that $p$ is the same for all $f_{i}(x)$'s. If the distribution is $\mathcal{D}_i$, then the possible training sets that algorithm $\mathcal{A}$ receives are $S_1^i,\dots,S_k^i$, and all these training sets have the same probability of being sampled.
Therefore,

$$
\mathbb{E}_{S \sim \mathcal{D}_i^m}[\mathcal{L}_{\mathcal{D}_i}(\mathcal{A}(S))] = \frac{1}{k} \sum_{j=1}^{k} \mathcal{L}_{\mathcal{D}_i}(\mathcal{A}(S_j^i)).
$$

Using the facts that "maximum" is larger than "average" and that "average" is larger than "minimum", we have

$$
\begin{array}{l} \max_{i \in [T]} \frac{1}{k} \sum_{j=1}^{k} \mathcal{L}_{\mathcal{D}_i}\left(\mathcal{A}\left(S_j^i\right)\right) \geq \frac{1}{T} \sum_{i=1}^{T} \frac{1}{k} \sum_{j=1}^{k} \mathcal{L}_{\mathcal{D}_i}\left(\mathcal{A}\left(S_j^i\right)\right) \\ = \frac{1}{k} \sum_{j=1}^{k} \frac{1}{T} \sum_{i=1}^{T} \mathcal{L}_{\mathcal{D}_i}(\mathcal{A}(S_j^i)) \\ \geq \min_{j \in [k]} \frac{1}{T} \sum_{i=1}^{T} \mathcal{L}_{\mathcal{D}_i}\left(\mathcal{A}\left(S_j^i\right)\right). \\ \end{array}
$$

Next, fix some $j \in [k]$. Write $S_{j} = (x_{1},\dots,x_{m})$ and let $v_{1},\ldots,v_{q}$ be the instances in $\mathcal{C}$ that do not appear in $S_{j}$. Clearly, $q \geq m$. Therefore, for every function $h: \mathcal{C} \to \mathbb{R}$ and every $i$ we have

$$
\begin{array}{l} \mathcal{L}_{\mathcal{D}_i}(h) = \frac{1}{2m} \mathbb{E}_{a \in \{-1, 1\}^{2m}} \left[ \sum_{x \in \mathcal{C}} (h(x) - a_x f_i(x))^2 \right] \\ = \frac{1}{2m} \sum_{x \in \mathcal{C}} [p(h(x) - f_i(x))^2 + (1 - p)(h(x) + f_i(x))^2] \\ = \frac{1}{2m} \sum_{x \in \mathcal{C}} [(h(x) - (2p - 1) f_i(x))^2 + 4p(1 - p) f_i(x)^2] \\ = \sigma^2 + \frac{1}{2m} \sum_{x \in \mathcal{C}} [(h(x) - (2p - 1) f_i(x))^2].
\\ \end{array}
$$

Note that

$$
\frac{1}{2m} \sum_{x \in \mathcal{C}} [(h(x) - (2p - 1) f_i(x))^2] \geq \frac{1}{2m} \sum_{r=1}^{q} (h(v_r) - (2p - 1) f_i(v_r))^2 \geq \frac{1}{2q} \sum_{r=1}^{q} (h(v_r) - (2p - 1) f_i(v_r))^2.
$$

Hence,

$$
\begin{array}{l} \frac{1}{T} \sum_{i=1}^{T} \mathcal{L}_{\mathcal{D}_i}(\mathcal{A}(S_j^i)) \geq \frac{1}{T} \sum_{i=1}^{T} \mathbb{E}_{a \in \{-1, 1\}^m} \left[ \sigma^2 + \frac{1}{2q} \sum_{r=1}^{q} (\mathcal{A}(S_j^i(a))(v_r) - (2p - 1) f_i(v_r))^2 \right] \\ = \sigma^2 + \frac{1}{2q} \sum_{r=1}^{q} \frac{1}{T} \sum_{i=1}^{T} \mathbb{E}_{a \in \{-1, 1\}^m} [(\mathcal{A}(S_j^i(a))(v_r) - (2p - 1) f_i(v_r))^2] \\ \geq \sigma^2 + \frac{1}{2} \min_{r \in [q]} \frac{1}{T} \sum_{i=1}^{T} \mathbb{E}_{a \in \{-1, 1\}^m} [(\mathcal{A}(S_j^i(a))(v_r) - (2p - 1) f_i(v_r))^2]. \\ \end{array}
$$

Next, fix some $r \in [q]$. We can partition all the functions in $f_1, \ldots, f_T$ into $T/2$ disjoint pairs, where for a pair $(f_i, f_{i'})$ we have that for every $c \in \mathcal{C}$, $f_i(c) \neq f_{i'}(c)$ if and only if $c = v_r$. Note that for such a pair and the same $a$, we must have $S_j^i(a) = S_j^{i'}(a)$ and, for all $a \in \{-1, 1\}^m$, $\operatorname{Pr}(a|S_j^i) = \operatorname{Pr}(a|S_j^{i'})$.
It follows that + +$$ +\begin{array}{l} \mathbb {E} _ {a \in \{- 1, 1 \} ^ {m}} [ (\mathcal {A} (S _ {j} ^ {i}) (v _ {r}) - (2 p - 1) f _ {i} (v _ {r})) ^ {2} ] + \mathbb {E} _ {a \in \{- 1, 1 \} ^ {m}} [ (\mathcal {A} (S _ {j} ^ {i ^ {\prime}}) (v _ {r}) - (2 p - 1) f _ {i ^ {\prime}} (v _ {r})) ^ {2} ] \\ \geq \mathbb {E} _ {a \in \{- 1, 1 \} ^ {m}} [ (\mathcal {A} (S _ {j} ^ {i}) (v _ {r}) - (2 p - 1) f _ {i} (v _ {r})) ^ {2} + (\mathcal {A} (S _ {j} ^ {i ^ {\prime}}) (v _ {r}) - (2 p - 1) f _ {i ^ {\prime}} (v _ {r})) ^ {2} ] \\ \geq \mathbb {E} _ {a \in \{- 1, 1 \} ^ {m}} \left[ \frac {1}{2} (2 p - 1) ^ {2} (f _ {i ^ {\prime}} (v _ {r}) - f _ {i} (v _ {r})) ^ {2} \right] \\ = 2 (2 p - 1) ^ {2} a ^ {2}, \\ \end{array} +$$ + +which yields + +$$ +\frac {1}{T} \sum_ {i = 1} ^ {T} \mathbb {E} _ {a \in \{- 1, 1 \} ^ {m}} [ (\mathcal {A} (S _ {j} ^ {i} (a)) (v _ {r}) - (2 p - 1) f _ {i} (v _ {r})) ^ {2} ] \geq (2 p - 1) ^ {2} a ^ {2}. +$$ + +Combining the discussion above, we have + +$$ +\max _ {i \in [ T ]} \mathbb {E} _ {S \sim \mathcal {D} _ {i} ^ {m}} [ \mathcal {L} _ {\mathcal {D} _ {i}} (\mathcal {A} (S)) ] \geq \min _ {j \in [ k ]} \frac {1}{T} \sum_ {i = 1} ^ {T} \mathcal {L} _ {\mathcal {D} _ {i}} (\mathcal {A} (S _ {j} ^ {i})) \geq \sigma^ {2} + \frac {1}{2} (2 p - 1) ^ {2} a ^ {2} = \frac {a ^ {2} + \sigma^ {2}}{2}. +$$ + +# A.10. Proof of Lemma 4.2 + +Proof. Consider an arbitrary finite set $\mathcal{C} \subseteq \mathcal{X}$ . Denote by $d(\mathcal{C}) \coloneqq \min_{(a,b) \in \mathcal{C} \times \mathcal{C}, a \neq b} ||a - b||$ . We now consider two cases: a) $d(\mathcal{C}) < \epsilon$ and b) $d(\mathcal{C}) \geq \epsilon$ , and show that our conclusion holds for both cases. + +Case a): $d(\mathcal{C}) < \epsilon$ . Denote by $(x_{1}, x_{2}) = \operatorname*{argmin}_{(a, b) \in \mathcal{C} \times \mathcal{C}, a \neq b} ||a - b||$ . 
We can select $\mathcal{D}$ such that $\mathcal{D}(\{(x_{1}, 1)\}) = \frac{p}{2}, \mathcal{D}(\{(x_{1}, -1)\}) = \frac{1 - p}{2}$ and $\mathcal{D}(\{(x_{2}, -1)\}) = \frac{p}{2}, \mathcal{D}(\{(x_{2}, 1)\}) = \frac{1 - p}{2}$, where $4p(1 - p) = \sigma^2$ and $p > 1/2$. Consider an $L$-Lipschitz learning algorithm $\mathcal{A}(S) : \mathcal{C} \to \mathbb{R}$:

$$
\begin{array}{l} \mathbb{E}_{S \sim \mathcal{D}^n}[\mathcal{L}_{\mathcal{D}}(\mathcal{A}(S))] \\ \geq \min_{S \sim \mathcal{D}^n} \left[ \frac{p}{2}(\mathcal{A}(S)(x_1) - 1)^2 + \frac{1 - p}{2}(\mathcal{A}(S)(x_1) + 1)^2 + \frac{p}{2}(\mathcal{A}(S)(x_2) + 1)^2 + \frac{1 - p}{2}(\mathcal{A}(S)(x_2) - 1)^2 \right] \\ \geq \min_{S \sim \mathcal{D}^n} [1 - (2p - 1)|\mathcal{A}(S)(x_1) - \mathcal{A}(S)(x_2)|] \\ \geq 1 - L(2p - 1)\left|\left|x_1 - x_2\right|\right| \\ \geq 1 - L \cdot d(\mathcal{C}) \\ \geq 1 - L\epsilon \\ \geq \frac{1}{2} - L\epsilon + \sigma^2. \\ \end{array}
$$

Case b): $d(\mathcal{C}) \geq \epsilon$. We reduce from a binary classification problem with labels $\{-1,1\}$, by considering distributions $\mathcal{D}$ supported only on $\mathcal{X} \times \{-1,1\}$. Then by Lemma 4.1, for every $\mathcal{A}(S) : \mathcal{X} \to \mathbb{R}$ and every $\mathcal{C} \subseteq \mathcal{X}$ there exists $\mathcal{D}$ such that

$$
n < \frac{|\mathcal{C}|}{2} \Rightarrow \mathbb{E}_{S \sim \mathcal{D}^n}[\mathcal{L}_{\mathcal{D}}(\mathcal{A}(S))] \geq \frac{1 + \sigma^2}{2}.
$$

Notice that $\mathcal{C} \subseteq \mathcal{X}$ can be chosen arbitrarily.
Thus we have

$$
n < \max_{\mathcal{C} \subseteq \mathcal{X}, d(\mathcal{C}) \geq \epsilon} \frac{|\mathcal{C}|}{2} \Rightarrow \mathbb{E}_{S \sim \mathcal{D}^n}[\mathcal{L}_{\mathcal{D}}(\mathcal{A}(S))] \geq \frac{1 + \sigma^2}{2}.
$$

Denote the $\epsilon$-packing number of the space $(\mathcal{X}, ||\cdot||)$ by $M(\mathcal{X}, \epsilon, ||\cdot||)$. We have

$$
\max_{\mathcal{C} \subseteq \mathcal{X}, d(\mathcal{C}) \geq \epsilon} \frac{|\mathcal{C}|}{2} = M(\mathcal{X}, \epsilon, ||\cdot||) / 2.
$$

That is,

$$
n < M(\mathcal{X}, \epsilon, ||\cdot||) / 2 \Rightarrow \mathbb{E}_{S \sim \mathcal{D}^n}[\mathcal{L}_{\mathcal{D}}(\mathcal{A}(S))] \geq \frac{1 + \sigma^2}{2} \geq \frac{1}{4} + \sigma^2.
$$

Combining a) and b) yields our conclusion.

# A.11. Proof of Theorem 4.3

Proof. Consider $\mathcal{X} = \{x\in \mathbb{R}^d : ||x||\leq 1\}$. We have $M(\mathcal{X}, \eta, ||\cdot||) \geq \left(\frac{1}{\eta}\right)^d$, and thus there exists a distribution $\mathcal{D}$ such that, if $\sigma^2 \leq 0.5$,

$$
n < \frac{1}{2}\left(\frac{1}{\eta}\right)^d \Rightarrow n < M(\mathcal{X}, \eta, ||\cdot||) / 2 \Rightarrow \mathbb{E}_{S \sim \mathcal{D}^n}[\mathcal{L}_{\mathcal{D}}(\mathcal{A}(S))] \geq \min\left\{\frac{1}{4}, \frac{1}{2} - L\eta\right\} + \sigma^2.
$$

Taking $\eta = \frac{1/2 - \epsilon}{L}$ where $\epsilon \in (0, 1/2)$, we have

$$
n < \frac{1}{2}\left(\frac{2L}{1 - 2\epsilon}\right)^d \Rightarrow \mathbb{E}_{S \sim \mathcal{D}^n}[\mathcal{L}_{\mathcal{D}}(\mathcal{A}(S))] \geq \min\left\{\frac{1}{4}, \epsilon\right\} + \sigma^2.
$$

Thus, in the worst case, $n$ has to be at least $\exp(\Omega(d))$ if one wants to achieve good astuteness with any $\mathcal{O}(1)$-Lipschitz learning algorithm. This completes the proof.

# B.
Some basic concepts of Rademacher complexity

Definition B.1 (Representativeness of $S$).

$$
\mathrm{Rep}_{\mathcal{D}}(l, \mathcal{F}, S) := \sup_{f \in \mathcal{F}} (\mathcal{L}_{\mathcal{D}}(f) - \mathcal{L}_S(f)).
$$

Definition B.2 (Rademacher complexity). For $A \subseteq \mathbb{R}^n$,

$$
R(A) := \frac{1}{n} \mathbb{E}_{\sigma_1, \dots, \sigma_n \in \{-1, 1\}} \left[ \sup_{a \in A} \sum_{i=1}^{n} \sigma_i a_i \right].
$$

Lemma B.3. Assume that $\forall f\in \mathcal{F},\forall x\in \mathcal{X}, |l(f,x)|\leq c$. Then with probability at least $1 - \delta$, for all $f\in \mathcal{F}$,

$$
\mathcal{L}_{\mathcal{D}}(f) - \mathcal{L}_S(f) \leq \mathbb{E}_{S' \in \mathcal{D}^n}[\mathrm{Rep}_{\mathcal{D}}(l, \mathcal{F}, S')] + c\sqrt{\frac{2\ln(2/\delta)}{n}}.
$$

Lemma B.4 (Lemma 26.2 in Shalev-Shwartz & Ben-David (2014)).

$$
\mathbb{E}_{S \in \mathcal{D}^n}[\mathrm{Rep}_{\mathcal{D}}(l, \mathcal{F}, S)] \leq 2\mathbb{E}_{S \in \mathcal{D}^n}[R(l \circ \mathcal{F} \circ S)],
$$

where $S = \{(x_1, y_1),\dots,(x_n, y_n)\}$ and $l \circ \mathcal{F} \circ S = \{(l(f, x_1, y_1),\dots,l(f, x_n, y_n)) \in \mathbb{R}^n : f \in \mathcal{F}\}$.

Lemma B.5 (Theorem 26.5 in Shalev-Shwartz & Ben-David (2014)). Assume $\forall f\in \mathcal{F},\forall x\in \mathcal{X}, |l(f,x)|\leq a$. Then with probability at least $1 - \delta$, for all $f\in \mathcal{F}$,

$$
\mathcal{L}_{\mathcal{D}}(f) - \mathcal{L}_S(f) \leq 2\mathbb{E}_{S' \in \mathcal{D}^n}[R(l \circ \mathcal{F} \circ S')] + a\sqrt{\frac{2\ln(2/\delta)}{n}}.
$$

Lemma B.6 (Lemma 26.9 in Shalev-Shwartz & Ben-David (2014)). If $l(f(x), y)$ is $C$-Lipschitz w.r.t. $f(x)$ for arbitrary $y \in [-1, 1]$, then

$$
R(l \circ \mathcal{F} \circ S) \leq C \cdot R(\mathcal{F} \circ S).
+$$ \ No newline at end of file diff --git a/alawofrobustnessbeyondisoperimetry/images.zip b/alawofrobustnessbeyondisoperimetry/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..1a80781113ab4faefe54a1fc9c58d7f94ad3f2ad --- /dev/null +++ b/alawofrobustnessbeyondisoperimetry/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:482d34d466433451d92a879736eb500dc8abd6911ddb27135266219a2db2fd7d +size 797326 diff --git a/alawofrobustnessbeyondisoperimetry/layout.json b/alawofrobustnessbeyondisoperimetry/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..56981ee9679edd20c0527f300859dc67eeab5f5f --- /dev/null +++ b/alawofrobustnessbeyondisoperimetry/layout.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:5f39ef2a3d8be92693e990b31c5f8dc1d4db3c562b613833fad9f187e36fe306 +size 1091953 diff --git a/amathematicalmodelforcurriculumlearningforparities/bf7f9999-28af-491a-9ade-6d9505969e1b_content_list.json b/amathematicalmodelforcurriculumlearningforparities/bf7f9999-28af-491a-9ade-6d9505969e1b_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..30380d6208f640d9a3c47f4fe09fd2ba7c9754f1 --- /dev/null +++ b/amathematicalmodelforcurriculumlearningforparities/bf7f9999-28af-491a-9ade-6d9505969e1b_content_list.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:f12c9ec639d149a958597a1b6d63dc74aa6fcd3dfb7a53ea1e4cf3e12702427e +size 174115 diff --git a/amathematicalmodelforcurriculumlearningforparities/bf7f9999-28af-491a-9ade-6d9505969e1b_model.json b/amathematicalmodelforcurriculumlearningforparities/bf7f9999-28af-491a-9ade-6d9505969e1b_model.json new file mode 100644 index 0000000000000000000000000000000000000000..e8794cff8741a8ec6cf3c2a2947641b3f61ecfe1 --- /dev/null +++ b/amathematicalmodelforcurriculumlearningforparities/bf7f9999-28af-491a-9ade-6d9505969e1b_model.json @@ -0,0 +1,3 @@ +version 
https://git-lfs.github.com/spec/v1 +oid sha256:2a06de0a33f1e414ca8e4d98c5d480f6c9bfd799e80dda9ea5f92abda6a2cf70 +size 206811 diff --git a/amathematicalmodelforcurriculumlearningforparities/bf7f9999-28af-491a-9ade-6d9505969e1b_origin.pdf b/amathematicalmodelforcurriculumlearningforparities/bf7f9999-28af-491a-9ade-6d9505969e1b_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..8ecee82e5632739fd6ad39c93f37d201f1763a9a --- /dev/null +++ b/amathematicalmodelforcurriculumlearningforparities/bf7f9999-28af-491a-9ade-6d9505969e1b_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:b0c7d3befa6d4c727e319eaf2cad6adc30bdc0a68215555bd2bf8daeabe9ad97 +size 1269778 diff --git a/amathematicalmodelforcurriculumlearningforparities/full.md b/amathematicalmodelforcurriculumlearningforparities/full.md new file mode 100644 index 0000000000000000000000000000000000000000..268b536cd8bf6167a9af79106add04332a2d00b7 --- /dev/null +++ b/amathematicalmodelforcurriculumlearningforparities/full.md @@ -0,0 +1,1037 @@ +# A Mathematical Model for Curriculum Learning for Parities + +Elisabetta Cornacchia1 Elchanan Mossel2 + +# Abstract + +Curriculum learning (CL) - training using samples that are generated and presented in a meaningful order - was introduced in the machine learning context around a decade ago. While CL has been extensively used and analysed empirically, there has been very little mathematical justification for its advantages. We introduce a CL model for learning the class of $k$ -parities on $d$ bits of a binary string with a neural network trained by stochastic gradient descent (SGD). We show that a wise choice of training examples involving two or more product distributions, allows to reduce significantly the computational cost of learning this class of functions, compared to learning under the uniform distribution. 
Furthermore, we show that for another class of functions - namely the 'Hamming mixtures' - CL strategies involving a bounded number of product distributions are not beneficial.

# 1. Introduction

Several experimental studies have shown that humans and animals learn considerably better if the learning materials are presented in a curated, rather than random, order (Elio & Anderson, 1984; Ross & Kennedy, 1990; Avrahami et al., 1997; Shafto et al., 2014). This is broadly reflected in the educational system of our society, where learning is guided by a highly organized curriculum. This may involve several learning steps, with easy concepts introduced first and harder concepts built on previous stages.

Inspired by this, (Bengio et al., 2009) formalized a curriculum learning (CL) paradigm in the context of machine learning and showed that for various learning tasks it provided improvements in both the training speed and the performance obtained at convergence. This seminal paper inspired

$^{1}$ Institute of Mathematics, EPFL, Lausanne, Switzerland $^{2}$ Department of Mathematics and IDSS, MIT, US. Correspondence to: Elisabetta Cornacchia .

Proceedings of the $40^{th}$ International Conference on Machine Learning, Honolulu, Hawaii, USA. PMLR 202, 2023. Copyright 2023 by the author(s).

many subsequent works that studied curriculum learning strategies in various application domains, e.g. computer vision (Sarafianos et al., 2017; Dong et al., 2017), computational biology (Xiong et al., 2021), auto-ML (Graves et al., 2017), natural language modelling (Shi et al., 2013; Zaremba & Sutskever, 2014; Shi et al., 2015; Campos, 2021). While extensive empirical analyses of CL strategies have been carried out, there is a lack of theoretical analysis. In this paper, we make progress in this direction.

A stylized family of functions that is known to pose computational barriers is the class of $k$-parities over $d$ bits of a binary string.
In this work we focus on this class. To define this class: for each subset $S$ of coordinates, the parity over $S$ is defined as $+1$ if the number of negative bits in $S$ is even, and $-1$ otherwise, i.e. $\chi_S(x) := \prod_{i \in S} x_i$, $x_i \in \{\pm 1\}$. The class of $k$-parities contains all $\chi_S$ such that $|S| = k$ and it has cardinality $\binom{d}{k}$. Learning $k$-parities requires learning the support of $\chi_S$ by observing samples $(x, \chi_S(x))$, $x \in \{\pm 1\}^d$, with the knowledge that the cardinality of $S$ is $k$. This requires finding the right target function among the $\binom{d}{k}$ functions belonging to the class.

Learning parities is always possible, and efficiently so, by specialized methods (e.g. Gaussian elimination over the field of two elements). Moreover, (Abbe & Sandon, 2020) showed that there exists a neural net that learns parities of any degree if trained by SGD with small batch size. However, this is a rather unconventional net. In fact, under the uniform distribution, parities are not efficiently learnable by population queries with any polynomially small noise. The latter can be explained as follows. Assume we sample our binary string uniformly at random, i.e. for each $i \in \{1, \dots, d\}$, $x_i \sim \mathrm{Rad}(1/2)^1$. Then, the covariance between two parities $\chi_S, \chi_{S'}$ is given by:

$$
\mathbb{E}_{x \sim \operatorname{Rad}(1/2)^{\otimes d}} \left[ \chi_S(x) \chi_{S'}(x) \right] = \left\{ \begin{array}{ll} 1 & \text{if } S = S', \\ 0 & \text{if } S \neq S', \end{array} \right.
$$

where $x \sim \operatorname{Rad}(1/2)^{\otimes d}$ denotes the product measure such that $x_i \stackrel{iid}{\sim} \operatorname{Rad}(1/2)$, $i \in \{1, \dots, d\}$. More abstractly, a parity function of $k$ bits is uncorrelated with any function of $k - 1$ or fewer bits.
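For small $d$, this orthogonality can be verified by brute force. The snippet below is our own illustrative sketch (the helper names `parity` and `exact_covariance` are not from the paper); it computes the exact expectation by enumerating all $2^d$ inputs, so no sampling noise is involved.

```python
# Exact check that distinct parities are uncorrelated under Unif({-1,+1}^d).
from itertools import product

def parity(x, S):
    """chi_S(x) = prod_{i in S} x_i."""
    out = 1
    for i in S:
        out *= x[i]
    return out

def exact_covariance(d, S, T):
    """E[chi_S(x) * chi_T(x)] under x ~ Unif({-1,+1}^d), by enumeration."""
    total = sum(parity(x, S) * parity(x, T) for x in product((-1, 1), repeat=d))
    return total / 2 ** d

print(exact_covariance(8, {0, 1, 2}, {0, 1, 3}))  # -> 0.0 (distinct supports)
print(exact_covariance(8, {0, 1, 2}, {0, 1, 2}))  # -> 1.0 (same support)
```

Since $S \neq T$ implies $\chi_S \chi_T = \chi_{S \triangle T}$, a nonempty parity, the expectation is exactly zero, matching the display above.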
This property makes parities hard to learn for any progressive algorithm, such as gradient descent. Indeed, when trying to learn the set of relevant

$^{1}$ $z \sim \operatorname{Rad}(p)$ if $\mathbb{P}(z = 1) = 1 - \mathbb{P}(z = -1) = p$.

features, a learner cannot know how close its progressive guesses are to the true set. In other words, all wrong guesses are indistinguishable, which suggests that the learner might have to perform exhaustive search among all the $\binom{d}{k}$ sets.

The hardness of learning unbiased parities - and more generally of any class of functions with low cross-correlations - with gradient descent has been analysed e.g. in (Abbe & Sandon, 2020), where the authors show a lower bound on the computational complexity of learning low cross-correlated classes with gradient-based algorithms with bounded gradient precision. For $k$-parities, this gives a computational lower bound of $d^{\Omega(k)}$ for any architecture and initialization.

However, if we look at different product distributions, then the inner product of a monomial and a component $x_{i}$ that is inside and outside the support becomes distinguishable. Suppose the inputs are generated as $x \sim \operatorname{Rad}(p)^{\otimes d}$, for some $p \in (0,1)$. Then the covariance between $\chi_{S}$ and $\chi_{S'}$ is:

$$
\begin{array}{l} \mathbb{E}_{x \sim \operatorname{Rad}(p)^{\otimes d}} \left[ \left(\chi_S(x) - \mathbb{E}[\chi_S(x)]\right) \cdot \left(\chi_{S'}(x) - \mathbb{E}[\chi_{S'}(x)]\right) \right] \\ = \mu_p^{2k - 2|S \cap S'|} - \mu_p^{2k}, \\ \end{array}
$$

where we denoted by $\mu_p \coloneqq \mathbb{E}_{z \sim \mathrm{Rad}(p)}[z] = 2p - 1$. This implies that if, for instance, $|p - 0.5| > 0.1$, then just computing correlations with each bit will recover the parity with complexity linear in $d$ and exponential in $k$.
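The per-bit correlation computation just described can be sketched as follows. This is our own illustration, not the paper's code; `recover_support` and all the constants ($d$, $k$, $p$, the sample size) are illustrative choices. One can check that, for labels $\chi_S(x)$ under $\operatorname{Rad}(p)^{\otimes d}$, a bit $i \in S$ has covariance $\mu_p^{k-1}(1 - \mu_p^2)$ with the label while a bit outside $S$ has covariance $0$, so ranking the empirical covariances recovers $S$.

```python
# Support recovery for a biased parity by ranking per-bit covariances.
import numpy as np

def recover_support(X, y, k):
    """Return the k coordinates whose empirical covariance with the
    label y is largest in absolute value."""
    cov = np.abs((X * y[:, None]).mean(axis=0) - X.mean(axis=0) * y.mean())
    return set(np.argsort(cov)[-k:])

rng = np.random.default_rng(0)
d, k, p, n = 30, 4, 0.9, 20000
S = set(range(k))                              # true support {0,...,k-1}
X = np.where(rng.random((n, d)) < p, 1, -1)    # x_i ~ Rad(p), i.i.d.
y = X[:, sorted(S)].prod(axis=1)               # labels chi_S(x)
print(recover_support(X, y, k) == S)           # -> True
```

Here $\mu_p = 0.8$, so the support bits have covariance about $0.18$ versus $0$ for the rest, far above the sampling noise at this sample size.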
If we choose $p = 1 - 1/k$, say, we can get a complexity that is linear in $d$ and polynomial in $k$. Moreover, the statements above hold even for parities with random noise.

This may lead one to believe that learning biased parities is easy for gradient descent based methods for deep nets. Indeed, (Malach et al., 2021) showed that biased parities are learnable by SGD on a differentiable model consisting of a linear predictor and a fixed module implementing the parity. However, if we consider fully connected networks, as our experiments show (Figure 1), while gradient descent for $p$ far from one half converges efficiently to zero training loss, the learned function actually has non-negligible error when computed with respect to the uniform measure. This is intuitively related to the fact that, by concentration of measure, there are essentially no examples with Hamming weight$^2$ close to $d/2$ in the training set sampled under $\operatorname{Rad}(p)^{\otimes d}$, and therefore it is not reasonable to expect a general algorithm like gradient descent on fully connected networks (which does not know that the target function is a parity) to learn the value of the function on such inputs.

We thus propose a more subtle question: Is it possible to generate examples from different product distributions and present them in a specific order, in such a way that the error with respect to the unbiased measure becomes negligible?

As we mentioned, training on examples sampled from a biased measure is not sufficient to learn the parity under the unbiased measure. However, it does identify the support of the parity. Our curriculum learning strategy is the following: We initially train on inputs sampled from $\mathrm{Rad}(p)^{\otimes d}$ with $p$ close to 1, then we move (either gradually or by sharp steps) towards the unbiased distribution $\mathrm{Rad}(1/2)^{\otimes d}$.
We show that this strategy allows us to learn the $k$-parity problem with a computational cost of $d^{O(1)}$ with SGD on the hinge loss or on the covariance loss (see Def. 3.2). In our proof, we consider layer-wise training (similarly to e.g. (Malach et al., 2021; Malach & Shalev-Shwartz, 2020; Barak et al., 2022)) and the result is valid for any (even) $k$ and $d$.

As we mentioned earlier, the failure of learning parities under the uniform distribution from samples coming from a different product measure is due to the concentration of the Hamming weight. This leads us to consider a family of functions that we call Hamming mixtures. Given an input $x$, the output of a Hamming mixture is a parity of a subset $S$ of the coordinates, where the subset $S$ depends on the Hamming weight of $x$ (see Def. 2.4). Our intuition is based on the fact that given a polynomial number of samples from, say, the $p = 1/4$ biased measure, it is impossible to distinguish between a certain parity $\chi_S$ and a function that is $\chi_S$ for $x$'s whose Hamming weight is at most $3d/8$, and a different function $\chi_T$ for $x$'s whose Hamming weight is more than $3d/8$, for some $T$ that is disjoint from $S$. In other words, a general algorithm does not know whether there is consistency between $x$'s with different Hamming weight. We show a lower bound for learning Hamming mixtures with curriculum strategies that do not provide enough samples with the relevant Hamming weights.

Of course, curriculum learning strategies with enough learning steps allow one to obtain samples from several product distributions, and thus with all relevant Hamming weights. Therefore, we expect that CL strategies with unboundedly many learning steps will be able to learn the Hamming mixtures.

While our results are restricted to a limited and stylized setting, we believe they may open new research directions.
Indeed, we believe that our general idea of introducing correlation among subsets of the input coordinates to facilitate learning may apply to more general settings. We discuss some of these future directions in the conclusion section of the paper.

Importantly, we remark that a limitation of the curriculum strategy presented in this paper is that it requires an oracle that provides labeled samples from arbitrary product measures. However, in applications one usually has a fixed dataset and would like to select samples in a suitable order, to facilitate learning. We leave to future work the analysis of a setting where curriculum and non-curriculum have a common sampling distribution.

Contributions. Our contributions are the following.

1. We propose and formalize a mathematical model for curriculum learning;
2. We prove that our curriculum strategy allows learning $k$-parities with SGD with the hinge loss or with the covariance loss on a two-layer fully connected network with a computational cost of $d^{O(1)}$;
3. We empirically verify the effectiveness of our curriculum strategy for a set of fully connected architectures and parameters;
4. We propose a class of functions - the Hamming mixtures - that is provably not learnable by some curriculum strategies with finitely many learning steps. We conjecture that a continuous curriculum strategy (see Def. 2.6) may significantly improve the performance for learning such a class of functions.

# 1.1. Related Work

Learning parities on uniform inputs. Learning $k$-parities over $d$ bits requires determining the set of relevant features among $\binom{d}{k}$ possible sets. The statistical complexity of this problem is thus $\Theta(k \log(d))$. The computational complexity is harder to determine. $k$-parities can be solved in $d^{O(1)}$ time by specialized algorithms (e.g. Gaussian elimination) that have access to at least $d$ samples. In the statistical query (SQ) framework (Kearns, 1998) - i.e.
when the learner has access only to noisy queries over the input distribution - $k$-parities cannot be learned in fewer than $\Omega(d^k)$ computations. (Abbe & Sandon, 2020; Shalev-Shwartz et al., 2017) showed that gradient-based methods suffer from the same SQ computational lower bound if the gradient precision is not good enough. On the other hand, (Abbe & Sandon, 2020) showed that one can construct a very specific network architecture and initialization that can learn parities beyond this limit. This architecture is however far from the architectures used in practice. (Barak et al., 2022) showed that SGD with batch size $d^{\Theta(k)}$ can learn sparse $k$-parities on a small network. Moreover, they empirically provide evidence of 'hidden progress' during training, ruling out the hypothesis of SGD doing random search. (Andoni et al., 2014) showed that parities are learnable by a $d^{\Theta(k)}$-size network. The problem of learning noisy parities (even with small noise) is conjectured to be intrinsically computationally hard, even beyond SQ models (Alekhnovich, 2003).

Learning parities on non-uniform inputs. Several works showed that when the input distribution is not Unif$\{\pm 1\}^d$, then neural networks trained by gradient-based methods can efficiently learn parities. (Malach et al., 2021) showed that biased parities are learnable by SGD on a differentiable model consisting of a linear predictor and a fixed module implementing the parity. (Daniely & Malach, 2020) showed that sparse parities are learnable on a two-layer network if the input coordinates outside the support of the parity are uniformly sampled and the coordinates inside the support are correlated. To the best of our knowledge, none of these works propose a curriculum learning model to learn parities under the uniform distribution.

Curriculum learning.
Curriculum Learning (CL) in the context of machine learning has been extensively analysed from the empirical point of view (Bengio et al., 2009; Wang et al., 2021; Soviany et al., 2022). However, theoretical works on CL seem to be scarcer. In (Saglietti et al., 2022) the authors propose an analytical model for CL for functions depending on a sparse set of relevant features. In their model, easy samples have low variance on the irrelevant features, while hard samples have large variance on the irrelevant features. In contrast, our model does not require knowledge of the target task to select easy examples. In (Weinshall et al., 2018; Weinshall & Amir, 2020) the authors analyse curriculum learning strategies in convex models and show an improvement in the speed of convergence of SGD. In contrast, our work covers an intrinsically nonconvex problem. Some works also analysed variants of CL, e.g. self-paced CL (SPCL), where the curriculum is determined by both prior knowledge and the training process (Jiang et al., 2015), and implicit curriculum, where neural networks tend to consistently learn the samples in a certain order (Toneva et al., 2018). For a different purpose, (Abbe et al., 2021a; 2022a) analyse staircase functions - sums of nested monomials of increasing degree - and show that the hierarchical structure of such tasks guides SGD to learn high degree monomials. Moreover, (Refinetti et al., 2022; Kalimeris et al., 2019) show that SGD learns functions of increasing complexity during training. In a concurrent work (Abbe et al., 2023), the authors propose a curriculum learning algorithm (named 'Degree Curriculum') that consists of training on Boolean inputs of increasing Hamming weight, and they empirically show that it reduces the sample complexity of learning parities on small input dimensions. However, the paper does not include a theoretical analysis of such a curriculum.

# 2.
Definitions and Main Results

We define a curriculum strategy for learning a general Boolean target function. We will subsequently restrict our attention to the problem of learning parities or mixtures of parities. For brevity, we denote $[d] = \{1,\dots,d\}$. Assume that the network is presented with samples $(x,f(x))$, where $x\in \{\pm 1\}^d$ is a Boolean vector and $f:\{\pm 1\}^{d}\to \mathbb{R}$ is a target function that generates the labels. We consider a neural network $\mathrm{NN}(x;\theta)$, whose parameters are initialized at random from an initial distribution $P_0$, and trained by the stochastic gradient descent (SGD) algorithm, defined by:

$$
\theta^{t+1} = \theta^t - \gamma_t \frac{1}{B} \sum_{i=1}^{B} \nabla_{\theta^t} L\left(\theta^t, f, x_i^t\right), \tag{1}
$$

for all $t \in \{0, \dots, T - 1\}$, where $L$ is an almost surely differentiable loss function, $\gamma_t$ is the learning rate, $B$ is the batch size and $T$ is the total number of training steps. For brevity, we write $L(\theta^t, f, x) \coloneqq L(\mathrm{NN}(\cdot; \theta^t), f, x)$. We assume that for all $i \in [B]$, $x_i^t \stackrel{iid}{\sim} \mathcal{D}^t$, where $\mathcal{D}^t$ is a step-dependent input distribution supported on $\{\pm 1\}^d$. We define our curriculum learning strategy as follows. Recall that $z \sim \operatorname{Rad}(p)$ if $\mathbb{P}(z = 1) = 1 - \mathbb{P}(z = -1) = p$.

Definition 2.1 (r-steps curriculum learning (r-CL)). For a fixed $r \in \mathbb{N}$, let $T_{1}, \ldots, T_{r} \in \mathbb{N}$ and $p_{1}, \ldots, p_{r} \in [0,1]$. Denote by $\bar{p} := (p_{1}, \ldots, p_{r})$ and $\bar{T} := (T_{1}, \ldots, T_{r-1})$.
We say that a neural network $\mathrm{NN}(x; \theta^{t})$ is trained by SGD with an $r$-CL$(\bar{T}, \bar{p})$ if $\theta^{t}$ follows the iterations in (1) with:

$$
\mathcal{D}^t = \operatorname{Rad}(p_1)^{\otimes d}, \qquad 0 < t \leq T_1,
$$

$$
\mathcal{D}^t = \operatorname{Rad}(p_2)^{\otimes d}, \qquad T_1 < t \leq T_2,
$$

$$
\dots
$$

$$
\mathcal{D}^t = \operatorname{Rad}(p_r)^{\otimes d}, \qquad T_{r-1} < t \leq T.
$$

We say that $r$ is the number of curriculum steps.

We assume $r$ to be independent of $T$, in order to distinguish the $r$-CL from the continuous-CL (see Def. 2.6 below). We hypothesize that $r$-CL may help to learn several Boolean functions, if one chooses appropriate $r$ and $\bar{p}$. However, in this paper we focus on the problem of learning unbiased $k$-parities. For this class, we show that choosing $r = 2$, a suitable $p_1 \in (0, 1/2)$ and $p_2 = 1/2$ brings a remarkable gain in the computational complexity, compared to the standard setting with no curriculum. An interesting future direction would be studying the optimal $r$ and $\bar{p}$. Before stating our Theorem, let us clarify the generalization error that we are interested in. As mentioned before, we are interested in learning the target over the uniform input distribution.

Definition 2.2 (Generalization error). We say that SGD on a neural network $\mathrm{NN}(x;\theta)$ learns a target function $f:\{\pm 1\}^d\to \mathbb{R}$ with $r$-CL$(\bar{T},\bar{p})$ up to error $\epsilon$, if it outputs a network $\mathrm{NN}(x;\theta^T)$ such that:

$$
\mathbb{E}_{x \sim \operatorname{Rad}(1/2)^{\otimes d}} \left[ L\left(\theta^T, f, x\right) \right] \leq \epsilon, \tag{2}
$$

where $L$ is any loss function such that $\mathbb{E}_{x\sim \mathrm{Rad}(1/2)^{\otimes d}}[L(f,f,x)] = 0$.

We state here our main theoretical result informally.
We refer to Section 3.1 for the formal statement with exact exponents and remarks. + +Theorem 2.3 (Main positive result, informal). There exists a 2-CL strategy such that a 2-layer fully connected network + +of $d^{O(1)}$ size trained by SGD with batch size $d^{O(1)}$ can learn any $k$ -parity (for $k$ even) up to error $\epsilon$ in at most $d^{O(1)} / \epsilon^2$ iterations. + +Let us analyse the computational complexity of the above. At each step, the number of computations performed by a 2-layer fully connected network is given by: + +$$ +\left(d N + N\right) \cdot B, \tag {3} +$$ + +where $d$ is the input size, $N$ is the number of hidden neurons and $B$ is the batch size. Multiplying by the total number of steps and substituting the bounds from the Theorem, we get that we can learn the $k$ -parity problem with a 2-CL strategy in at most $d^{O(1)}$ total computations. Here, $O(1)$ denotes quantities that do not depend on $k$ or on $d$ , and the statement holds also for large $k, d$ . We prove the Theorem in two slightly different settings, see Section 3.1. + +One may ask whether the $r$ -CL strategy is beneficial for learning general target tasks (i.e. beyond parities). While we do not have a complete picture to answer this question, we propose a class of functions for which some $r$ -CL strategies are not beneficial. We call these functions Hamming mixtures, and we define them as follows. + +Definition 2.4 ($(S,T,\epsilon)$ -Hamming mixture). For $\epsilon \in [0,1]$ , $S, T \subseteq [d]$ , we say that $G_{S,T,\epsilon} : \{\pm 1\}^d \to \mathbb{R}$ is an $(S,T,\epsilon)$ -Hamming mixture if + +$$ +G _ {S, T, \epsilon} (x) := \chi_ {S} (x) 1 (H (x) \leq \epsilon d) + \chi_ {T} (x) 1 (H (x) > \epsilon d), +$$ + +where $H(x) \coloneqq \sum_{i=1}^{d} 1(x_i = 1)$ is the Hamming weight of $x$ , $\chi_S(x) \coloneqq \prod_{i \in S} x_i$ and $\chi_T(x) \coloneqq \prod_{i \in T} x_i$ are the parity functions over the sets $S$ and $T$ , respectively.
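To make the definition concrete, here is a minimal NumPy sketch of $G_{S,T,\epsilon}$ (the encoding of $x$ as a $\pm 1$ array and the function names are ours, not from the paper):

```python
import numpy as np

def hamming_weight(x):
    # H(x) = number of +1 coordinates of x (last axis)
    return np.sum(x == 1, axis=-1)

def chi(x, S):
    # parity chi_S(x) = prod_{i in S} x_i
    return np.prod(x[..., list(S)], axis=-1)

def hamming_mixture(x, S, T, eps):
    # G_{S,T,eps}: chi_S on inputs with H(x) <= eps*d, chi_T otherwise
    d = x.shape[-1]
    return np.where(hamming_weight(x) <= eps * d, chi(x, S), chi(x, T))
```

For instance, with $d = 6$, $\epsilon = 1/2$, $S$ the first two coordinates and $T$ the next three, the all-$(-1)$ input has Hamming weight $0$ and is labeled by $\chi_S$, while an input with four $+1$ coordinates is labeled by $\chi_T$.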
+ +The intuition of why such functions are hard for some $r$ -CL strategies is the following. Assume we train on samples $(x, G_{S,T,\epsilon}(x))$ , with $S, T$ disjoint and $\epsilon \in (0,1/2)$ . Assume that we use a 2-CL strategy and we initially train on samples $x \sim \operatorname{Rad}(p)^{\otimes d}$ for some $p < \epsilon$ . If the input dimension $d$ is large, then the Hamming weight of $x$ is with high probability concentrated around $pd$ (e.g. by Hoeffding's inequality). Thus, in the first part of training the network will see, with high probability, only samples of the type $(x, \chi_S(x))$ , and it will not see the second addend of $G_{S,T,\epsilon}$ . When we change our input distribution to $\operatorname{Rad}(1/2)^{\otimes d}$ , the network will suddenly observe samples of the type $(x, \chi_T(x))$ . Thus, the pre-training on $p$ will not help determining the support of the new parity $\chi_T$ (in some sense the network will "forget" the first part of training). This intuition holds for all $r$ -CL such that $p_1, \ldots, p_{r-1} < \epsilon$ . We state our negative result for Hamming mixtures here informally, and refer to Section 4 for a formal statement and remarks. + +Theorem 2.5 (Main negative result, informal). For each $r$ -CL strategy with $r$ bounded, there exists a Hamming mixture + +![](images/b0b9d456e9c04aa3ce985fc3dd64afe0b1c33031501b34d98100ad082ea7bc71.jpg) + +![](images/f3495f703a6d00a6e457b7e5e0f13453b1b42cdd0a419a9573d15a889a123746.jpg) + +![](images/dff14ed398188cb86adf90713ee8a57bba2e0d91055cd400316849b0cc8a9d30.jpg) + +![](images/2870db9b89d8ad30dfea9fde798c70a642576ee76264e5d0a32564d79f8323b3.jpg) +Figure 1. Learning 20-parities with 2-steps curriculum, with initial bias $p_1 = 39/40$ (top-left), $p_1 = 19/20$ (top-center), $p_1 = 1/20$ (top-right), with continuous curriculum (bottom-left) and with no curriculum (bottom-right). 
In all plots, we use a 2-layer ReLU MLP with batch size 1024, input dimension 100, and 100 hidden units. + +![](images/4271838694605099d2c95059bf9c6b36bd021806e83927faca74baf0d9269a0b.jpg) + +$G_{S,T,\epsilon}$ that is not learnable by any fully connected neural network of $\mathrm{poly}(d)$ size and permutation-invariant initialization trained by the noisy gradient descent algorithm (see Def. 4.1) with $\mathrm{poly}(d)$ gradient precision in $\mathrm{poly}(d)$ steps. + +Inspired by the hardness of Hamming mixtures, we define another curriculum learning strategy, where, instead of having finitely many discrete curriculum steps, we gradually move the bias of the input distribution during training from a starting point $p_0$ to a final point $p_T$ . We call this strategy a continuous-CL strategy. + +Definition 2.6 (Continuous curriculum learning (C-CL)). Let $p_0, p_T \in [0,1]$ . We say that a neural network $\mathrm{NN}(x; \theta^t)$ is trained by SGD with a C-CL $(p_0, p_T, T)$ if $\theta^t$ follows the iterations in (1) with: + +$$ +\mathcal {D} ^ {t} = \operatorname {R a d} \left(p _ {0} + t \cdot \frac {p _ {T} - p _ {0}}{T}\right) \quad t \in [ T ]. \tag {4} +$$ + +We conjecture that a well-chosen C-CL might be beneficial for learning any Hamming mixture. A positive result for C-CL and a comparison between $r$ -CL and C-CL are left for future work. + +# 3. Learning Parities + +# 3.1. Theoretical Results + +Our goal is to show that the curriculum strategy that we propose allows us to learn $k$ -parities with a computational complexity of $d^{O(1)}$ . We prove two different results. In the first one, we consider SGD on the hinge loss and prove that a network with $\Theta(d^2)$ hidden units can learn the $k$ -parity + +problem in $d^{\mathcal{O}(1)}$ computations, if trained with a well-chosen 2-CL strategy. Let us state our first Theorem. + +Theorem 3.1 (Hinge Loss). Let $k, d$ be both even integers, such that $k \leq d/2$ .
Let $\mathrm{NN}(x; \theta) = \sum_{i=1}^{N} a_i \sigma(w_i x + b_i)$ be a 2-layer fully connected network with activation $\sigma(y) \coloneqq \mathrm{Ramp}(y)$ (as defined in (9)) and $N = \tilde{\Theta}(d^2 \log(1/\delta))$ . Consider training $\mathrm{NN}(x; \theta)$ with SGD on the hinge loss with batch size $B = \tilde{\Theta}(d^{10}/\epsilon^2 \log(1/\delta))$ . Then, there exist an initialization, a learning rate schedule, and a 2-CL strategy such that after $T = \tilde{\Theta}(d^6/\epsilon^2)$ iterations, with probability $1 - 3\delta$ , SGD outputs a network with generalization error at most $\epsilon$ . + +For our second Theorem, we consider another loss function that is convenient for the analysis, namely the covariance loss, which we define here. + +Definition 3.2 (Covariance loss). Let $f: \mathcal{X} \to \mathbb{R}$ be a target function and let $\hat{f}: \mathcal{X} \to \mathbb{R}$ be an estimator. Let + +$$ +\begin{array}{l} \operatorname {c o v} (f, \hat {f}, x, P _ {\mathcal {X}}) := \\ \Big (f (x) - \mathbb {E} _ {x ^ {\prime} \sim P _ {\mathcal {X}}} [ f (x ^ {\prime}) ] \Big) \cdot \Big (\hat {f} (x) - \mathbb {E} _ {x ^ {\prime} \sim P _ {\mathcal {X}}} [ \hat {f} (x ^ {\prime}) ] \Big), \\ \end{array} +$$ + +where $P_{\mathcal{X}}$ is an input distribution supported on $\mathcal{X}$ . We define the covariance loss as + +$$ +L _ {\operatorname {c o v}} (f, \hat {f}, x, P _ {\mathcal {X}}) := \max \{0, 1 - \operatorname {c o v} (f, \hat {f}, x, P _ {\mathcal {X}}) \}. +$$ + +Remark 3.3. We will consider optimization over the covariance loss through SGD with large batch size ( $B = \tilde{\Theta}(d^2 k^3)$ ). At each step, we use the batch to estimate first the inner expectations (i.e. $E_x[f(x)]$ and $E_x[\mathrm{NN}(x; \theta^t)]$ ) and then the gradients. The expectation of the labels (i.e. $E_{x}[f(x)]$ ) does not need to be estimated at each training step and could be estimated once per curriculum step. One could also use part of the batch at each step to estimate the inner expectations and part of the batch to estimate the gradients. + +![](images/269fc9ba8a2eae0508e8e9302f9c4f201f88e37e75888ee22662ef7a4238125c.jpg) + +![](images/225406618f28afacce46add22a64fd64609848bca07711fe4dfae9b321b16d52.jpg) +Figure 2. Convergence time for different values of $d, k$ . Left: we take $p_1 = 1/16$ and a 2-layer ReLU architecture with $h = 2^k$ hidden units. Right: we take $p_1 = 1 - \frac{1}{2k}$ and a 2-layer ReLU architecture with $h = d$ hidden units. + +We show that SGD on the covariance loss can learn the $k$ -parity problem in $d^{O(1)}$ computations using a network with only $O(k)$ hidden units. The reduction in network size compared to the hinge loss case yields a tighter bound on the computational cost; see Remark 3.5. + +Theorem 3.4 (Covariance Loss). Let $k, d$ be integers, such that $k \leq d$ and $k$ even. Let $\mathrm{NN}(x; \theta) = \sum_{i=1}^{N} a_i \sigma(w_i x + b_i)$ be a 2-layer fully connected network with activation $\sigma(y) := \mathrm{ReLU}(y)$ and $N = \tilde{\Theta}(k)$ . Consider training $\mathrm{NN}(x; \theta)$ with SGD on the covariance loss with batch size $B = \tilde{\Theta}(d^2 k^3 / \epsilon^2 \log(1 / \delta))$ . Then, there exist an initialization, a learning rate schedule, and a 2-CL strategy such that after $T = \tilde{\Theta}(k^4 / \epsilon^2)$ iterations, with probability $1 - 3\delta$ , SGD outputs a network with generalization error at most $\epsilon$ . + +The proofs of Theorem 3.1 and Theorem 3.4 follow a similar outline. Firstly, we prove that training the first layer of the network on one batch of size $d^{O(1)}$ sampled from a biased input distribution (with appropriate bias) recovers the support of the parity. We then show that training the second layer on the uniform distribution achieves the desired generalization error under the uniform distribution.
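As a quick illustration of Definition 3.2 combined with the plug-in estimates of Remark 3.3, the per-sample covariance loss on a batch can be sketched as follows (the function name is ours):

```python
import numpy as np

def covariance_loss(f_vals, nn_vals):
    """Batch version of L_cov (Def. 3.2): the inner expectations E[f] and
    E[NN] are replaced by batch means, as suggested in Remark 3.3."""
    cov = (f_vals - f_vals.mean()) * (nn_vals - nn_vals.mean())
    return np.maximum(0.0, 1.0 - cov)
```

On a balanced batch, a perfectly correlated estimator gets zero loss on every sample, an anti-correlated one pays loss 2, and an uninformative constant estimator pays loss 1.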
We refer to Appendices A and B for restatements of the Theorems and their full proofs. + +Remark 3.5. Let us look at the computational complexity given by the two Theorems. Theorem 3.1 shows that we can learn $k$ -parities in $dNB + (T - 1)N = \tilde{\Theta}(d^{19})$ computations. We remark that our result holds also for large $k$ (we however need to assume $k, d$ even and $k \leq d/2$ , for technical reasons). On the other hand, Theorem 3.4 shows that we can learn $k$ -parities in $\tilde{\Theta}(d^3k^8)$ , which is much lower than the bound given by Theorem 3.1. Furthermore, the proof holds for all $k \leq d$ . The price for getting this tighter bound is the use of a loss that (to the best of our knowledge) is not common in the machine learning literature, and that is particularly convenient for our analysis. + +Remark 3.6. We remark that our proofs extend to the gradient descent model with bounded gradient precision, used in (Abbe & Sandon, 2020), with gradient precision bounded by $d^{O(1)}$ . Thus, for large $k, d$ , our result provides a separation from their $d^{\Omega(k)}$ computational lower bound for learning $k$ -parities under the uniform distribution with no curriculum. + +Remark 3.7. Let us comment on the $p_1$ (i.e. the bias of the initial distribution) that we used. In both Theorems we take $p_1$ close to 1. In Theorem 3.1 we take $p_1 \approx 1 - \Theta(1/d)$ , and the proof is constructed specifically for this value of $p_1$ . In Theorem 3.4, the proof holds for any $p_1 \in (1/2, 1)$ and the asymptotic complexity in $d$ does not depend on the specific choice of $p_1$ . However, to get $\mathrm{poly}(k)$ complexity we need to take $p_1 = 1 - \Theta(1/k)$ , while we get $\exp(k)$ complexity for all $p_1 = \Theta_{d,k}(1)$ . + +Our theoretical analysis captures a fairly restricted setting: in our proofs we use initializations and learning schedules that are convenient for the analysis.
We conduct experiments to verify the usefulness of our CL strategy in more standard settings of fully connected architectures. + +# 3.2. Empirical Results + +In all our experiments we use fully connected ReLU networks and we train them by SGD on the square loss. + +In Figure 1, we compare different curriculum strategies for learning 20-parities over 100 bits, with a fixed architecture, i.e. a 2-layer ReLU network with 100 hidden units. We run a 2-steps curriculum strategy for three values of $p_1$ , namely $p_1 = 39/40, 19/20, 1/20$ . In all the 2-CL experiments we train on the biased distribution until convergence, and then we move to the uniform distribution. We observe that training with an initial bias of $p_1 = 39/40$ allows learning the 20-parity in 16,000 epochs. One can see that during the first part of training (on the biased distribution), the test error under the uniform distribution stays at $1/2$ (orange line), and then drops quickly to zero when we start training on the uniform distribution. This trend of hidden progress followed by a sharp drop has already been observed in the context of learning parities with SGD in the standard setting with no curriculum (Barak et al., 2022). Here, the length of the 'hidden progress' phase is controlled by the length of the first phase of training. Interestingly, when training with continuous curriculum, we do not have such hidden progress and the test error under the uniform distribution decreases slowly to zero. With no curriculum, the network does not achieve non-trivial correlation with the target in 25,000 epochs. + +![](images/3de6822694ea6b3a4b3a2feeac8feb3e09b4a19f386a211d68c52669568ac1a2.jpg) +Figure 3. Convergence time with respect to the initial bias $p_1$ . We compute the convergence time for learning a 10-parity over 100 bits with a 2-layer ReLU network. We omitted all points with convergence time above 100,000.
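The experimental protocol above (train on the biased distribution until the training error is small, then switch to the uniform one) can be sketched as a small driver; `train_step` and `train_err` are hypothetical callbacks that close over the model state, not functions from the paper:

```python
def two_step_curriculum(train_step, train_err, p1, tol=0.01, max_steps=25_000):
    """2-CL driver: phase 1 trains on Rad(p1)-biased batches until the
    training error drops below tol, phase 2 repeats on the uniform
    distribution (bias 1/2). Returns (T1, T2), the two phase lengths."""
    T1 = 0
    while train_err(p1) > tol and T1 < max_steps:
        train_step(p1)
        T1 += 1
    T2 = 0
    while train_err(0.5) > tol and T2 < max_steps:
        train_step(0.5)
        T2 += 1
    return T1, T2
```

The convergence time reported in Figures 2 and 3 corresponds to $T_1 + T_2$ under this kind of protocol.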
+ +In Figure 2 we study the convergence time of a 2-CL strategy on a 2-layer ReLU network for different values of the input dimension $(d)$ and size of the parity $(k)$ . We consider two slightly different settings. In the plot on the left, we take a fixed initial bias $p_1 = 1/16$ and $h = 2^k$ hidden units. On the right we take $p_1 = 1 - \frac{1}{2k}$ initial bias and an architecture with $h = d$ hidden units. The convergence time is computed as $T_1 + T_2$ , where $T_1$ and $T_2$ are the number of steps needed to achieve training error below 0.01 in the first and second part of training, respectively. We compute the convergence time for $k = 5, 6, 7, 8, 9, 10$ and $d = 25, 50, 75, 100$ , and for each $k$ we plot the convergence time with respect to $d$ in log-log scale. Each point is obtained by averaging over 10 runs. We observe that for each $k$ , the convergence time scales (roughly) polynomially as $d^{c_k}$ , with $c_k$ varying mildly with $k$ . + +In Figure 3, we study the convergence time of a 2-CL strategy for different values of the initial bias $p_1$ . We consider the problem of learning a 10-parity over 100 bits with a 2-layer ReLU network with $h = 100$ hidden + +units. As before, we computed the convergence time as $T_{1} + T_{2}$ , where $T_{1}$ and $T_{2}$ are the number of steps needed to achieve training error below 0.01 in the first and second part of training, respectively. We ran experiments for $p_{1} = 0.001, 0.05, 0.1, 0.15, \ldots, 0.95, 0.999$ . We omitted from the plot any point for which the convergence time exceeded 100,000 iterations: these correspond to $p_{1}$ near $1/2$ and to $p_{1} = 0.001, 0.999$ . Each point is obtained by averaging over 10 runs. We observe that the convergence time is smaller for $p_{1}$ close to 0 or to 1. Moreover, $T_{2}$ shows modest variation across different values of $p_{1}$ . + +# 4. Learning Hamming Mixtures + +In this section we consider the class of functions defined in Def. 2.4 and named Hamming mixtures.
We consider a specific descent algorithm, namely the noisy GD algorithm with batches (used also in (Abbe & Sandon, 2020; Abbe et al., 2021b)). We give here a formal definition of noisy GD with curriculum. + +Definition 4.1 (Noisy GD with CL). Consider a neural network $\mathrm{NN}(\cdot;\theta)$ , with initialization of the weights $\theta^0$ . Given an almost surely differentiable loss function, the updates of the noisy GD algorithm with learning rate $\gamma_t$ and gradient range $A$ are defined by + +$$ +\theta^ {t + 1} = \theta^ {t} - \gamma_ {t} \left(\mathbb {E} _ {x ^ {t}} \left[ \nabla_ {\theta^ {t}} L \left(\theta^ {t}, f, x ^ {t}\right) \right] _ {A} + Z ^ {t}\right), \tag {5} +$$ + +where for all $t \in \{0, \dots, T - 1\}$ , $Z^t$ are i.i.d. $\mathcal{N}(0, \tau^2)$ , for some $\tau$ , independent of all other variables, $x^t \sim \mathcal{D}^t$ , for some time-dependent input distribution $\mathcal{D}^t$ , $f$ is the target function, from which the labels are generated, and by $[\cdot]_A$ we mean that whenever the argument exceeds $A$ (resp. falls below $-A$ ) it is clipped to $A$ (resp. $-A$ ). We call $A / \tau$ the gradient precision. In the noisy-GD algorithm with $r$ -CL, we choose $\mathcal{D}^t$ according to Def. 2.1. + +Let us state our hardness result for learning Hamming mixtures with $r$ -CL strategies with $r$ bounded. + +Theorem 4.2. Assume the network observes samples generated by $G_{S,T,\epsilon}(x)$ (see Def. 2.4), where $|S| = k_S$ , $|T| = k_T$ such that $k_S, k_T = o(\sqrt{d})$ , and $|S \cap T| = 0$ .
Then, for any $r$ - $CL(\bar{T},\bar{p})$ with $r$ bounded and $p_r = 1/2$ , there exists an $\epsilon$ such that the noisy $GD$ algorithm with $r$ - $CL(\bar{T},\bar{p})$ (as in (5)) on a fully connected neural network with $|\theta|$ weights and permutation-invariant initialization, after $T$ training steps, outputs a network $\mathrm{NN}(x,\theta^T)$ such that + +$$ +\begin{array}{l} \left| \mathbb {E} _ {x \sim \operatorname {R a d} (1 / 2) ^ {\otimes d}} \left[ G _ {S, T, \epsilon} (x) \cdot \operatorname {N N} (x; \theta^ {T}) \right] \right| \\ \leq \frac {A T \sqrt {| \theta |}}{\tau} \left(\frac {1}{d ^ {k _ {T} / 2}} + e ^ {- d \delta^ {2}}\right) + \frac {2 k _ {S} k _ {T}}{d} + O (d ^ {- 2}), \\ \end{array} +$$ + +where $A, \tau$ are the gradient range and the noise level in the noisy-GD algorithm and $\delta$ is a constant. + +The proof uses an SQ-like lower bound argument for noisy GD, in a similar flavour to (Abbe et al., 2022b; Abbe & Boix-Adsera, 2022). We refer to Appendix C for the full proof. + +Remark 4.3. In Theorem 4.2, the neural network can have any fully connected architecture and any activation such that the gradients are well defined almost everywhere. The initialization can be from any distribution that is invariant to permutations of the input neurons. + +For the purposes of $\mathbb{E}\left[G_{S,T,\epsilon}(x)\cdot \mathrm{NN}(x;\theta^T)\right]$ , it is assumed that the neural network outputs a guess in $\{\pm 1\}$ . This can be done with any form of thresholding, e.g. taking the sign of the value of the output neuron. + +Remark 4.4. One can remove the $\frac{2k_S k_T}{d}$ term in the right hand side by further assuming e.g. that the set $S$ is supported on the first $d/2$ coordinates and the second set on the last $d/2$ coordinates. This also allows weakening the assumption on the cardinalities of the two sets. We formalize this in the following Corollary, where the second set is denoted by $V$ . + +Corollary 4.5.
Assume the network observes samples generated by $G_{S,V,\epsilon}(x)$ , where $S \subseteq \{1, \dots, d/2\}$ , and $V \subseteq \{d/2 + 1, \dots, d\}$ (where we assumed $d$ to be even for simplicity). Denote $k_V = |V|$ . Then, for any $r-CL(\bar{T}, \bar{p})$ with $r$ bounded and $p_r = 1/2$ , there exists an $\epsilon$ such that the noisy GD algorithm with $r-CL(\bar{T}, \bar{p})$ (as in (5)) on a fully connected neural network with $|\theta|$ weights and permutation-invariant initialization, after $T$ training steps, outputs a network $\mathrm{NN}(x, \theta^{(T)})$ such that + +$$ +\begin{array}{l} \left| \mathbb {E} _ {x \sim \operatorname {R a d} (1 / 2) ^ {\otimes d}} \left[ G _ {S, V, \epsilon} (x) \cdot \operatorname {N N} (x; \theta^ {(T)}) \right] \right| \\ \leq \frac {2 A T \sqrt {| \theta |}}{\tau} \left(\binom {d / 2} {k _ {V}} ^ {- 1 / 2} + e ^ {- d \delta^ {2}}\right), \\ \end{array} +$$ + +for some $\delta > 0$ . + +The proof of Corollary 4.5 is deferred to Appendix D. + +Theorem 4.2 states failure even at the weakest form of learning, i.e. achieving correlation better than guessing, in the limit of large $d$ . More specifically, it shows that if the network size, the number of training steps and the gradient precision (i.e. $A / \tau$ ) are such that $\frac{AT\sqrt{|\theta|}}{\tau} = o(d^{k_T / 2})$ , then the network achieves correlation with the target under the uniform distribution of $o_d(1)$ . Corollary 4.6 follows immediately from the Theorem. + +Corollary 4.6. Under the assumptions of Theorem 4.2, if $k_{T} = \omega_{d}(1)$ (i.e. $k_{T}$ grows with $d$ ) and $|\theta|, A / \tau, T$ are all polynomially bounded in $d$ , then + +$$ +\left| \mathbb {E} _ {x \sim \operatorname {R a d} (1 / 2) ^ {\otimes d}} \left[ G _ {S, T, \epsilon} (x) \cdot \mathrm {N N} (x; \theta^ {T}) \right] \right| = o _ {d} (1), \tag {6} +$$ + +i.e.
in $\mathrm{poly}(d)$ computations the network will fail at weakly learning $G_{S,T,\epsilon}$ . + +We conjecture that if we take instead a C-CL strategy with an unbounded number of curriculum steps, we can learn efficiently (i.e. in $\mathrm{poly}(d)$ time) any $G_{S,T,\epsilon}$ (even with $k_{T} = \omega_{d}(1)$ and for any $\epsilon$ ). Furthermore, we believe this conjecture to hold for any bounded mixture, i.e. any function of the type: + +$$ +\sum_ {m = 1} ^ {M} \chi_ {S _ {m}} (x) 1 \left(\epsilon_ {m - 1} d \leq H (x) < \epsilon_ {m} d\right), \tag {7} +$$ + +with $S_{1},\ldots ,S_{M}$ being distinct sets of coordinates, $0 = \epsilon_0 < \epsilon_{1} < \dots < \epsilon_{M}\leq 1$ , and $M$ bounded. + +# 5. Conclusion and Future Work + +In this work, we mainly focused on learning parities and Hamming mixtures with $r$ -CL strategies with bounded $r$ . Some natural questions arise, for instance: does the depth of the network help? What is the optimal number of curriculum steps for learning parities? We leave to future work the analysis of C-CL with unboundedly many curriculum steps and the comparison between $r$ -CL and C-CL. In the previous Section, we also raised a conjecture concerning the specific case of Hamming mixtures. + +Furthermore, we believe that our results can be extended to more general families of functions. First, consider the set of $k$ -Juntas, i.e., the set of functions that depend on $k$ out of $d$ coordinates. This set of functions contains the set of $k$ -parities, so it is at least as hard to learn. Moreover, as in the case of parities, Juntas are correlated with each of their inputs for generic $p$ , see e.g. (Mossel et al., 2004). So it is natural to expect that curriculum learning can learn such functions in time $d^{O(1)}2^{O(k)}$ (the second term is needed since there is a doubly exponential number of Juntas on $k$ bits).
In this work we propose to learn parities using a mixture of product distributions, but there are other ways to correlate samples that may be of interest. For example, some works in PAC learning showed that, even for the uniform measure, samples that are generated by a random walk often lead to better learning algorithms (Bshouty et al., 2005; Arpe & Mossel, 2008). Do such random-walk-based algorithms provide better convergence for gradient-based methods? + +We further believe that a similar idea to the one presented in this paper can be applied to product distributions with an orthogonal basis (such as Hermite polynomials for the i.i.d. standard Gaussian distribution or spherical harmonics for the uniform distribution over a sphere). These basis elements are no longer orthogonal under biased distributions, and we anticipate that the main steps of our proof would extend to these scenarios. However, in real-world datasets, input coordinates are often not i.i.d., and each coordinate may depend on multiple other coordinates. Nevertheless, we are hopeful that in certain real-world datasets it may be possible to identify easy and hard samples by means of the variance of the input coordinates (i.e. $\frac{1}{d - 1}\sum_{i = 1}^{d}(x_i - \bar{x})^2$ for $x\in \mathbb{R}^d$ ). For instance, consider a task where a learner is required to identify a small object in an image (e.g. a 'stop' sign or a traffic light). In each image, the learner has to identify the relevant subset of coordinates and, intuitively, this is easier in images where the background is plain (samples with low variance) than in images where the background is noisy (samples with large variance). + +To conclude, we remark that an important limitation of the curriculum strategy presented in this paper is that it requires an oracle that provides labeled samples from arbitrary product measures.
However, in applications one usually has a fixed dataset and would like to select samples in a suitable order, to facilitate learning. It would be an interesting future direction to consider settings where curriculum and non-curriculum have a common sampling distribution. + +# Acknowledgement + +This work was supported in part by the Simons-NSF Collaboration on the Theoretical Foundations of Deep Learning (deepfoundations.ai). It started while E.C. was visiting the MIT Institute for Data, Systems, and Society (IDSS) under the support of the collaboration grant. E.M is also partially supported by the Vannevar Bush Faculty Fellowship award ONR-N00014-20-1-2826 and by a Simons Investigator Award in Mathematics (622132). + +# References + +Abbe, E. and Boix-Adsera, E. On the non-universality of deep learning: quantifying the cost of symmetry. arXiv preprint arXiv:2208.03113, 2022. +Abbe, E. and Sandon, C. On the universality of deep learning. In Advances in Neural Information Processing Systems, volume 33, pp. 20061-20072, 2020. +Abbe, E., Boix Adsera, E., Brennan, M., Bresler, G., and Nagaraj, D. The staircase property: How hierarchical structure can guide deep learning. In Advances in Neural Information Processing Systems, volume 34, 2021a. +Abbe, E., Kamath, P., Malach, E., Sandon, C., and Srebro, N. On the power of differentiable learning versus PAC and SQ learning. In Advances in Neural Information Processing Systems, volume 34, 2021b. +Abbe, E., Adsera, E. B., and Misiakiewicz, T. The merged-staircase property: a necessary and nearly sufficient condition for sgd learning of sparse functions on two-layer neural networks. In Conference on Learning Theory, pp. 4782-4887. PMLR, 2022a. + +Abbe, E., Cornacchia, E., Hazla, J., and Marquis, C. An initial alignment between neural network and target is needed for gradient descent to learn. In International Conference on Machine Learning, pp. 33-52. PMLR, 2022b. +Abbe, E., Bengio, S., Lotfi, A., and Rizk, K. 
Generalization on the unseen, logic reasoning and degree curriculum. arXiv preprint arXiv:2301.13105, 2023. +Alekhnovich, M. More on average case vs approximation complexity. In 44th Annual IEEE Symposium on Foundations of Computer Science, 2003. Proceedings., pp. 298-307. IEEE, 2003. +Andoni, A., Panigrahy, R., Valiant, G., and Zhang, L. Learning polynomials with neural networks. In International conference on machine learning, pp. 1908-1916. PMLR, 2014. +Arpe, J. and Mossel, E. Agnostically learning juntas from random walks. arXiv preprint arXiv:0806.4210, 2008. +Avrahami, J., Kareev, Y., Bogot, Y., Caspi, R., Dunaevsky, S., and Lerner, S. Teaching by examples: Implications for the process of category acquisition. The Quarterly Journal of Experimental Psychology Section A, 50(3): 586-606, 1997. +Barak, B., Edelman, B. L., Goel, S., Kakade, S., Malach, E., and Zhang, C. Hidden progress in deep learning: Sgd learns parities near the computational limit. arXiv preprint arXiv:2207.08799, 2022. +Bengio, Y., Louradour, J., Collobert, R., and Weston, J. Curriculum learning. In Proceedings of the 26th annual international conference on machine learning, pp. 41-48, 2009. +Bshouty, N. H., Mossel, E., O'Donnell, R., and Servedio, R. A. Learning dnf from random walks. Journal of Computer and System Sciences, 71(3):250-265, 2005. +Campos, D. Curriculum learning for language modeling. arXiv preprint arXiv:2108.02170, 2021. +Daniely, A. and Malach, E. Learning parities with neural networks. Advances in Neural Information Processing Systems, 33:20356-20365, 2020. +Dong, Q., Gong, S., and Zhu, X. Multi-task curriculum transfer deep learning of clothing attributes. In 2017 IEEE Winter Conference on Applications of Computer Vision (WACV), pp. 520-529. IEEE, 2017. +Elio, R. and Anderson, J. R. The effects of information order and learning mode on schema abstraction. Memory & cognition, 12(1):20-30, 1984. + +Graves, A., Bellemare, M. G., Menick, J., Munos, R., and Kavukcuoglu, K. 
Automated curriculum learning for neural networks. In international conference on machine learning, pp. 1311-1320. PMLR, 2017. +Jiang, L., Meng, D., Zhao, Q., Shan, S., and Hauptmann, A. G. Self-paced curriculum learning. In Twenty-ninth AAAI conference on artificial intelligence, 2015. +Kalimeris, D., Kaplun, G., Nakkiran, P., Edelman, B., Yang, T., Barak, B., and Zhang, H. Sgd on neural networks learns functions of increasing complexity. Advances in neural information processing systems, 32, 2019. +Kearns, M. Efficient noise-tolerant learning from statistical queries. Journal of the ACM, 45(6):983-1006, 1998. +Malach, E. and Shalev-Shwartz, S. Computational separation between convolutional and fully-connected networks. arXiv preprint arXiv:2010.01369, 2020. +Malach, E., Kamath, P., Abbe, E., and Srebro, N. Quantifying the benefit of using differentiable learning over tangent kernels. In International Conference on Machine Learning, pp. 7379-7389. PMLR, 2021. +Mossel, E., O'Donnell, R., and Servedio, R. A. Learning functions of k relevant variables. Journal of Computer and System Sciences, 69(3):421-434, 2004. +Refinetti, M., Ingrosso, A., and Goldt, S. Neural networks trained with sgd learn distributions of increasing complexity. arXiv preprint arXiv:2211.11567, 2022. +Ross, B. H. and Kennedy, P. T. Generalizing from the use of earlier examples in problem solving. Journal of Experimental Psychology: Learning, Memory, and Cognition, 16(1):42, 1990. +Saglietti, L., Mannelli, S. S., and Saxe, A. An analytical theory of curriculum learning in teacher-student networks. Journal of Statistical Mechanics: Theory and Experiment, 2022(11):114014, 2022. +Sarafianos, N., Giannakopoulos, T., Nikou, C., and Kakadiaris, I. A. Curriculum learning for multi-task classification of visual attributes. In Proceedings of the IEEE International Conference on Computer Vision Workshops, pp. 2608-2615, 2017. +Shafto, P., Goodman, N. D., and Griffiths, T. L. 
A rational account of pedagogical reasoning: Teaching by, and learning from, examples. Cognitive psychology, 71:55-89, 2014. +Shalev-Shwartz, S. and Ben-David, S. Understanding machine learning: From theory to algorithms. Cambridge university press, 2014. + +Shalev-Shwartz, S., Shamir, O., and Shammah, S. Failures of gradient-based deep learning. In International Conference on Machine Learning, pp. 3067-3075. PMLR, 2017. +Shalev-Shwartz, S. et al. Online learning and online convex optimization. Foundations and Trends® in Machine Learning, 4(2):107-194, 2012. +Shi, Y., Larson, M., and Jonker, C. M. K-component recurrent neural network language models using curriculum learning. In 2013 IEEE Workshop on Automatic Speech Recognition and Understanding, pp. 1-6. IEEE, 2013. +Shi, Y., Larson, M., and Jonker, C. M. Recurrent neural network language model adaptation with curriculum learning. Computer Speech & Language, 33(1):136-154, 2015. +Soviany, P., Ionescu, R. T., Rota, P., and Sebe, N. Curriculum learning: A survey. International Journal of Computer Vision, pp. 1-40, 2022. +Toneva, M., Sordoni, A., Combes, R. T. d., Trischler, A., Bengio, Y., and Gordon, G. J. An empirical study of example forgetting during deep neural network learning. arXiv preprint arXiv:1812.05159, 2018. +Wang, X., Chen, Y., and Zhu, W. A survey on curriculum learning. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2021. +Weinshall, D. and Amir, D. Theory of curriculum learning, with convex loss functions. Journal of Machine Learning Research, 21(222):1-19, 2020. +Weinshall, D., Cohen, G., and Amir, D. Curriculum learning by transfer learning: Theory and experiments with deep networks. In International Conference on Machine Learning, pp. 5238-5246. PMLR, 2018. +Xiong, Y., He, X., Zhao, D., Tian, T., Hong, L., Jiang, T., and Zeng, J. Modeling multi-species rna modification through multi-task curriculum learning. *Nucleic acids research*, 49(7):3719-3734, 2021. +Zaremba, W. 
and Sutskever, I. Learning to execute. arXiv preprint arXiv:1410.4615, 2014.

# A. Proof of Theorem 3.1

Theorem A.1 (Theorem 3.1, restatement). Let $k, d$ both be even integers such that $k \leq d/2$. Let $\mathrm{NN}(x; \theta) = \sum_{i=1}^{N} a_i \sigma(w_i x + b_i)$ be a 2-layer fully connected network with activation $\sigma(y) := \mathrm{Ramp}(y)$ (as defined in (9)) and $N \geq (d+1)(d-k+1) \log((d+1)(d-k+1)/\delta)$. Consider training $\mathrm{NN}(x; \theta)$ with SGD on the hinge loss with batch size $B \geq (8\zeta^2 N^2)^{-1} \log\left(\frac{Nd+N}{\delta}\right)$, with $\zeta \leq \frac{\epsilon \mu^k}{24(d+1)^2(d-k+1)^2 N}$ and $\mu = \sqrt{1 - \frac{1}{2(d-k)}}$. Then, there exist an initialization, a learning rate schedule, and a 2-CL strategy such that after $T \geq \frac{64}{\epsilon^2} (d-k+1)^3 (d+1) N$ iterations, with probability $1 - 3\delta$ SGD outputs a network with generalization error at most $\epsilon$.

# A.1. Proof Setup

We consider a 2-layer neural network, defined as:

$$
\mathrm{NN}(x; \theta) = \sum_{i=1}^{N} a_i \sigma\left(w_i x + b_i\right), \tag{8}
$$

where $N$ is the number of hidden units, $\theta = (a, b, w)$, and $\sigma := \mathrm{Ramp}$ denotes the activation defined as:

$$
\operatorname{Ramp}(x) = \left\{\begin{array}{ll} 0 & x \leq 0, \\ x & 0 < x \leq 1, \\ 1 & x > 1. \end{array}\right. \tag{9}
$$

Without loss of generality, we assume that the labels are generated by $\chi_{[k]}(x) := \prod_{i=1}^{k} x_i$. Indeed, SGD on fully connected networks with permutation-invariant initialization is invariant to permutations of the input neurons, so our result holds for all $\chi_S(x)$ such that $|S| = k$. Our proof scheme is the following:

1.
We train only the first layer of the network for one step on data $(x_i, \chi_{[k]}(x_i))_{i \in [B]}$ with $x_i \sim \mathrm{Rad}(p)^{\otimes d}$ for $i \in [B]$, where $p = \frac{1}{2}\sqrt{1 - \frac{1}{2(d-k)}} + \frac{1}{2}$;
2. We show that after one step of training on this biased distribution, the target parity belongs to the linear span of the hidden units of the network;
3. We subsequently train only the second layer of the network on $(x_i, \chi_{[k]}(x_i))_{i \in [B]}$ with $x_i \sim \mathrm{Rad}(1/2)^{\otimes d}$ for $i \in [B]$, until convergence;
4. We use established results on the convergence of SGD on convex losses to conclude.

We train our network with SGD on the hinge loss. Specifically, we apply the following updates, for all $t \in \{0, 1, \dots, T-1\}$:

$$
w_{i,j}^{t+1} = w_{i,j}^{t} - \gamma_t \frac{1}{B} \sum_{s=1}^{B} \nabla_{w_{i,j}^{t}} L(\theta^t, \chi_{[k]}, x_s^t),
$$

$$
a_i^{t+1} = a_i^{t} - \xi_t \frac{1}{B} \sum_{s=1}^{B} \nabla_{a_i^{t}} L\left(\theta^t, \chi_{[k]}, x_s^t\right) + c_t, \tag{10}
$$

$$
b_i^{t+1} = \lambda_t \left(b_i^{t} + \psi_t \frac{1}{B} \sum_{s=1}^{B} \nabla_{b_i^{t}} L\left(\theta^t, \chi_{[k]}, x_s^t\right)\right) + d_t,
$$

where $L(\theta^t, \chi_{[k]}, x) = \max\{0, 1 - \chi_{[k]}(x)\,\mathrm{NN}(x; \theta^t)\}$. Following the two-step curriculum strategy introduced above, we set

$$
x_s^{0} \stackrel{iid}{\sim} \operatorname{Rad}(p)^{\otimes d} \quad \forall s \in [B], \tag{11}
$$

$$
x_s^{t} \stackrel{iid}{\sim} \operatorname{Rad}(1/2)^{\otimes d} \quad \forall t \geq 1, s \in [B], \tag{12}
$$

where $p = \frac{1}{2}\sqrt{1 - \frac{1}{2(d-k)}} + \frac{1}{2}$. For brevity, we denote $\mu := 2p - 1 = \sqrt{1 - \frac{1}{2(d-k)}}$.
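A key identity behind the first curriculum step is that, under the biased distribution, $\mathbb{E}_{x \sim \mathrm{Rad}(p)^{\otimes d}}[\chi_S(x)] = \mu^{|S|}$: each coordinate has mean $\mu = 2p - 1$ and the coordinates are independent. A quick numerical sanity check of this identity (an illustrative sketch, not code from the paper; the dimensions chosen are arbitrary):

```python
import numpy as np

# Check E[chi_S(x)] = mu^{|S|} for x ~ Rad(p)^{tensor d}, the identity used
# below to compute the population gradients after the first (biased) step.
rng = np.random.default_rng(0)

d, k = 20, 4
p = 0.5 * np.sqrt(1 - 1 / (2 * (d - k))) + 0.5  # bias of the first curriculum step
mu = 2 * p - 1                                   # per-coordinate mean

n = 200_000
x = np.where(rng.random((n, d)) < p, 1.0, -1.0)  # x_i = +1 w.p. p, else -1

chi_k = x[:, :k].prod(axis=1)                    # chi_[k](x) = prod_{i <= k} x_i
empirical = chi_k.mean()
print(empirical, mu**k)                          # the two values nearly coincide
```

Since the coordinates are independent, the same check works for any subset $S$; the gap between the two printed values shrinks at rate $O(n^{-1/2})$.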
We set the parameters of SGD to:

$$
\gamma_0 = 2N\mu^{-(k-1)}, \quad \gamma_t = 0 \quad \forall t \geq 1, \tag{13}
$$

$$
\xi_0 = 0, \quad \xi_t = \frac{\epsilon}{2N} \quad \forall t \geq 1, \tag{14}
$$

$$
\psi_0 = \frac{N}{\mu^k}, \quad \psi_t = 0 \quad \forall t \geq 1, \tag{15}
$$

$$
c_0 = -\frac{1}{2N}, \quad c_t = 0 \quad \forall t \geq 1, \tag{16}
$$

$$
\lambda_0 = (d+1), \quad \lambda_t = 1 \quad \forall t \geq 1, \tag{17}
$$

$$
d_0 = 0, \quad d_t = 0 \quad \forall t \geq 1, \tag{18}
$$

and we consider the following initialization scheme:

$$
w_{i,j}^{(0)} = 0 \quad \forall i \in [N], j \in [d];
$$

$$
a_i^{(0)} = \frac{1}{2N} \quad \forall i \in [N]; \tag{19}
$$

$$
b_i^{(0)} \sim \mathrm{Unif}\left\{\frac{b_{lm}}{d+1} + \frac{1}{2} : l \in \{0, \dots, d\}, m \in \{-1, \dots, d-k\}\right\},
$$

where we define

$$
b_{lm} := -d + 2l - \frac{1}{2} + \frac{m+1}{d-k}. \tag{20}
$$

Note that such an initialization is invariant to permutations of the input neurons. We choose it because it is convenient for our proof technique. We believe that the argument may generalize to more standard initializations (e.g. uniform, Gaussian); however, this would require more work and may not be a trivial extension.

# A.2. First Step: Recovering the Support

As mentioned above, we train our network for one step on $(x_i, \chi_{[k]}(x_i))_{i \in [B]}$ with $x_i \sim \mathrm{Rad}(p)^{\otimes d}$.

Population gradient at initialization. Let us compute the population gradient at initialization. Since we set $\xi_0 = 0$, we do not need to compute the initial gradient for $a$. Note that at initialization $|\mathrm{NN}(x; \theta^0)| < 1$.
Thus, the initial population gradients are given by

$$
\forall j \in [k], i \in [N] \quad G_{w_{i,j}} = -a_i \mathbb{E}_{x \sim \operatorname{Rad}(p)^{\otimes d}}\left[\prod_{l \in [k] \setminus \{j\}} x_l \cdot 1(\langle w_i, x \rangle + b_i \in [0,1])\right] \tag{21}
$$

$$
\forall j \notin [k], i \in [N] \quad G_{w_{i,j}} = -a_i \mathbb{E}_{x \sim \operatorname{Rad}(p)^{\otimes d}}\left[\prod_{l \in [k] \cup \{j\}} x_l \cdot 1(\langle w_i, x \rangle + b_i \in [0,1])\right] \tag{22}
$$

$$
\forall i \in [N] \quad G_{b_i} = -a_i \mathbb{E}_{x \sim \operatorname{Rad}(p)^{\otimes d}}\left[\prod_{l \in [k]} x_l \cdot 1(\langle w_i, x \rangle + b_i \in [0,1])\right] \tag{23}
$$

Lemma A.2. Initialize $a, b, w$ according to (19). Then,

$$
\forall j \in [k], \quad G_{w_{i,j}} = -\frac{\mu^{k-1}}{2N}; \tag{24}
$$

$$
\forall j \notin [k], \quad G_{w_{i,j}} = -\frac{\mu^{k+1}}{2N}; \tag{25}
$$

$$
G_{b_i} = -\frac{\mu^{k}}{2N}. \tag{26}
$$

Proof. If we initialize according to (19), we have $\langle w_i, x \rangle + b_i \in [0,1]$ for all $i$. The results follow since $\mathbb{E}_{x \sim \mathrm{Rad}(p)^{\otimes d}}[\chi_S(x)] = \mu^{|S|}$.

Effective gradient at initialization.

Lemma A.3. Let

$$
\hat{G}_{w_{i,j}} := \frac{1}{B} \sum_{s=1}^{B} \nabla_{w_{i,j}^{0}} L\left(\theta^0, \chi_{[k]}, x_s^0\right) \tag{27}
$$

$$
\hat{G}_{b_i} := \frac{1}{B} \sum_{s=1}^{B} \nabla_{b_i^{0}} L\left(\theta^0, \chi_{[k]}, x_s^0\right) \tag{28}
$$

be the effective gradients at initialization.
If $B \geq (8\zeta^2 N^2)^{-1} \log\left(\frac{Nd+N}{\delta}\right)$, then with probability $1 - 2\delta$, for all $i \in [N], j \in [d]$,

$$
\left|\hat{G}_{w_{i,j}} - G_{w_{i,j}}\right| \leq \zeta, \tag{29}
$$

$$
\left|\hat{G}_{b_i} - G_{b_i}\right| \leq \zeta, \tag{30}
$$

where $G_{w_{i,j}}$, $G_{b_i}$ are the population gradients.

Proof. We note that $\mathbb{E}[\hat{G}_{w_{i,j}}] = G_{w_{i,j}}$, $\mathbb{E}[\hat{G}_{b_i}] = G_{b_i}$, and $|\hat{G}_{w_{i,j}}|, |\hat{G}_{b_i}| \leq \frac{1}{2N}$. Thus, by Hoeffding's inequality,

$$
\mathbb{P}_{x \sim \operatorname{Rad}(p)^{\otimes d}}\left(\left|\hat{G}_{w_{i,j}} - G_{w_{i,j}}\right| \geq \zeta\right) \leq 2\exp\left(-8\zeta^2 N^2 B\right) \leq \frac{2\delta}{Nd+N}, \tag{31}
$$

$$
\mathbb{P}_{x \sim \operatorname{Rad}(p)^{\otimes d}}\left(\left|\hat{G}_{b_i} - G_{b_i}\right| \geq \zeta\right) \leq 2\exp\left(-8\zeta^2 N^2 B\right) \leq \frac{2\delta}{Nd+N}. \tag{32}
$$

The result follows by a union bound.

Lemma A.4. Let

$$
w_{i,j}^{(1)} = w_{i,j}^{(0)} - \gamma_0 \hat{G}_{w_{i,j}} \tag{33}
$$

$$
b_i^{(1)} = \lambda_0\left(b_i^{(0)} + \psi_0 \hat{G}_{b_i}\right) \tag{34}
$$

$$
a_i^{(1)} = a_i^{(0)} + c_0 = 0, \tag{35}
$$

where (35) holds since $\xi_0 = 0$ and $c_0 = -\frac{1}{2N}$. If $B \geq (8\zeta^2 N^2)^{-1} \log\left(\frac{Nd+N}{\delta}\right)$, with probability $1 - 2\delta$:

i) For all $j \in [k], i \in [N]$, $|w_{i,j}^{(1)} - 1| \leq \frac{2N\zeta}{\mu^{k-1}}$;
ii) For all $j \notin [k]$, $|w_{i,j}^{(1)} - (1 - \frac{1}{2(d-k)})| \leq \frac{2N\zeta}{\mu^{k-1}}$;
iii) For all $i \in [N]$, $|b_i^{(1)} - (d+1)(b_i^{(0)} - \frac{1}{2})| \leq \frac{N(d+1)\zeta}{\mu^k}$.

Proof.
We apply Lemma A.3:

i) For all $j \in [k], i \in [N]$, $|w_{i,j}^{(1)} - 1| = \gamma_0 |\hat{G}_{w_{i,j}} - G_{w_{i,j}}| \leq \frac{2N\zeta}{\mu^{k-1}}$;
ii) For all $j \notin [k], i \in [N]$, $|w_{i,j}^{(1)} - (1 - \frac{1}{2(d-k)})| = \gamma_0 |\hat{G}_{w_{i,j}} - G_{w_{i,j}}| \leq \frac{2N\zeta}{\mu^{k-1}}$;
iii) For all $i \in [N]$,

$$
\begin{array}{l}
\left|b_i^{(1)} - (d+1)\left(b_i^{(0)} - \frac{1}{2}\right)\right| = \left|\lambda_0\left(b_i^{(0)} + \psi_0 \hat{G}_{b_i}\right) - \lambda_0\left(b_i^{(0)} + \psi_0 G_{b_i}\right)\right| \quad (36) \\
\leq |\lambda_0| \cdot |\psi_0| \cdot |\hat{G}_{b_i} - G_{b_i}| \quad (37) \\
\leq \frac{N(d+1)\zeta}{\mu^k}. \quad (38)
\end{array}
$$

Lemma A.5. If $N \geq (d+1)(d-k+1)\log((d+1)(d-k+1)/\delta)$, then with probability $1 - \delta$, for all $l \in \{0, \dots, d\}$ and all $m \in \{-1, \dots, d-k\}$ there exists $i$ such that $b_i^{(0)} = \frac{b_{lm}}{d+1} + \frac{1}{2}$.

Proof. For any fixed pair $(l, m)$, the probability that no such $i$ exists is

$$
\left(1 - \frac{1}{(d+1)(d-k+1)}\right)^N \leq \exp\left(-\frac{N}{(d+1)(d-k+1)}\right) \leq \frac{\delta}{(d+1)(d-k+1)}. \tag{39}
$$

The result follows by a union bound.

Lemma A.6. Let $\sigma_{lm}(x) = \mathrm{Ramp}\left(\sum_{j=1}^{d} x_j - \frac{1}{2(d-k)} \sum_{j>k} x_j + b_{lm}\right)$, with $b_{lm}$ given in (20). If $B \geq (8\zeta^2 N^2)^{-1} \log\left(\frac{Nd+N}{\delta}\right)$ and $N \geq (d+1)(d-k+1)\log\left((d+1)(d-k+1)/\delta\right)$, with probability $1 - 3\delta$, for all $l, m$ there exists $i$ such that

$$
\left|\sigma_{lm}(x) - \operatorname{Ramp}\left(\sum_{j=1}^{d} w_{i,j}^{(1)} x_j + b_i^{(1)}\right)\right| \leq 3N(d+1)\zeta\mu^{-k}. \tag{40}
$$

Proof.
By Lemma A.5, with probability $1 - \delta$, for all $l, m$ there exists $i$ such that $b_i^{(0)} = \frac{b_{lm}}{d+1} + \frac{1}{2}$. For ease of notation, we replace indices $i \mapsto (lm)$, and denote $\hat{\sigma}_{lm}(x) = \mathrm{Ramp}\left(\sum_{j=1}^{d} w_{lm,j}^{(1)} x_j + b_{lm}^{(1)}\right)$. Then, by Lemma A.4, with probability $1 - 2\delta$,

$$
\begin{array}{l}
\left|\sigma_{lm}(x) - \hat{\sigma}_{lm}(x)\right| \leq \left|\sum_{j=1}^{k}\left(w_{lm,j}^{(1)} - 1\right) x_j + \sum_{j=k+1}^{d}\left(w_{lm,j}^{(1)} - \left(1 - \frac{1}{2(d-k)}\right)\right) x_j + b_{lm}^{(1)} - b_{lm}\right| \quad (41) \\
\leq 2kN\zeta\mu^{-(k-1)} + 2(d-k)N\zeta\mu^{-(k-1)} + N(d+1)\zeta\mu^{-k} \quad (42) \\
\leq 3N(d+1)\zeta\mu^{-k}. \quad (43)
\end{array}
$$

Lemma A.7. There exists $a^*$ with $\|a^*\|_\infty \leq 4(d-k)$ such that

$$
\sum_{l=0}^{d} \sum_{m=-1}^{d-k} a_{lm}^* \sigma_{lm}(x) = \chi_{[k]}(x). \tag{44}
$$

Proof. Recall that we assumed $d, k$ even and that

$$
\sigma_{lm}(x) = \operatorname{Ramp}\left(\sum_{j=1}^{d} x_j - \frac{1}{2(d-k)} \sum_{j>k} x_j + b_{lm}\right), \tag{45}
$$

where $b_{lm} = -d + 2l - \frac{1}{2} + \frac{m+1}{d-k}$ for $l \in \{0, \dots, d\}$, $m \in \{-1, \dots, d-k\}$, and $\operatorname{Ramp}$ is as defined in (9).

Let $\sum_{j=1}^{d} x_j = d - 2t$, where $t$ is the number of $-1$ coordinates, and similarly let $-\sum_{j=k+1}^{d} x_j = (d-k) - 2s$, where $s$ is the number of $+1$ coordinates outside the support of the parity $\chi_{[k]}(x)$. We have,

$$
\sigma_{lm}(x) = \operatorname{Ramp}\left(2(l-t) + \frac{m+1-s}{d-k}\right).
\tag{46}
$$

We take

$$
a_{lm}^* = (-1)^l (-1)^m \, 2(d-k) \quad \forall l \in \{0, \dots, d\}, \; m = -1, \tag{47}
$$

$$
a_{lm}^* = (-1)^l (-1)^m \, 4(d-k) \quad \forall l \in \{0, \dots, d\}, \; m \in \{0, 1, \dots, d-k-2\}, \tag{48}
$$

$$
a_{lm}^* = (-1)^l (-1)^m \, 3(d-k) \quad \forall l \in \{0, \dots, d\}, \; m = d-k-1, \tag{49}
$$

$$
a_{lm}^* = (-1)^l (-1)^m \, (d-k) \quad \forall l \in \{0, \dots, d\}, \; m = d-k. \tag{50}
$$

Note that for all $l < t$,

$$
2(l-t) + \frac{m+1-s}{d-k} \leq -2 + \frac{d-k+1}{d-k} \leq 0, \tag{51}
$$

thus $\sigma_{lm}(x) = 0$ for all $m$. Moreover, for all $l > t$,

$$
2(l-t) + \frac{m+1-s}{d-k} \geq 2 - \frac{d-k}{d-k} = 1. \tag{52}
$$

Thus, $\sigma_{lm}(x) = 1$ for all $m$ and

$$
\sum_{m=-1}^{d-k} a_{lm}^* \sigma_{lm}(x) = \sum_{m=-1}^{d-k} a_{lm}^* = 0. \tag{53}
$$

If $l = t$,

$$
\begin{array}{l}
\sum_{m=-1}^{d-k} a_{tm}^* \sigma_{tm}(x) \\
= (-1)^t (d-k)\left[\sum_{m=0}^{d-k-2} 4(-1)^m \mathrm{Ramp}\left(\frac{m+1-s}{d-k}\right) - 3\,\mathrm{Ramp}\left(\frac{d-k-s}{d-k}\right) + \mathrm{Ramp}\left(\frac{d-k+1-s}{d-k}\right)\right] \\
= (-1)^t \left[\sum_{m=s}^{d-k-2} 4(-1)^m (m+1-s)_+ - 3(d-k-s)_+ + (d-k+1-s)_+\right] \\
= (-1)^t (-1)^s.
\end{array}
$$

Since we assumed $d, k$ even, $(-1)^s = \prod_{i=k+1}^{d} x_i$. Moreover, observe that $\chi_{[k]}(x) = \prod_{i=k+1}^{d} x_i \cdot \prod_{i=1}^{d} x_i$. Thus,

$$
\sum_{lm} a_{lm}^* \sigma_{lm}(x) = (-1)^t (-1)^s = \chi_{[k]}(x). \tag{54}
$$

Lemma A.8.
Let $f^*(x) = \sum_{l,m} a_{lm}^* \sigma_{lm}(x)$ and let $\hat{f}(x) = \sum_{l,m} a_{lm}^* \hat{\sigma}_{lm}(x)$, with $\sigma_{lm}, \hat{\sigma}_{lm}$ defined in Lemma A.6 and $a^*$ defined in Lemma A.7. If $B \geq (8\zeta^2 N^2)^{-1}\log\left(\frac{Nd+N}{\delta}\right)$ and $N \geq (d+1)(d-k+1)\log((d+1)(d-k+1)/\delta)$, with probability $1 - 3\delta$, for all $x$,

$$
L\left(\hat{f}, f^*, x\right) \leq 12N(d+1)^2(d-k+1)^2 \zeta\mu^{-k}. \tag{55}
$$

Proof.

$$
\left|f^*(x) - \hat{f}(x)\right| = \left|\sum_{l,m} a_{lm}^* \left(\sigma_{lm}(x) - \hat{\sigma}_{lm}(x)\right)\right| \tag{56}
$$

$$
\leq (d+1)(d-k+2) \|a^*\|_\infty \sup_{lm} |\sigma_{lm}(x) - \hat{\sigma}_{lm}(x)| \tag{57}
$$

$$
\leq 12N(d+1)^2(d-k+1)^2 \zeta\mu^{-k}. \tag{58}
$$

Thus,

$$
\begin{array}{l}
(1 - \hat{f}(x) f^*(x))_+ \leq |1 - \hat{f}(x) f^*(x)| \quad (59) \\
= \left|f^*(x)^2 - \hat{f}(x) f^*(x)\right| \quad (60) \\
= |f^*(x)| \cdot |f^*(x) - \hat{f}(x)| \leq 12N(d+1)^2(d-k+1)^2 \zeta\mu^{-k}, \quad (61)
\end{array}
$$

where we used $f^*(x)^2 = 1$, which holds since $f^* = \chi_{[k]}$ takes values in $\{\pm 1\}$. This implies the result.

# A.3. Second Step: Convergence

To conclude, we use an established result on the convergence of SGD on convex losses (see e.g. (Shalev-Shwartz et al., 2012; Shalev-Shwartz & Ben-David, 2014; Daniely & Malach, 2020; Malach & Shalev-Shwartz, 2020; Barak et al., 2022)).

Theorem A.9. Let $\mathcal{L}$ be a convex function and let $a^* \in \operatorname{argmin}_{\|a\|_2 \leq \mathcal{B}} \mathcal{L}(a)$, for some $\mathcal{B} > 0$. For all $t$, let $\alpha^{(t)}$ be such that $\mathbb{E}\left[\alpha^{(t)} \mid a^{(t)}\right] = -\nabla_{a^{(t)}} \mathcal{L}(a^{(t)})$ and assume $\|\alpha^{(t)}\|_2 \leq \rho$ for some $\rho > 0$.
If $a^{(0)} = 0$ and for all $t \in [T]$, $a^{(t+1)} = a^{(t)} + \gamma \alpha^{(t)}$, with $\gamma = \frac{\mathcal{B}}{\rho\sqrt{T}}$, then:

$$
\frac{1}{T} \sum_{t=1}^{T} \mathcal{L}\left(a^{(t)}\right) \leq \mathcal{L}\left(a^*\right) + \frac{\mathcal{B}\rho}{\sqrt{T}}. \tag{62}
$$

Let $\mathcal{L}(a) := \mathbb{E}_{x \sim \mathrm{Rad}(1/2)^{\otimes d}}\left[L((a, b^{(1)}, w^{(1)}), \chi_{[k]}, x)\right]$. Then, $\mathcal{L}$ is convex in $a$ and for all $t \in [T]$,

$$
\begin{array}{l}
\alpha^{(t)} = -\frac{1}{B} \sum_{s=1}^{B} \nabla_{a^{(t)}} L\left(\left(a^{(t)}, b^{(1)}, w^{(1)}\right), \chi_{[k]}, x_s\right) \quad (63) \\
= \frac{1}{B} \sum_{s=1}^{B} \chi_{[k]}(x_s)\, \sigma\left(w^{(1)} x_s + b^{(1)}\right) 1\left(\chi_{[k]}(x_s)\,\mathrm{NN}(x_s; \theta^{(t)}) < 1\right). \quad (64)
\end{array}
$$

Thus, recalling $\sigma = \mathrm{Ramp}$, we have $\|\alpha^{(t)}\|_2 \leq \sqrt{N}$. Let $a^*$ be as in Lemma A.7. Clearly, $\|a^*\|_2 \leq 4(d-k+1)^{3/2}(d+1)^{1/2}$. Moreover, $a^{(1)} = 0$. Thus, we can apply Theorem A.9 with $\mathcal{B} = 4(d-k+1)^{3/2}(d+1)^{1/2}$, $\rho = \sqrt{N}$ and obtain that if

1. $N \geq (d+1)(d-k+1)\log((d+1)(d-k+1)/\delta)$;
2. $\zeta \leq \frac{\epsilon\mu^k}{24(d+1)^2(d-k+1)^2 N}$;
3. $B \geq (8\zeta^2 N^2)^{-1}\log\left(\frac{Nd+N}{\delta}\right)$;
4. $T \geq \frac{64}{\epsilon^2}(d-k+1)^3(d+1)N$,

then, with probability $1 - 3\delta$ over the initialization,

$$
\mathbb{E}_{x \sim \operatorname{Rad}(1/2)^{\otimes d}}\left[\min_{t \in [T]} L\left(\theta^{(t)}, \chi_{[k]}, x\right)\right] \leq \frac{\epsilon}{2} + \frac{\epsilon}{2} = \epsilon. \tag{65}
$$

Remark A.10. We assume $k \leq d/2$ to avoid an exponential dependence of $\zeta$ (and consequently of the batch size and of the computational complexity) on $d$.
Indeed, if $k \leq d/2$, then

$$
\mu^k = \left(1 - \frac{1}{2(d-k)}\right)^{k/2} \geq \left(1 - \frac{1}{d}\right)^{d/4} \sim e^{-1/4}. \tag{66}
$$

# B. Proof of Theorem 3.4

Theorem B.1 (Theorem 3.4, restatement). Let $k, d$ be integers such that $d \geq k$ and $k$ is even. Let $\mathrm{NN}(x; \theta) = \sum_{i=1}^{N} a_i \sigma(w_i x + b_i)$ be a 2-layer fully connected network with activation $\sigma(y) := \mathrm{ReLU}(y)$ and $N \geq (k+1)\log\left(\frac{k+1}{\delta}\right)$. Consider training $\mathrm{NN}(x; \theta)$ with SGD on the covariance loss with batch size $B \geq (2\zeta^2)^{-1}\log\left(\frac{dN}{\delta}\right)$, with $\zeta \leq \frac{\epsilon(\mu^{k-1} - \mu^{k+1})}{64k^2 N} \cdot \left(1 + \frac{d-k}{k}\right)^{-1}$, for some $\mu \in (0,1)$. Then, there exist an initialization, a learning rate schedule, and a 2-CL strategy such that after $T \geq \frac{64k^3 N}{\epsilon^2}$ iterations, with probability $1 - 3\delta$ SGD outputs a network with generalization error at most $\epsilon$.

# B.1. Proof Setup

As before, we consider a 2-layer neural network, defined as $\mathrm{NN}(x; \theta) = \sum_{i=1}^{N} a_i \sigma(w_i x + b_i)$, where $N$ is the number of hidden units, $\theta = (a, b, w)$ and $\sigma := \mathrm{ReLU}$. Our proof scheme is similar to that of the previous section. Again, we assume without loss of generality that the labels are generated by $\chi_{[k]}(x) := \prod_{i=1}^{k} x_i$, and we assume $k$ to be even. We train our network with SGD on the covariance loss, defined in Def. 3.2. We use the same updates as in (10) with:

$$
x_s^{0} \stackrel{iid}{\sim} \operatorname{Rad}(p)^{\otimes d} \quad \forall s \in [B], \tag{67}
$$

$$
x_s^{t} \stackrel{iid}{\sim} \operatorname{Rad}(1/2)^{\otimes d} \quad \forall t \geq 1, s \in [B], \tag{68}
$$

for some $p \in (1/2, 1)$. We denote $\mu := 2p - 1$.
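The biased first step works because, when every unit is active, the covariance of the label with an input coordinate separates the support from the rest: $\mathbb{E}[\chi_{[k]}(x) x_j] - \mu^k \mu$ equals $\mu^{k-1} - \mu^{k+1}$ for $j \in [k]$ and $0$ otherwise. A small numerical sketch of this separation (illustrative parameter values, not from the paper):

```python
import numpy as np

# Estimate E[chi_[k](x) x_j] - mu^k * mu for each coordinate j under Rad(p):
# the quantity is mu^{k-1} - mu^{k+1} on the support and 0 off it, which is
# how one biased gradient step reveals the support of the parity.
rng = np.random.default_rng(1)

d, k, p = 12, 4, 0.8
mu = 2 * p - 1

n = 400_000
x = np.where(rng.random((n, d)) < p, 1.0, -1.0)
chi_k = x[:, :k].prod(axis=1)

corr = (chi_k[:, None] * x).mean(axis=0) - mu**k * mu  # one value per coordinate
print(corr[:k].mean(), mu**(k - 1) - mu**(k + 1))      # on-support: both close
print(np.abs(corr[k:]).max())                          # off-support: near 0
```

With the unbiased choice $p = 1/2$ the separation vanishes ($\mu = 0$), which is why the curriculum starts from biased samples.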
We set the parameters to:

$$
\gamma_0 = 16N\left(\mu^{k-1} - \mu^{k+1}\right)^{-1} k^{-1}, \quad \gamma_t = 0 \quad \forall t \geq 1, \tag{69}
$$

$$
\xi_0 = 0, \quad \xi_t = \frac{\epsilon}{8N} \quad \forall t \geq 1, \tag{70}
$$

$$
\psi_0 = 0, \quad \psi_t = 0 \quad \forall t \geq 1, \tag{71}
$$

$$
c_0 = -1, \quad c_t = 0 \quad \forall t \geq 1, \tag{72}
$$

$$
\lambda_0 = 1, \quad \lambda_t = 1 \quad \forall t \geq 1, \tag{73}
$$

$$
d_0 = -1, \quad d_t = 0 \quad \forall t \geq 1, \tag{74}
$$

and we consider the following initialization scheme:

$$
w_{i,j}^{(0)} = 0 \quad \forall i \in [N], j \in [d];
$$

$$
a_i^{(0)} = \frac{1}{16N} \quad \forall i \in [N]; \tag{75}
$$

$$
b_i^{(0)} \sim \operatorname{Unif}\left\{\frac{2(m+1)}{k} : m \in \{0, 1, \dots, k\}\right\}.
$$

# B.2. First Step: Recovering the Support

Population gradient at initialization. At initialization, we have $|\mathrm{NN}(x; \theta^0)| < \frac{1}{4}$, thus

$$
\left|\operatorname{cov}\left(\chi_{[k]}, \theta^0, x, \operatorname{Rad}(p)^{\otimes d}\right)\right| < 1.
\tag{76}
$$

The initial gradients are therefore given by:

$$
\forall i, j \quad G_{w_{i,j}} = -a_i \mathbb{E}_{x \sim \operatorname{Rad}(p)^{\otimes d}}\left[\left(\prod_{l \in [k]} x_l - \mu^k\right) \cdot \left(x_j 1\left(\langle w_i, x \rangle + b_i > 0\right) - \mathbb{E}\, x_j 1\left(\langle w_i, x \rangle + b_i > 0\right)\right)\right] \tag{77}
$$

$$
\forall i \in [N] \quad G_{b_i} = -a_i \mathbb{E}_{x \sim \operatorname{Rad}(p)^{\otimes d}}\left[\left(\prod_{l \in [k]} x_l - \mu^k\right) \cdot \left(1\left(\langle w_i, x \rangle + b_i > 0\right) - \mathbb{E}\, 1\left(\langle w_i, x \rangle + b_i > 0\right)\right)\right] \tag{78}
$$

If we initialize $a, b, w$ according to (75), then

$$
\forall j \in [k], \quad G_{w_{i,j}} = -\frac{\mu^{k-1} - \mu^{k+1}}{16N}; \tag{79}
$$

$$
\forall j \notin [k], \quad G_{w_{i,j}} = 0; \tag{80}
$$

$$
G_{b_i} = 0. \tag{81}
$$

Effective gradient at initialization.

Lemma B.2. Let

$$
w_{i,j}^{(1)} = w_{i,j}^{(0)} - \gamma_0 \hat{G}_{w_{i,j}} \tag{82}
$$

$$
b_i^{(1)} = \lambda_0\left(b_i^{(0)} + \psi_0 \hat{G}_{b_i}\right) + d_0, \tag{83}
$$

where $\hat{G}_{w_{i,j}}, \hat{G}_{b_i}$ are the gradients estimated from the initial batch. Then, with probability $1 - 2\delta$, if $B \geq (2\zeta^2)^{-1}\log\left(\frac{dN}{\delta}\right)$:

i) For all $j \in [k], i \in [N]$, $|w_{i,j}^{(1)} - \frac{1}{k}| \leq \frac{16N\zeta}{k(\mu^{k-1} - \mu^{k+1})}$;
ii) For all $j \notin [k]$, $|w_{i,j}^{(1)}| \leq \frac{16N\zeta}{k(\mu^{k-1} - \mu^{k+1})}$;
iii) For all $i \in [N]$, $b_i^{(1)} = b_i^{(0)} - 1$.

Proof. By the same concentration argument as in Lemma A.3, if $B \geq (2\zeta^2)^{-1}\log\left(\frac{dN}{\delta}\right)$, then for all $j \in [d], i \in [N]$, $|\hat{G}_{w_{i,j}} - G_{w_{i,j}}| \leq \zeta$.
Thus,

i) For all $j \in [k], i \in [N]$, $|w_{i,j}^{(1)} - \frac{1}{k}| = \gamma_0 |\hat{G}_{w_{i,j}} - G_{w_{i,j}}| \leq \frac{16N\zeta}{k(\mu^{k-1} - \mu^{k+1})}$;
ii) For all $j \notin [k], i \in [N]$, $|w_{i,j}^{(1)}| = \gamma_0 |\hat{G}_{w_{i,j}} - G_{w_{i,j}}| \leq \frac{16N\zeta}{k(\mu^{k-1} - \mu^{k+1})}$;
iii) follows trivially, since $\psi_0 = 0$, $\lambda_0 = 1$, and $d_0 = -1$.

Lemma B.3. If $N \geq (k+1)\log\left(\frac{k+1}{\delta}\right)$, with probability $1 - \delta$, for all $i \in \{0, \dots, k\}$ there exists $l \in [N]$ such that $b_l^{(0)} = \frac{2(i+1)}{k}$.

Proof. For any fixed $i$, the probability that no such $l$ exists is

$$
\left(1 - \frac{1}{k+1}\right)^N \leq \exp\left(-\frac{N}{k+1}\right) \leq \frac{\delta}{k+1}. \tag{85}
$$

The result follows by a union bound.

Lemma B.4. Let $\sigma_i(x) := \mathrm{ReLU}\left(\frac{1}{k}\sum_{j=1}^{k} x_j + b_i\right)$, with $b_i = -1 + \frac{2(i+1)}{k}$. Then, with probability $1 - 3\delta$, for all $i \in \{0, \dots, k\}$ there exists $l \in [N]$ such that

$$
\left|\sigma_i(x) - \operatorname{ReLU}\left(\sum_{j=1}^{d} w_{l,j}^{(1)} x_j + b_l^{(1)}\right)\right| \leq \frac{16N\zeta}{\mu^{k-1} - \mu^{k+1}} \cdot \left(1 + \frac{d-k}{k}\right). \tag{86}
$$

Proof. By Lemma B.3 and Lemma B.2, with probability $1 - 3\delta$, for all $i$ there exists $l$ such that $b_l^{(1)} = -1 + \frac{2(i+1)}{k}$, and

$$
\begin{array}{l}
\left|\sigma_i(x) - \operatorname{ReLU}\left(\sum_{j=1}^{d} w_{l,j}^{(1)} x_j + b_l^{(1)}\right)\right| \leq \left|\sum_{j=1}^{k}\left(\frac{1}{k} - w_{l,j}^{(1)}\right) x_j\right| + \left|\sum_{j=k+1}^{d} w_{l,j}^{(1)} x_j\right| \quad (87) \\
\leq \frac{16N\zeta}{\mu^{k-1} - \mu^{k+1}} + \frac{16N\zeta(d-k)}{(\mu^{k-1} - \mu^{k+1})k} \quad (88) \\
= \frac{16N\zeta}{\mu^{k-1} - \mu^{k+1}} \cdot \left(1 + \frac{d-k}{k}\right).
\quad (89)
\end{array}
$$

Lemma B.5. There exists $a^*$ with $\|a^*\|_\infty \leq 2k$ such that

$$
\sum_{i=0}^{k} a_i^* \sigma_i(x) = \chi_{[k]}(x). \tag{90}
$$

Proof. We assume $k$ to be even. Let $\sum_{j=1}^{k} x_j = k - 2t$, where $t := |\{i : x_i = -1, i \in [k]\}|$. Thus,

$$
\sigma_i(x) = \operatorname{ReLU}\left(\frac{2(i+1-t)}{k}\right). \tag{91}
$$

We choose

$$
a_i^* = (-1)^i \, 2k \quad \forall i \in \{0, 1, \dots, k-2\}, \tag{92}
$$

$$
a_i^* = (-1)^i \, \frac{3}{2}k \quad i = k-1, \tag{93}
$$

$$
a_i^* = (-1)^i \, \frac{1}{2}k \quad i = k. \tag{94}
$$

One can check that with these $a_i^*$ the statement holds.

Lemma B.6. Let $f^*(x) = \sum_{i=0}^{k} a_i^* \sigma_i(x)$ and let $\hat{f}(x) = \sum_{i=0}^{k} a_i^* \hat{\sigma}_i(x)$, with $\sigma_i(x)$ defined above and $\hat{\sigma}_i(x) := \mathrm{ReLU}(\sum_{j=1}^{d} w_{i,j}^{(1)} x_j + b_i^{(1)})$. Then, with probability $1 - 3\delta$, for all $x$,

$$
(1 - \hat{f}(x) f^*(x))_+ \leq \frac{32k^2\zeta N}{\mu^{k-1} - \mu^{k+1}} \cdot \left(1 + \frac{d-k}{k}\right), \tag{95}
$$

where $(z)_+ := \max\{0, z\}$.

Proof.

$$
\left|f^*(x) - \hat{f}(x)\right| = \left|\sum_i a_i^* \left(\sigma_i(x) - \hat{\sigma}_i(x)\right)\right| \tag{96}
$$

$$
\leq k \|a^*\|_\infty \sup_i |\sigma_i(x) - \hat{\sigma}_i(x)| \tag{97}
$$

$$
\leq \frac{32k^2\zeta N}{\mu^{k-1} - \mu^{k+1}} \cdot \left(1 + \frac{d-k}{k}\right).
\tag{98}
$$

Thus,

$$
\begin{array}{l}
(1 - \hat{f}(x) f^*(x))_+ \leq |1 - \hat{f}(x) f^*(x)| \quad (99) \\
= \left|f^*(x)^2 - \hat{f}(x) f^*(x)\right| \quad (100) \\
= |f^*(x)| \cdot |f^*(x) - \hat{f}(x)| \leq \frac{32k^2\zeta N}{\mu^{k-1} - \mu^{k+1}} \cdot \left(1 + \frac{d-k}{k}\right). \quad (101)
\end{array}
$$

# B.3. Second Step: Convergence

We apply Theorem A.9 with $\mathcal{L}(a) := \mathbb{E}_{x \sim \mathrm{Rad}(1/2)^{\otimes d}}\left[L_{\mathrm{cov}}((a, b^{(1)}, w^{(1)}), \chi_{[k]}, x)\right]$, $\rho = 2\sqrt{N}$, $\mathcal{B} = 2k\sqrt{k}$. We get that if

1. $N \geq (k+1)\log\left(\frac{k+1}{\delta}\right)$;
2. $\zeta \leq \frac{\epsilon(\mu^{k-1} - \mu^{k+1})}{64k^2 N} \cdot \left(1 + \frac{d-k}{k}\right)^{-1}$;
3. $B \geq (2\zeta^2)^{-1}\log\left(\frac{dN}{\delta}\right)$;
4. $T \geq \frac{64k^3 N}{\epsilon^2}$,

then with probability $1 - 3\delta$ over the initialization,

$$
\mathbb{E}_{x \sim \operatorname{Rad}(1/2)^{\otimes d}}\left[\min_{t \in [T]} L_{\mathrm{cov}}\left(\chi_{[k]}, \theta^t, x, \operatorname{Rad}(1/2)^{\otimes d}\right)\right] \leq \epsilon. \tag{102}
$$

Remark B.7. We remark that if $\mu = \Theta_{d,k}(1)$, then $\zeta$ decreases exponentially fast in $k$, and as a consequence the batch size and the computational cost grow exponentially in $k$. If however we choose $\mu = 1 - 1/k$, then we get $\zeta = 1/\mathrm{poly}(k)$ and, as a consequence, the batch size and the computational cost grow polynomially in $k$.

# C. Proof of Theorem 4.2

Let us consider $r = 2$; the case of general $r$ follows easily. Let us state the following lemma.

Lemma C.1. Let $x \sim \operatorname{Rad}(p)^{\otimes d}$ and let $H(x) := \sum_{i=1}^{d} 1(x_i = 1)$ be the Hamming weight of $x$.
Assume $\epsilon < p < \epsilon'$ for some $\epsilon, \epsilon' \in [0, 1/2]$. Then,

$$
\mathbb{P}_{x \sim \operatorname{Rad}(p)^{\otimes d}}\left(H(x) \geq \epsilon' d\right) \leq 2\exp\left(-\left(\epsilon' - p\right)^2 d\right); \tag{103}
$$

$$
\mathbb{P}_{x \sim \operatorname{Rad}(p)^{\otimes d}}\left(H(x) \leq \epsilon d\right) \leq 2\exp\left(-(p - \epsilon)^2 d\right). \tag{104}
$$

Proof of Lemma C.1. We apply Hoeffding's inequality to $H(x)$, a sum of $d$ i.i.d. Bernoulli random variables with $\mathbb{E}[H(x)] = pd$:

$$
\mathbb{P}(|H(x) - pd| \geq td) \leq 2\exp\left(-2t^2 d\right). \tag{105}
$$

Take $\epsilon$ such that $|p_1 - \frac{1}{2}| > \epsilon$, and consider the following algorithm:

1. Choose a set $V \subseteq [d]$ uniformly at random among all subsets of $[d]$ of cardinality $k_S$;
2. Take a fully connected neural network $\mathrm{NN}(x; \psi)$ with the same architecture as $\mathrm{NN}(x; \theta)$ and with initialization $\psi^0 = \theta^0$;
3. Train $\mathrm{NN}(x; \psi)$ on data $(x, \chi_V(x))$, with $x \sim \mathrm{Rad}(p_1)^{\otimes d}$;
4. Train until convergence the pre-trained network $\mathrm{NN}(x; \psi)$ with initialization $\psi^{T_1}$ on data $(x, \chi_T(x))$, with $x \sim \operatorname{Rad}(1/2)^{\otimes d}$.

The result holds by the following two lemmas.

Lemma C.2. If $V = S$, then $\mathrm{TV}(\theta^T; \psi^T) \leq \frac{AT\sqrt{|\theta|}}{\tau}\exp(-d\delta^2)$, where $\delta = \min\{|\epsilon - p_1|, |1/2 - \epsilon|\}$ and TV denotes the total variation distance between the laws of $\theta^T$ and $\psi^T$.

Proof. Clearly, $\mathrm{TV}(\theta^0; \psi^0) = 0$.
Then, using the subadditivity of TV,

$$
\begin{array}{l}
\operatorname{TV}\left(\theta^T; \psi^T\right) \leq \sum_{t=1}^{T} \operatorname{TV}\left(\theta^t; \psi^t \mid \{Z^i\}_{i \leq t-2}\right) \quad (106) \\
= \sum_{t=1}^{T} \mathrm{TV}\left(\gamma\left(g_{\theta^{t-1}} + Z^{t-1}\right); \gamma\left(g_{\psi^{t-1}} + Z^{t-1}\right) \mid \{Z^i\}_{i \leq t-1}\right), \quad (107)
\end{array}
$$

where $g_{\theta^{t-1}}, g_{\psi^{t-1}}$ denote the population gradients at $\theta^{t-1}$ and $\psi^{t-1}$, respectively. Then, recalling that the $Z^{t-1}$ are Gaussian, we get

$$
\operatorname{TV}\left(\theta^T; \psi^T\right) \stackrel{(a)}{\leq} \sum_{t=1}^{T} \frac{1}{2\tau\gamma}\left\|\gamma g_{\theta^{t-1}} - \gamma g_{\psi^{t-1}}\right\|_2 \tag{108}
$$

$$
\stackrel{(b)}{\leq} \sum_{t=1}^{T_{r-1}} \frac{1}{2\tau\gamma} \cdot 2\sqrt{|\theta|}\, A\gamma \cdot \mathbb{P}(H(x) \geq \epsilon d) + \sum_{t=T_{r-1}}^{T} \frac{1}{2\tau\gamma} \cdot 2\sqrt{|\theta|}\, A\gamma \cdot \mathbb{P}(H(x) \leq \epsilon d) \tag{109}
$$

$$
\stackrel{(c)}{\leq} \frac{AT\sqrt{|\theta|}}{\tau}\exp(-d\delta^2). \tag{110}
$$

In $(a)$ we applied the formula for the TV distance between Gaussian variables with the same variance. In $(b)$ we used that each gradient entry is in $[-A, A]$, and that during the first part of training the two gradients agree on all $x$ with $H(x) < \epsilon d$, and similarly during the second part of training on all $x$ with $H(x) > \epsilon d$. In $(c)$ we applied Lemma C.1.

We apply Theorem 3 from (Abbe & Sandon, 2020), which we restate here for completeness.

Theorem C.3 (Theorem 3 in (Abbe & Sandon, 2020)). Let $\mathcal{P}_k$ be the set of $k$-parities over $d$ bits.
Noisy-GD on any neural network of size $|\theta|$ and any initialization, after $T$ steps of training under the uniform distribution, outputs a network such that
+
+$$
+\frac {1}{| \mathcal {P} _ {k} |} \sum_ {f \in \mathcal {P} _ {k}} \left| \mathbb {E} _ {x \sim \operatorname {R a d} (1 / 2) ^ {\otimes d}} \left[ \operatorname {N N} \left(x; \theta^ {T}\right) \cdot f (x) \right] \right| \leq \frac {T \sqrt {| \theta |} A}{\tau} \cdot d ^ {- k / 2}. \tag {111}
+$$
+
+In our case, this implies:
+
+$$
+\mathbb {E} _ {V} \left| \mathbb {E} _ {x} \left[ \mathrm {N N} (x; \psi^ {T}) \cdot G _ {S, T, \epsilon} (x) \right] \right| \leq \frac {\left(T - T _ {r - 1}\right) \sqrt {| \theta |} A}{\tau d ^ {k _ {T} / 2}}. \tag {112}
+$$
+
+To conclude our proof, note that:
+
+$$
+\mathbb {E} _ {V} \left| \mathbb {E} _ {x} \left[ \mathrm {N N} (x, \psi^ {T}) \cdot G _ {S, T, \epsilon} (x) \right] \right| = \mathbb {E} _ {V} \left[ \left| \mathbb {E} _ {x} \left[ \mathrm {N N} (x, \psi^ {T}) \cdot G _ {S, T, \epsilon} (x) \right] \right| \mid | V \cap T | = 0 \right] \mathbb {P} (| V \cap T | = 0) \tag {113}
+$$
+
+$$
++ \mathbb {E} _ {V} \left[ \left| \mathbb {E} _ {x} \left[ \mathrm {N N} (x, \psi^ {T}) \cdot G _ {S, T, \epsilon} (x) \right] \right| \mid | V \cap T | > 0 \right] \mathbb {P} (| V \cap T | > 0). \tag {114}
+$$
+
+One can check that
+
+$$
+\mathbb {P} (| V \cap T | = 0) = 1 - \frac {2 k _ {S} k _ {T}}{d} + O \left(d ^ {- 2}\right). \tag {115}
+$$
+
+Moreover, by symmetry, for any $V$ such that $|V \cap T| = 0$ , the algorithm achieves the same correlation (to see this, one finds an appropriate permutation of the input neurons and uses the invariance of GD on fully connected networks with permutation-invariant initialization).
Thus,
+
+$$
+\mathbb {E} _ {V} \left[ \left| \mathbb {E} _ {x} \left[ \mathrm {N N} (x, \psi^ {T}) \cdot G _ {S, T, \epsilon} (x) \right] \right| \mid | V \cap T | = 0 \right] = \mathbb {E} _ {V} \left[ \left| \mathbb {E} _ {x} \left[ \mathrm {N N} (x, \psi^ {T}) \cdot G _ {S, T, \epsilon} (x) \right] \right| \mid V = S \right]. \tag {116}
+$$
+
+By Lemma C.2,
+
+$$
+\left| \mathbb {E} _ {x} \left[ \mathrm {N N} \left(x; \theta^ {T}\right) \cdot G _ {S, T, \epsilon} (x) \right] \right| \leq \mathbb {E} _ {V} \left[ \left| \mathbb {E} _ {x} \left[ \mathrm {N N} \left(x, \psi^ {T}\right) \cdot G _ {S, T, \epsilon} (x) \right] \right| \mid V = S \right] + \frac {A T \sqrt {| \theta |}}{\tau} \exp (- d \delta^ {2}) \tag {117}
+$$
+
+$$
+\leq \frac {\left(T - T _ {r - 1}\right) \sqrt {| \theta |} A}{\tau d ^ {k _ {T} / 2}} + \frac {A T \sqrt {| \theta |}}{\tau} \exp (- d \delta^ {2}) + \frac {2 k _ {S} k _ {T}}{d} + O (d ^ {- 2}). \tag {118}
+$$
+
+The argument for general $r$ holds by taking $\epsilon$ such that $|p_l - \frac{1}{2}| > \epsilon$ for all $l \in [r - 1]$ and by replacing step 3 of the algorithm above with the following:
+
+3. Train $\mathrm{NN}(x;\psi)$ on data $(x,\chi_V(x))$ using an $(r - 1)$ -CL $((T_1,\dots,T_{r - 1}),(p_1,\dots,p_{r - 1}))$ strategy.
+
+# D. Proof of Corollary 4.5
+
+We use the same proof strategy as in Theorem 4.2: specifically, we use the same algorithm for training a network $\mathrm{NN}(x;\psi)$ with the same architecture as $\mathrm{NN}(x;\theta)$ , so that Lemma C.2 still holds. We import Theorem 3 from (Abbe & Sandon, 2020) in the following form.
+
+Theorem D.1 (Theorem 3 in (Abbe & Sandon, 2020)). Let $\mathcal{F}$ be the set of $k$ -parities over the set $\{d/2 + 1, \ldots, d\}$ .
Noisy-GD on any neural network $\mathrm{NN}(x; w)$ of size $|w|$ and any initialization, after $T$ steps of training on samples drawn from the uniform distribution, outputs a network such that
+
+$$
+\frac {1}{| \mathcal {F} |} \sum_ {f \in \mathcal {F}} \left| \mathbb {E} _ {x \sim \operatorname {R a d} (1 / 2) ^ {\otimes d}} \left[ \mathrm {N N} (x; w ^ {(T)}) \cdot f (x) \right] \right| \leq \frac {T \sqrt {| w |} A}{\tau} \cdot \binom {d / 2} {k} ^ {- 1 / 2}. \tag {119}
+$$
+
+As before, Theorem D.1 and Lemma C.1 imply:
+
+$$
+\mathbb {E} _ {V} \left| \mathbb {E} _ {x \sim \operatorname {R a d} (1 / 2) ^ {\otimes d}} \left[ \mathrm {N N} (x; \psi^ {(T)}) \cdot G _ {S, V, \epsilon} (x) \right] \right| \leq \frac {(T - T _ {r - 1}) \sqrt {| \theta |} A}{\tau} \cdot \binom {d / 2} {k _ {V}} ^ {- 1 / 2} + \exp (- d \delta^ {2}), \tag {120}
+$$
+
+where by $\mathbb{E}_V$ we denote the expectation over a set $V$ sampled uniformly at random from all subsets of $\{d / 2 + 1,\dots,d\}$ of cardinality $k_{V}$ . Since both the initialization and noisy-GD on fully connected networks are invariant to permutations of the input neurons, the algorithm achieves the same correlation for any $V\subseteq \{d / 2 + 1,\dots,d\}$ . Thus, applying Lemma C.2:
+
+$$
+\left| \mathbb {E} _ {x} \left[ \mathrm {N N} \left(x; \theta^ {(T)}\right) \cdot G _ {S, V, \epsilon} (x) \right] \right| \leq \frac {\left(T - T _ {r - 1}\right) \sqrt {| \theta |} A}{\tau} \cdot \binom {d / 2} {k _ {V}} ^ {- 1 / 2} + \frac {2 A T \sqrt {| \theta |}}{\tau} \exp (- d \delta^ {2}).
\tag {121} +$$ \ No newline at end of file diff --git a/amathematicalmodelforcurriculumlearningforparities/images.zip b/amathematicalmodelforcurriculumlearningforparities/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..b010a5ab9ab414c4e2a4d67e2ec3bc837db136ed --- /dev/null +++ b/amathematicalmodelforcurriculumlearningforparities/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:ba9fd40be60e9d8713169da3fa5e83e3f59e99e057a2dd20bcf36303c2aa4d7a +size 1039555 diff --git a/amathematicalmodelforcurriculumlearningforparities/layout.json b/amathematicalmodelforcurriculumlearningforparities/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..a45bcd9ae2701a228269f3a567401f227f65067a --- /dev/null +++ b/amathematicalmodelforcurriculumlearningforparities/layout.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:bd765e2e26d58a2da18ca3a7bad64f8b57aa3091615e1ddc4eb9c856a5279357 +size 1276041 diff --git a/amodelbasedmethodforminimizingcvarandbeyond/8e6efb25-800f-4b84-acdb-047aa9807330_content_list.json b/amodelbasedmethodforminimizingcvarandbeyond/8e6efb25-800f-4b84-acdb-047aa9807330_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..8bdbb3d3886fe0bf718146c90c9a1492f47d220f --- /dev/null +++ b/amodelbasedmethodforminimizingcvarandbeyond/8e6efb25-800f-4b84-acdb-047aa9807330_content_list.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:236409d4110bae6b447405f8d57f19d5f0fa8e59200e86ff9d64bd963c10863b +size 169568 diff --git a/amodelbasedmethodforminimizingcvarandbeyond/8e6efb25-800f-4b84-acdb-047aa9807330_model.json b/amodelbasedmethodforminimizingcvarandbeyond/8e6efb25-800f-4b84-acdb-047aa9807330_model.json new file mode 100644 index 0000000000000000000000000000000000000000..f758ca05a4c3ea4de9b8d5f817916480fc1d739a --- /dev/null +++ 
b/amodelbasedmethodforminimizingcvarandbeyond/8e6efb25-800f-4b84-acdb-047aa9807330_model.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:461766b7e7753c751d8f2982439fa744cab198cbcb5121c2924b74cd1283da55 +size 193677 diff --git a/amodelbasedmethodforminimizingcvarandbeyond/8e6efb25-800f-4b84-acdb-047aa9807330_origin.pdf b/amodelbasedmethodforminimizingcvarandbeyond/8e6efb25-800f-4b84-acdb-047aa9807330_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..6ae591e4ec384ec80cd2686ab1c8a85d12cfde36 --- /dev/null +++ b/amodelbasedmethodforminimizingcvarandbeyond/8e6efb25-800f-4b84-acdb-047aa9807330_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:93708ed0a9ae78a90ae4dd77efe42188fefcf3c0313bebf4b07297c3813138ed +size 1675201 diff --git a/amodelbasedmethodforminimizingcvarandbeyond/full.md b/amodelbasedmethodforminimizingcvarandbeyond/full.md new file mode 100644 index 0000000000000000000000000000000000000000..db157ff98459c80cd81cca7e6aa1d87d68817dd2 --- /dev/null +++ b/amodelbasedmethodforminimizingcvarandbeyond/full.md @@ -0,0 +1,1033 @@ +# A Model-Based Method for Minimizing CVaR and Beyond + +Si Yi Meng1,2 Robert M. Gower2 + +# Abstract + +We develop a variant of the stochastic prox-linear method for minimizing the Conditional Value-at-Risk (CVaR) objective. CVaR is a risk measure focused on minimizing worst-case performance, defined as the average of the top quantile of the losses. In machine learning, such a risk measure is useful to train more robust models. Although the stochastic subgradient method (SGM) is a natural choice for minimizing the CVaR objective, we show that our stochastic prox-linear $(\mathrm{SPL}+)$ algorithm can better exploit the structure of the objective, while still providing a convenient closed form update. Our $\mathrm{SPL}+$ method also adapts to the scaling of the loss function, which allows for easier tuning. 
We then specialize a general convergence theorem for $\mathrm{SPL}+$ to our setting, and show that it allows for a wider selection of step sizes compared to SGM. We support this theoretical finding experimentally.
+
+$^{1}$ Department of Computer Science, Cornell University, Ithaca, NY, USA $^{2}$ Center for Computational Mathematics, Flatiron Institute, New York, NY, USA. Correspondence to: Si Yi Meng .
+
+Proceedings of the $40^{th}$ International Conference on Machine Learning, Honolulu, Hawaii, USA. PMLR 202, 2023. Copyright 2023 by the author(s).
+
+# 1. Introduction
+
+The most common approach to fit a model parametrized by $\theta \in \mathbb{R}^d$ to data is to minimize the expected loss over the data distribution, that is
+
+$$
+\min _ {\theta \in \mathbb {R} ^ {d}} R _ {\operatorname {E R M}} (\theta) = \mathbb {E} _ {z \sim P} [ \ell (\theta ; z) ]. \tag {1}
+$$
+
+But in many cases, the expected loss may not be the suitable objective to minimize. When robustness or safety of the model is a concern, the emphasis should be on the extreme values of the loss distribution rather than its average. For instance, in distributionally robust optimization, the goal is to optimize the model for the worst-case distribution around some fixed distribution (Duchi & Namkoong, 2018). In extreme risk-averse settings, such as when safety is the top priority, it is desirable to minimize the maximum loss within a training set (Shalev-Shwartz & Wexler, 2016). These applications can all be formulated as minimizing the expectation of the losses that are above some cutoff value,
+
+$$
+\min _ {\theta \in \mathbb {R} ^ {d}} R _ {\mathrm {C V a R}} (\theta) = \mathbb {E} _ {z \sim P} \left[ \ell (\theta ; z) \mid \ell (\theta ; z) \geq \alpha_ {\beta} (\theta) \right], \tag {2}
+$$
+
+where $\alpha_{\beta}(\theta)$ is the upper $\beta$ -quantile of the losses.
For example, for $\beta = 0.9$ , the problem in (2) is to minimize the expectation of the worst $10\%$ of the losses.
+
+In this work, we propose a variant of the stochastic prox-linear (SPL) method pioneered by Burke & Ferris (1995); Lewis & Wright (2016); Duchi & Ruan (2018) for solving (2). The possibility of applying SPL to CVaR minimization was mentioned in Davis & Drusvyatskiy (2019), but not explored. We introduce a variant of SPL called $\mathrm{SPL}+$ that adapts to the scaling of the loss function, which in turn allows for a default parameter setting. We first derive a closed-form update for $\mathrm{SPL}+$ , and show why it is particularly well suited for minimizing CVaR. We give its convergence rates for convex and Lipschitz losses by adapting existing results from Davis & Drusvyatskiy (2019). Through several experiments comparing the stochastic prox-linear method to the stochastic subgradient method, we show that SPL and $\mathrm{SPL}+$ are more robust to the choice of step size. We conclude with a discussion of several future applications of CVaR minimization in machine learning.
+
+# 1.1. Background
+
+The CVaR objective was first introduced in finance as an alternative measure of risk, also known as the expected shortfall (Artzner et al., 1999; Embrechts et al., 1999). Many applications in finance can be formulated as CVaR minimization problems, such as portfolio optimization (Krokhmal et al., 2002; Mansini et al., 2007), insurance (Embrechts et al., 2013) and credit risk management (Andersson et al., 2001). The seminal work of Rockafellar & Uryasev (2000) proposed a variational formulation of the CVaR objective that is amenable to standard optimization methods.
This formulation has since inspired considerable research in applications spanning machine learning and adjacent fields, such as $\nu$ -SVM (Takeda & Sugiyama, 2008; Gotoh & Takeda, 2016), robust decision making and MDPs (Chow et al., 2015; Chow & Ghavamzadeh, 2014; Chow et al., 2017; Cardoso & Xu, 2019; Sani et al., 2012), influence maximization and submodular optimization (Maehara, 2015; Ohsaka & Yoshida, 2017; Wilder, 2018), fairness (Williamson & Menon, 2019), and federated learning (Laguel et al., 2021b).
+
+Though it finds many applications, the CVaR objective is typically difficult to minimize. It is nonsmooth even when the individual losses $\ell(\cdot; z)$ are continuously differentiable. Indeed, if $P$ does not admit a density — which is the case for all empirical distributions over training data — the variational objective is not everywhere differentiable. To address this, Laguel et al. (2021a) developed subdifferential calculus for a number of equivalent CVaR formulations and proposed minimizing a smoothed version of the dual objective. On the other hand, several works (Soma & Yoshida, 2020; Holland & Haress, 2021) apply the stochastic subgradient method directly to the variational formulation proposed by Rockafellar & Uryasev (2000), which is well-defined regardless of the distribution $P$ . However, as we elaborate in Section 3, this approach is oblivious to the special structure of the variational form of the CVaR objective.
+
+# 2. 
Problem setup
+
+Let $\ell (\theta ;z)$ be the loss associated with the model parameters $\theta \in \mathbb{R}^d$ and a measurable random variable $z(\omega)$ on some background probability space $(\Omega ,\mathcal{F},\mathbb{P})$ .
+
+When $z$ follows a distribution $P$ with density $p(z)$ , the cumulative distribution function of the loss for a fixed $\theta$ is given by $\mathbb{P}[\ell (\theta ;z)\leq \alpha ] = \int_{\ell (\theta ;z)\leq \alpha}p(z)dz$ , which we assume is everywhere continuous with respect to $\alpha$ . Let $\beta$ be a confidence level, for instance $\beta = 0.9$ . The Value-at-Risk (VaR) of the model is the lowest $\alpha$ such that with probability $\beta$ , the loss will not exceed $\alpha$ . Formally,
+
+$$
+\operatorname {V a R} _ {\beta} (\theta) := \min \left\{\alpha \in \mathbb {R}: \mathbb {P} [ \ell (\theta ; z) \leq \alpha ] \geq \beta \right\}. \tag {3}
+$$
+
+The Conditional Value-at-Risk (CVaR) is the expectation of the upper tail starting at $\mathrm{VaR}_{\beta}$ , illustrated in Figure 1:
+
+$$
+\operatorname {C V a R} _ {\beta} (\theta) := \mathbb {E} _ {z \sim P} [ \ell (\theta ; z) \mid \ell (\theta ; z) \geq \operatorname {V a R} _ {\beta} (\theta) ]. \tag {4}
+$$
+
+Clearly, the CVaR upper bounds the VaR for the same $\beta$ . Our goal is to minimize $\mathrm{CVaR}_{\beta}$ over $\theta \in \mathbb{R}^d$ , but directly minimizing (4) is not straightforward. Fortunately, Rockafellar & Uryasev (2000) introduced a variational formulation where the solution to
+
+$$
+\theta^ {*}, \alpha^ {*} \in \underset {\theta \in \mathbb {R} ^ {d}, \alpha \in \mathbb {R}} {\arg \min } F _ {\beta} (\theta , \alpha) \quad \text {where}, \tag {5}
+$$
+
+$$
+F _ {\beta} (\theta , \alpha) := \alpha + \frac {1}{1 - \beta} \mathbb {E} _ {z \sim P} [ \max \{\ell (\theta ; z) - \alpha , 0 \} ]
+$$
+
+is such that $\theta^{*}$ is the solution to (4), and we obtain $\alpha^{*} = \mathrm{VaR}_{\beta}(\theta^{*})$ as a byproduct.
+
+# 3. 
The Stochastic Subgradient Method
+
+![](images/b9f74cd14049cf97c230a2416b992670ed6d3e717816b4311a4194e9760d1e4d.jpg)
+
+![](images/5f11ce03fcb8160bb5e9aa48af575096749a2729fa809fe17e9a1dda42f79cc8.jpg)
+Figure 1: Expectation, VaR, and CVaR.
+
+A natural choice for minimizing (5) is the stochastic subgradient method (SGM). Letting $\partial f$ denote the convex subdifferential of $f$ , at each step $t$ we sample $z \sim P$ uniformly and compute a subgradient $g_{t}$ from the subdifferential
+
+$$
+\partial F _ {\beta} \left(\theta_ {t}, \alpha_ {t}; z\right) = \binom {\mathbf {0}} {1} + \frac {u _ {t}}{1 - \beta} \binom {\partial \ell \left(\theta_ {t}; z\right)} {- 1} \tag {6}
+$$
+
+where $u_{t} = \partial \max \{u,0\} |_{u = \ell (\theta_{t};z) - \alpha_{t}}$ . Given some step size sequence $\{\lambda_t\} >0$ , and denoting $x = (\theta ,\alpha)^{\top}$ , SGM then takes the step
+
+$$
+x _ {t + 1} = x _ {t} - \lambda_ {t} g _ {t}, \quad \text {where} \quad g _ {t} \in \partial F _ {\beta} \left(\theta_ {t}, \alpha_ {t}; z\right). \tag {7}
+$$
+
+Substituting the subgradient $g_{t}$ given in (6) into (7) gives
+
+$$
+\theta_ {t + 1} = \theta_ {t} - \frac {\lambda_ {t}}{1 - \beta} u _ {t} \partial \ell \left(\theta_ {t}; z\right), \tag {8}
+$$
+
+$$
+\alpha_ {t + 1} = \alpha_ {t} - \lambda_ {t} + \frac {\lambda_ {t}}{1 - \beta} u _ {t}. \tag {9}
+$$
+
+For reference, the complete SGM algorithm is given in Algorithm 1. SGM is very sensitive to the step size choice and may diverge if not carefully tuned. This issue can be explained from a modeling perspective (Davis & Drusvyatskiy, 2019). Indeed, SGM can be written as a model-based method where at each iteration $t$ , it uses the following linearization of the sampled $F_{\beta}(x;z)$ at the current point $x_{t}$ :
+
+$$
+m _ {t} ^ {\mathrm {S G M}} (x; z) := F _ {\beta} \left(x _ {t}; z\right) + \langle g _ {t}, x - x _ {t} \rangle . 
\tag {10}
+$$
+
+This provides an approximate, stochastic model of the objective $F_{\beta}(x)$ . The SGM update is then a proximal step on this model, that is
+
+$$
+x _ {t + 1} = \underset {x \in \mathbb {R} ^ {d + 1}} {\arg \min } m _ {t} (x; z) + \frac {1}{2 \lambda_ {t}} \| x - x _ {t} \| ^ {2} \tag {11}
+$$
+
+using $m_t = m_t^{\mathrm{SGM}}$ . The issue with $m_t = m_t^{\mathrm{SGM}}(x;z)$ is that it uses a linearization to approximate the $\max \{\cdot, 0\}$ function. This linearization can take negative values, which is a poor approximation of the non-negative $\max\{\cdot, 0\}$ operation. The main insight of the SPL method is to leverage the structure of $F_{\beta}(x)$ as a truncated function. This structure allows for a more accurate model that still has an easily computable proximal operator.
+
+Algorithm 1 SGM: Stochastic subgradient method for CVaR minimization
+1: initialize: $\theta_0\in \mathbb{R}^d$ , $\alpha_0\in \mathbb{R}$ , hyperparameter: $\lambda >0$
+2: for $t = 0,1,2,\ldots ,T$ do
+3: Sample data point $z\sim P$ , compute $\ell (\theta_t;z)$ and $v_{t}\in \partial \ell (\theta_{t};z)$
+4: $\lambda_t\gets \lambda /\sqrt{t + 1}$
+5: if $\alpha_{t}\geq \ell (\theta_{t};z)$ then $\triangleright \alpha_{t}$ too big
+6: $\theta_{t + 1}\gets \theta_t$
+7: $\alpha_{t + 1}\gets \alpha_t - \lambda_t$
+8: else $\triangleright \alpha_{t}$ too small
+9: $\theta_{t + 1}\gets \theta_t - \frac{\lambda_t}{1 - \beta} v_t$
+10: $\alpha_{t + 1}\leftarrow \alpha_t + \frac{\lambda_t}{1 - \beta}\beta$
+11: end if
+12: end for
+13: return $\bar{x}_T = \frac{1}{T + 1}\sum_{t = 1}^{T + 1}(\theta_t,\alpha_t)^{\intercal}$
+
+# 4. The SPL method for CVaR minimization
+
+# 4.1. A tighter model
+
+Here we introduce an alternative model for our objective that only linearizes inside the $\max\{\cdot, 0\}$ , which is a strictly more accurate model when the objective is convex (Asi & Duchi, 2019a). 
In particular, for some $v_t \in \partial \ell(\theta_t; z)$ and $\ell_t \coloneqq \ell(\theta_t; z)$ , we use
+
+$$
+m _ {t} ^ {\mathrm {S P L}} (x; z) = \alpha + \frac {\max \left\{\ell_ {t} + \langle v _ {t} , \theta - \theta_ {t} \rangle - \alpha , 0 \right\}}{1 - \beta}. \tag {12}
+$$
+
+The algorithm resulting from (11) using $m_{t} = m_{t}^{\mathrm{SPL}}$ is known as the stochastic prox-linear (SPL) method (Duchi & Ruan, 2018). Figure 2 illustrates that (12) better approximates the level sets of the loss function as compared to (10).
+
+# 4.2. Separate regularization parameters
+
+Now that we have determined a tighter model (12), it remains to select a default step size sequence $\lambda_{t}$ for the proximal step (11). But, as we will argue next, having the same default step size sequence for both $\alpha$ and $\theta$ could lead to inconsistencies due to the dependency on the scale of the loss function.
+
+To explain this dependency, let $\mathrm{units}(\ell)$ denote the units of our loss function $\ell (\theta_t;z)$ . For instance, our loss could be a cost measured in dollars. Since $\alpha$ approximates a quantile of the losses, it must also have the same units as the loss.
+
+![](images/d840a15671b1a1576fcff0e082a5288dcf93aebea20d5e6ceee0a04266cdfd21.jpg)
+Figure 2: Comparison of SGM and SPL models on the CVaR objective with a single loss $\ell(\theta) = \log(1 + \exp(\theta)) + \frac{0.01}{2}\theta^2$ . Filled contours are the level sets of the objective, while the dashed contour lines are the level sets of the respective model $m_t$ constructed at $(\theta_t, \alpha_t)$ . With the same step size, the SGM model results in an update that increases the objective, whereas the SPL model does not. Note that because the subgradient of the objective is 0 in $\theta$ , the SGM model is constant in $\theta$ .
+
+Consequently, our model in (12) also has the same units as the loss function.
A clash of units appears when we consider the regularization term in (11), that is, the term
+
+$$
+\frac {1}{2 \lambda_ {t}} \| x - x _ {t} \| ^ {2} = \frac {1}{2 \lambda_ {t}} \left(\| \theta - \theta_ {t} \| ^ {2} + (\alpha - \alpha_ {t}) ^ {2}\right).
+$$
+
+This regularization term must also have the same units as the loss so that the entire objective in (11) has consistent units. But since $\mathrm{units}(\alpha) = \mathrm{units}(\ell)$ , the term $\frac{1}{2\lambda_t} (\alpha -\alpha_t)^2$ can only have the same units as the loss if $\mathrm{units}(\lambda_t) = \mathrm{units}(\ell)$ . In direct contradiction, the term $\frac{1}{2\lambda_t}\left\| \theta -\theta_t\right\|^2$ can only have the same units as the loss if $\mathrm{units}(\lambda_t) = 1 / \mathrm{units}(\ell)$ , since $\theta$ parametrizes the objective and thus does not carry the units of the loss. There is no choice of $\lambda_{t}$ which would result in the objective of (11) having consistent units; consequently, there is no default, scale-invariant $\lambda_{t}$ that would work across different loss functions.
+
+One simple way to fix this clash of units is to disentangle $\lambda_{t}$ into two regularization parameters $\lambda_{\theta ,t},\lambda_{\alpha ,t} > 0$ and update the iterates according to
+
+$$
+\begin{array}{l} \theta_ {t + 1}, \alpha_ {t + 1} = \underset {\theta \in \mathbb {R} ^ {d}, \alpha \in \mathbb {R}} {\arg \min } m _ {t} ^ {\mathrm {S P L}} (x; z) + \frac {1}{2 \lambda_ {\theta , t}} \| \theta - \theta_ {t} \| ^ {2} \\ + \frac {1}{2 \lambda_ {\alpha , t}} \left(\alpha_ {t} - \alpha\right) ^ {2}. \tag {13} \\ \end{array}
+$$
+
+Now we can make the units match across (13) by choosing
+
+$$
+\operatorname {u n i t s} \left(\lambda_ {\alpha , t}\right) = \operatorname {u n i t s} (\ell) \quad \text {and} \quad \operatorname {u n i t s} \left(\lambda_ {\theta , t}\right) = \frac {1}{\operatorname {u n i t s} (\ell)}. 
\tag {14}
+$$
+
+As suggested by our theory in (26), if we had access to the average Lipschitz constant $L$ of the individual losses $\ell$ , then we should choose
+
+$$
+\lambda_ {\alpha , t} = \frac {\lambda \left| \alpha_ {t} - \alpha^ {*} \right|}{\sqrt {t}} \quad \text {and} \quad \lambda_ {\theta , t} = \frac {\lambda \left\| \theta_ {t} - \theta^ {*} \right\|}{L \sqrt {t}}, \tag {15}
+$$
+
+where $\lambda > 0$ is a numerical constant. Although this gives us consistency in the units, estimating $L$ can be difficult in practice. Thus, instead we approximate the scaling by using the initial loss $\ell_0 \coloneqq \mathbb{E}_z[\ell(\theta_0;z)]$ and choose
+
+$$
+\lambda_ {\alpha , t} = \frac {\lambda \ell_ {0}}{\sqrt {t}} \quad \text {and} \quad \lambda_ {\theta , t} = \frac {\lambda}{\ell_ {0} \sqrt {t}}, \tag {16}
+$$
+
+while setting $\lambda$ using a grid search. We will use (16) as our default setting for $\lambda_{\theta, t}$ and $\lambda_{\alpha, t}$ . Importantly, although we have separate regularization terms, there is still only one hyperparameter $\lambda$ to be set.
+
+# 4.3. Closed form update
+
+Lemma 1 (Closed form updates of $\mathrm{SPL}+$ ). The closed form solution to (13) is given by the updates
+
+$$
+\theta_ {t + 1} = \theta_ {t} - \lambda_ {\theta , t} \min \left\{\frac {1}{1 - \beta}, \gamma_ {t} \right\} \nabla \ell \left(\theta_ {t}; z\right), \tag {17}
+$$
+
+$$
+\alpha_ {t + 1} = \alpha_ {t} - \lambda_ {\alpha , t} + \lambda_ {\alpha , t} \min \left\{\frac {1}{1 - \beta}, \gamma_ {t} \right\}, \tag {18}
+$$
+
+$$
+\text {where} \quad \gamma_ {t} = \frac {\max \left\{\ell \left(\theta_ {t} ; z\right) - \alpha_ {t} + \lambda_ {\alpha , t} , 0 \right\}}{\lambda_ {\theta , t} \| \nabla \ell \left(\theta_ {t} ; z\right) \| ^ {2} + \lambda_ {\alpha , t}}. \tag {19}
+$$
+
+We first give a sketch of how the updates are derived.
+
+Proof. For a one-step update, we can drop the subscript $t$ without loss of generality. 
The key step is to rewrite (13) in the form of a proximal step on a truncated model, namely,
+
+$$
+x _ {t + 1} = \underset {x \in \mathbb {R} ^ {d + 1}} {\arg \min } \max \left\{c + \langle a, x - x _ {t} \rangle , 0 \right\} + \frac {1}{2 \lambda} \| x - x _ {t} \| ^ {2}
+$$
+
+where $x = (\theta, \hat{\alpha})^{\top}$ is the concatenation of $\theta$ and a scaled version of $\alpha$ . The solution to this has a nice form given in Lemma 2 in the appendix,
+
+$$
+x _ {t + 1} = x _ {t} - \underbrace {\min \left\{\lambda , \frac {\max \{c , 0 \}}{\| a \| ^ {2}} \right\}} _ {=: \eta} a.
+$$
+
+One can show that by redefining variables as
+
+$$
+\hat {\alpha} = \sqrt {\frac {\lambda_ {\theta}}{\lambda_ {\alpha}}} \alpha \quad \text {and} \quad \hat {\alpha} _ {t} = \sqrt {\frac {\lambda_ {\theta}}{\lambda_ {\alpha}}} \alpha_ {t} - \sqrt {\lambda_ {\theta} \lambda_ {\alpha}},
+$$
+
+we can absorb the leading $\alpha$ in the model (12) into its regularization term, giving us
+
+$$
+\alpha + \frac {1}{2 \lambda_ {\alpha}} (\alpha - \alpha_ {t}) ^ {2} = \frac {1}{2 \lambda_ {\theta}} (\hat {\alpha} - \hat {\alpha} _ {t}) ^ {2} + \text {Const.}
+$$
+
+After some simple manipulation on the linearization term of (13), we get that
+
+$$
+c = \frac {1}{1 - \beta} \left(\ell \left(\theta_ {t}; z\right) - \sqrt {\frac {\lambda_ {\alpha}}{\lambda_ {\theta}}} \hat {\alpha} _ {t}\right), \quad a = \frac {1}{1 - \beta} \left( \begin{array}{c} \nabla \ell \left(\theta_ {t}; z\right) \\ - \sqrt {\frac {\lambda_ {\alpha}}{\lambda_ {\theta}}} \end{array} \right).
+$$
+
+Plugging $a, c$ into the update of the truncated model above,
+
+$$
+\eta = \min \left\{\lambda_ {\theta}, \frac {\max \left\{\ell (\theta_ {t} ; z) - \sqrt {\frac {\lambda_ {\alpha}}{\lambda_ {\theta}}} \hat {\alpha} _ {t} , 0 \right\}}{\frac {1}{(1 - \beta)} (\| \nabla \ell (\theta_ {t} ; z) \| ^ {2} + \frac {\lambda_ {\alpha}}{\lambda_ {\theta}})} \right\}. 
+$$
+
+Substituting out $\hat{\alpha}_t$ for $\alpha_t$ and multiplying by $a$ gives us the desired $\theta_{t + 1}$ and $\alpha_{t + 1}$ .
+
+The detailed proof can be found in Appendix A, with a breakdown of the updates in Algorithm 2. As an alternative to our technique, one can also derive these updates by enumerating the KKT conditions after formulating (13) as a constrained minimization problem with an additional slack variable.
+
+Examining the update in Lemma 1, we can see that the cost of computing each iteration of $\mathrm{SPL}+$ is of the same order as that of an iteration of SGM. Finally, if we set the regularization parameters according to the guide in (14), we can see by examining the units of $\mathrm{SPL}+$ that $\gamma_{t}$ in (19) is unitless. As a result, the units are consistent across the updates of both $\theta$ in (17) and $\alpha$ in (18). Next, we discuss two applications of $\mathrm{SPL}+$ which correspond to two extreme settings of the CVaR objective.
+
+Algorithm 2 SPL+: Stochastic prox-linear method for CVaR minimization with separate regularization
+1: initialize: $\theta_0\in \mathbb{R}^d$ , $\alpha_0\in \mathbb{R}$ , hyperparameter: $\lambda >0$
+2: for $t = 0,1,2,\ldots ,T$ do
+3: Sample data point $z\sim P$
+4: Compute $\ell (\theta_t;z)$ and $v_{t}\in \partial \ell (\theta_{t};z)$
+5: $\lambda_{\theta ,t}\gets \lambda /(\ell_0\sqrt{t + 1})$
+6: $\lambda_{\alpha ,t}\gets \lambda \ell_0 / \sqrt{t + 1}$
+7: if $\alpha_{t} > \ell (\theta_{t};z) + \lambda_{\alpha ,t}$ then $\triangleright \alpha_{t}$ too big
+8: $\theta_{t + 1}\gets \theta_t$
+9: $\alpha_{t + 1}\gets \alpha_t - \lambda_{\alpha ,t}$
+10: else if $\alpha_{t} < \ell (\theta_{t};z) - \frac{\lambda_{\theta,t}}{1 - \beta}\| v_{t}\|^{2} - \frac{\lambda_{\alpha,t}\beta}{1 - \beta}$ then $\triangleright$ $\alpha_{t}$ too small
+11: $\theta_{t + 1}\gets \theta_t - \frac{\lambda_{\theta,t}}{1 - \beta} v_t$
+12: $\alpha_{t + 1}\gets \alpha_t + \frac{\lambda_{\alpha,t}}{1 - \beta}\beta$ 
+13: else $\triangleright \alpha_{t}$ in middle range
+14: $\nu \leftarrow \frac{\ell(\theta_t;z) + \lambda_{\alpha,t} - \alpha_t}{\lambda_{\theta,t}\|v_t\|^2 + \lambda_{\alpha,t}}$
+15: $\theta_{t + 1}\gets \theta_t - \lambda_{\theta ,t}\nu v_t$
+16: $\alpha_{t + 1}\gets \alpha_t - \lambda_{\alpha ,t} + \lambda_{\alpha ,t}\nu$
+17: end if
+18: end for
+19: return $\bar{x}_T = \frac{1}{T + 1}\sum_{t = 1}^{T + 1}(\theta_t,\alpha_t)^{\intercal}$
+
+# 4.4. Solving the max loss problem
+
+The $\mathrm{SPL + }$ method can be seen as an extension of a recent class of adaptive methods (Gower et al., 2022) for minimizing the max loss, as we detail next. If $P$ is the empirical distribution over $n$ training examples, setting $\beta = (n - 1) / n$ turns the CVaR minimization problem into the max loss minimization problem
+
+$$
+\min _ {\theta \in \mathbb {R} ^ {d}} f (\theta) = \max _ {i = 1, \dots , n} \ell \left(\theta ; z _ {i}\right). \tag {20}
+$$
+
+Indeed, if $\beta = (n - 1) / n$ then the Value-at-Risk (3) would have to be the max loss, that is, $\alpha = \max_{i = 1,\dots,n}\ell (\theta ;z_i)$ . Plugging this into (5) we have that the second term in $F_{\beta}(\theta ,\alpha)$ is zero, leaving only $F_{\beta}(\theta ,\alpha) = \alpha = \max_{i = 1,\dots,n}\ell (\theta ;z_i)$ .
+
+The max loss problem is an interesting problem in its own right (Shalev-Shwartz & Wexler, 2016). Recently, Gower et al. (2022) proposed the Polyak with slack methods for solving (20). Our $\mathrm{SPL+}$ improves upon the Polyak with slack methods in two ways: first, $\mathrm{SPL+}$ can be applied to minimizing CVaR for any $\beta$ , and not just the max loss problem; second, $\mathrm{SPL+}$ can enjoy a default parameter setting due to the two regularization parameters and the consideration around units in (14). 
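The closed form update of Lemma 1 is as cheap as an SGM step. Below is a minimal NumPy sketch of one SPL+ iteration; the function name and toy arguments are ours, not from the paper:

```python
import numpy as np

def spl_plus_step(theta, alpha, loss, grad, lam_theta, lam_alpha, beta):
    """One SPL+ step following updates (17)-(19): `loss` and `grad` are the
    sampled loss value and a (sub)gradient of the loss at `theta`."""
    # gamma_t from (19); the max{., 0} in the numerator covers the
    # "alpha too big" branch of Algorithm 2 (no movement in theta).
    gamma = max(loss - alpha + lam_alpha, 0.0) / (lam_theta * grad @ grad + lam_alpha)
    # Clipping at 1/(1 - beta) corresponds to the "alpha too small" branch.
    step = min(1.0 / (1.0 - beta), gamma)
    theta_new = theta - lam_theta * step * grad            # update (17)
    alpha_new = alpha - lam_alpha + lam_alpha * step       # update (18)
    return theta_new, alpha_new
```

With the default schedule (16), one would pass $\lambda_{\theta,t} = \lambda / (\ell_0\sqrt{t+1})$ and $\lambda_{\alpha,t} = \lambda \ell_0 / \sqrt{t+1}$ at each step.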
+
+Finally, we show that in this setting $\mathrm{SPL}+$ can also be seen as a stochastic algorithm that minimizes the Lagrangian of a slack formulation of Equation (20), where the Lagrange multiplier is equal to $1/(1 - \beta)$ . We establish this equivalence in Appendix D.
+
+# 4.5. Solving ERM
+
+When $P$ is the empirical distribution over $n$ training examples, and if $\beta = \frac{1}{n}$ , then minimizing the CVaR objective in (5) is equivalent to minimizing the expected risk. This is because $\alpha = \min_{i=1,\dots,n} \ell(\theta; z_i)$ due to (3), and consequently from (4) we have that
+
+$$
+\begin{array}{l} \mathrm {C V a R} _ {\beta} (\theta) = \mathbb {E} _ {z \sim P} [ \ell (\theta ; z) \mid \ell (\theta ; z) \geq \min _ {i = 1, \dots , n} \ell (\theta ; z _ {i}) ] \\ = \mathbb {E} _ {z \sim P} [ \ell (\theta ; z) ]. \\ \end{array}
+$$
+
+Thus minimizing (5) is equivalent to minimizing the expected risk. As a consequence, $\mathrm{SPL}+$ can also be used as an adaptive method for minimizing the expected risk.
+
+# 5. Convergence theory
+
+We instantiate the convergence analyses from Davis & Drusvyatskiy (2019) in the case of CVaR minimization, and compare the rates of SGM and SPL+ for losses satisfying the following assumption.
+
+Assumption 5.1 (Convex, subdifferentiable, and Lipschitz). There exists a square integrable random variable $M:\Omega \to \mathbb{R}$ such that for a.e. $z\in \Omega$ and all $\theta \in \mathbb{R}^d$ , the sample losses $\ell (\theta ;z)$ are convex, subdifferentiable$^{1}$ , and $M(z)$ -Lipschitz.
+
+Theorem 5.2 (Convergence rates of SGM and SPL+). Suppose Assumption 5.1 holds. Let $x^{*} = (\theta^{*},\alpha^{*})^{\top}$ be a minimizer of $F_{\beta}(\theta ,\alpha)$ , and $x_0\in \mathbb{R}^d$ an arbitrary initialization. Let $(x_{t})_{t = 0}^{T}$ be the iterates given by SGM or SPL+, and $\bar{x}_T = \frac{1}{T + 1}\sum_{t = 1}^{T + 1}x_t$ be the averaged iterate.
+
+SGM. 
If $\lambda_t = \frac{\lambda}{\sqrt{T + 1}}$ then the iterates $(x_{t})$ given by SGM in (7) satisfy + +$$ +\begin{array}{l} \mathbb {E} \left[ F _ {\beta} \left(\bar {x} _ {T}\right) - F _ {\beta} \left(x ^ {*}\right) \right] \\ \leq \frac {1}{2} \frac {\left\| \theta_ {0} - \theta^ {*} \right\| ^ {2}}{\lambda \sqrt {T + 1}} + \frac {1}{2} \frac {\left(\alpha_ {0} - \alpha^ {*}\right) ^ {2}}{\lambda \sqrt {T + 1}} + \frac {\lambda \mathrm {L} _ {\mathrm {S G M}} ^ {2}}{\sqrt {T + 1}}, \tag {21} \\ \end{array} +$$ + +where + +$$ +\mathrm {L} _ {\mathrm {S G M}} ^ {2} = \mathbb {E} _ {z} \left[ \frac {M (z) ^ {2} + 1}{(1 - \beta) ^ {2}} + 1 \right]. \tag {22} +$$ + +$\mathbf{SPL}+$ . If $\lambda_{\alpha, t} = \frac{\lambda_{\alpha}}{\sqrt{T + 1}}$ and $\lambda_{\theta, t} = \frac{\lambda_{\theta}}{\sqrt{T + 1}}$ , then the iterates $(x_{t})$ given by $\mathrm{SPL+}$ in Lemma 1 satisfy + +$$ +\begin{array}{l} \mathbb {E} \left[ F _ {\beta} \left(\bar {x} _ {T}\right) - F _ {\beta} \left(x ^ {*}\right) \right] \\ \leq \frac {1}{2} \frac {\left\| \theta_ {0} - \theta^ {*} \right\| ^ {2}}{\lambda_ {\theta} \sqrt {T + 1}} + \frac {1}{2} \frac {\left(\alpha_ {0} - \alpha^ {*}\right) ^ {2}}{\lambda_ {\alpha} \sqrt {T + 1}} + \frac {\lambda_ {\alpha} \mathrm {L} _ {\mathrm {S P L}} ^ {2}}{\sqrt {T + 1}}, \tag {23} \\ \end{array} +$$ + +where + +$$ +\mathrm {L} _ {\mathrm {S P L} +} ^ {2} = \mathbb {E} _ {z} \left[ \frac {\frac {\lambda_ {\theta}}{\lambda_ {\alpha}} M (z) ^ {2} + 1}{(1 - \beta) ^ {2}} \right]. \tag {24} +$$ + +This result follows by adapting Theorem 4.4 in Davis & Drusvyatskiy (2019), and we verify the necessary assumptions in Appendix B.
In particular, the best bound achieved by SGM via minimizing in $\lambda$ the RHS of (21) is with + +$$ +\lambda = \frac {\left\| x _ {0} - x ^ {*} \right\|}{L _ {\mathrm {S G M}} \sqrt {2}} \tag {25} +$$ + +yielding the rate + +$$ +\mathbb {E} \left[ F _ {\beta} \left(\bar {x} _ {T}\right) - F _ {\beta} \left(x ^ {*}\right) \right] \leq \frac {\sqrt {2} \left\| x _ {0} - x ^ {*} \right\| \mathrm {L} _ {\mathrm {S G M}}}{\sqrt {T + 1}}. +$$ + +Similarly, for $\mathrm{SPL}_{+}$ , the best bound is achieved at + +$$ +\lambda_ {\alpha} = \frac {\left| \alpha_ {0} - \alpha^ {*} \right| (1 - \beta)}{\sqrt {2}}, \quad \lambda_ {\theta} = \frac {\left\| \theta_ {0} - \theta^ {*} \right\| (1 - \beta)}{\sqrt {2} \mathbb {E} _ {z} [ M (z) ]}, \tag {26} +$$ + +giving us the rate + +$$ +\begin{array}{l} \mathbb {E} \left[ F _ {\beta} \left(\bar {x} _ {T}\right) - F _ {\beta} \left(x ^ {*}\right) \right] \leq \frac {\| \theta_ {0} - \theta^ {*} \| \mathbb {E} _ {z} [ M (z) ] + | \alpha_ {0} - \alpha^ {*} |}{\sqrt {2} (1 - \beta) \sqrt {T + 1}} \\ + \frac {\| \theta_ {0} - \theta^ {*} \| \mathbb {E} _ {z} [ M (z) ^ {2} ] / \mathbb {E} _ {z} [ M (z) ] + | \alpha_ {0} - \alpha^ {*} |}{\sqrt {2} (1 - \beta) \sqrt {T + 1}}. \\ \end{array} +$$ + +We can now use Theorem 5.2 to directly compare the convergence rate of SGM in (21) and $\mathrm{SPL + }$ in (23). First, both methods converge at the $O\left(1 / \sqrt{T + 1}\right)$ rate. The main difference is in the constants. To ease the comparison, let $\lambda_{\alpha} = \lambda_{\theta} = \lambda$ . In this case, we can see that the Lipschitz constant of SGM in (22) is always greater than the Lipschitz constant of $\mathrm{SPL + }$ in (24), thus $\mathrm{SPL + }$ has a better constant in its rate of convergence. This is another way to confirm that $\mathrm{SPL + }$ uses a better model of the objective function as compared to SGM. 
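This gap in the constants can be checked numerically. In the sketch below (hypothetical values for $M(z)$ and $\beta$, not from the paper), the two squared constants from (22) and (24) with $\lambda_\theta = \lambda_\alpha$ differ by exactly $\mathbb{E}_z[1] = 1$:

```python
# Sketch comparison of the squared constants L_SGM^2 in (22) and L_SPL+^2 in (24)
# over a finite sample of hypothetical Lipschitz constants M(z).

def L2_sgm(M, beta):
    return sum((m**2 + 1) / (1 - beta)**2 + 1 for m in M) / len(M)

def L2_spl_plus(M, beta, ratio=1.0):  # ratio = lambda_theta / lambda_alpha
    return sum((ratio * m**2 + 1) / (1 - beta)**2 for m in M) / len(M)

M = [0.5, 1.0, 2.0, 3.5]  # hypothetical values of M(z)
beta = 0.95
assert L2_sgm(M, beta) > L2_spl_plus(M, beta)           # SGM's constant is larger
assert abs(L2_sgm(M, beta) - L2_spl_plus(M, beta) - 1) < 1e-6
```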
Yet another advantage of $\mathrm{SPL + }$ is the flexibility of having two regularization parameters $\lambda_{\theta}$ and $\lambda_{\alpha}$ , which allows for a method that is independent of the units of the loss. + +# 6. Experiments + +We design several experiments to compare, and to test the sensitivity of, three methods: SGM; SPL with only one regularization parameter, that is, the updates in Lemma 1 with $\lambda_{\theta ,t} = \lambda_{\alpha ,t} = \lambda_t$ ; and our proposed $\mathrm{SPL + }$ updates. + +# 6.1. Synthetic data + +First we study the sensitivity of the methods to the choice of $\lambda$ when minimizing the CVaR objective (5). We use three different synthetic distributions, similar to the setup of Holland & Haress (2021), where we experiment with various combinations of loss functions $\ell (\cdot ;z)$ and data distributions controlled by the noise $\zeta$ (Table 2). For all problems we set the dimension to be $d = 10$ . For regression problems, $\theta_{\mathrm{gen}}\sim \mathcal{U}([0,1]^d)$ + +and for classification (logistic regression) we use $\theta_{\mathrm{gen}} \sim \mathcal{U}([0,10]^d)$ to increase linear separability. The loss functions and target generation schemes are listed in Table 2. Each target of the corresponding problem contains an error $\zeta$ from one of the distributions in Table 1, which controls the difficulty level of the problem. + +
| Distribution of ζ | Parameters |
| --- | --- |
| Normal(μ, σ²) | μ = 0, σ = 2 |
| Gumbel(μ, β) | μ = 0, β = 4 |
| LogNormal(μ, σ²) | μ = 2, σ = 1 |
+ +Table 1: Error distributions in 1D. + +Since the expectation in the CVaR objective (5) is difficult to compute in closed form, we evaluate the suboptimality gaps using an empirical average over $N = 10^{6}$ data points sampled i.i.d. from the corresponding distribution under a single fixed seed. This is done for each error distribution and loss function combination, each giving us the discretization + +$$ +\tilde {F} _ {\beta} (\theta , \alpha) = \alpha + \frac {1}{1 - \beta} \frac {1}{N} \sum_ {i = 1} ^ {N} \max \left\{\ell \left(\theta ; z _ {i}\right) - \alpha , 0 \right\}. \tag {27} +$$ + +We set $\beta = 0.95$ for all experiments, and thus have omitted $\beta$ from all plot descriptions. We run full-batch L-BFGS to obtain the optimal values for comparison, recorded as $\theta^{*},\alpha^{*}$ , and $F^{*}:= \tilde{F}_{\beta}(\theta^{*},\alpha^{*})$ . We initialize $\alpha_0\sim \mathcal{U}(0,1)$ and $\theta_0\sim \mathcal{N}(0,I_d)$ for all algorithms we compare. All algorithms are run for $T = 100,000$ iterations using 5 different seeds that control the randomness of initialization and sampling during the course of optimization. In the sensitivity plots (Figures 3 and 8), solid lines show the median values, while the shaded regions indicate the range over the random seeds. All objective evaluations are on $\tilde{F}_{\beta}(\bar{\theta}_t,\bar{\alpha}_t)$ using the averaged iterates. + +We employ a decreasing step size $\lambda_{t} = \lambda /\sqrt{t + 1}$ for SGM and SPL, while $\lambda_{t,\alpha} = \lambda \ell_0 / \sqrt{t + 1}$ and $\lambda_{t,\theta} = \lambda /(\ell_0\sqrt{t + 1})$ for SPL+. We study the sensitivity of the methods to $\lambda$ , varied over a logarithmically-spaced grid $10^{-6},10^{-5},\dots ,10^{4}$ , densified around $\lambda = 1$ using the extra grid $10^{-1.5},10^{-0.5},\ldots ,10^{1.5}$ . + +Figure 3 shows the final suboptimality achieved by SGM, SPL, and $\mathrm{SPL + }$ for different values of $\lambda$ .
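The suboptimality values are computed with the discretized objective (27), which can be sketched in a few lines of plain Python (the per-example losses below are hypothetical; this is not the authors' experiment code):

```python
# A minimal sketch of the discretized CVaR objective (27), evaluated from
# precomputed per-example losses l(theta; z_i). All inputs are hypothetical.

def empirical_cvar(losses, alpha, beta):
    """F_tilde_beta(theta, alpha) given the losses l(theta; z_i)."""
    n = len(losses)
    return alpha + sum(max(l - alpha, 0.0) for l in losses) / (n * (1.0 - beta))

losses = [0.1, 0.4, 0.2, 3.0, 0.3]  # only the 3.0 exceeds alpha below
value = empirical_cvar(losses, alpha=0.5, beta=0.95)
assert abs(value - 10.5) < 1e-6  # 0.5 + (3.0 - 0.5) / (5 * 0.05)
```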
For smooth losses (squared and logistic) we see that SPL and $\mathrm{SPL + }$ are significantly more robust and admit a much larger range of $\lambda$ for which they achieve a low suboptimality. Interestingly, for the absolute loss, the difference is barely noticeable. We also observe that $\mathrm{SPL + }$ often admits a wider basin of good settings for $\lambda$ as compared to SGM and even SPL. Moreover, $\lambda = 1$ is often in the set of good parameter choices for $\mathrm{SPL + }$ . This suggests that our scaling of $\lambda \ell_0$ and $\lambda /\ell_0$ , as motivated by balancing units, leads to a more stable and easier-to-tune method when choosing $\lambda$ around 1. In Figure 8, we perform the sensitivity analysis under a fixed accuracy target $\tilde{F} (\theta ,\alpha) - \tilde{F}^{*}\leq \epsilon$ , and draw similar stability conclusions. + +# 6.2. Real data + +Finally, we present the same experiment on four real datasets: YearPredictionMSD, E2006-tfidf, (binary) mushrooms and (binary) Covertype, all from the LIBSVM repository (Chang & Lin, 2011). Similar to the synthetic experiments, we set $\beta = 0.95$ and compute $\theta^{*}$ and $\alpha^{*}$ using L-BFGS. The objective is now by default the empirical CVaR in (27) since $P$ is the empirical distribution, + +$$ +F _ {\beta} (\theta , \alpha) = \alpha + \frac {1}{1 - \beta} \frac {1}{n} \sum_ {i = 1} ^ {n} \max \left\{\ell (\theta ; z _ {i}) - \alpha , 0 \right\} +$$ + +where $n$ is the number of examples in the training split. The loss function $\ell(\cdot; z_i)$ is the squared loss for YearPredictionMSD and E2006-tfidf, and the logistic loss for mushrooms and Covertype. For the comparison between SGM, SPL, and SPL+, we run the methods for 200N iterations (except on E2006-tfidf where we only run for 10N iterations due to its size). All convergence + +Table 2: Loss functions and data generation used for synthetic problems. The error distributions for $\zeta$ are described in Table 1.
We use $\sigma(\cdot)$ to denote the sigmoid function, and all $x$ 's are sampled uniformly from the unit sphere. + +
| Task | Loss $\ell(\theta; x, y)$ | Target |
| --- | --- | --- |
| Regression | $\frac{1}{2}(x^{\top}\theta - y)^2$ | $y = x^{\top}\theta_{\mathrm{gen}} + \zeta$ |
| Regression | $\lvert x^{\top}\theta - y\rvert$ | $y = x^{\top}\theta_{\mathrm{gen}} + \zeta$ |
| Classification | $\log(1 + \exp(-y x^{\top}\theta))$ | $y = 1$ w.p. $\sigma(x^{\top}\theta_{\mathrm{gen}} + \zeta)$ and $-1$ otherwise |
+ +![](images/f35c6c3952165eeef972d47d1bf98230e2942e8dea0239028dc6330bd28471b5.jpg) + +![](images/ed9c8334bec9927437b67a9e97f221e34b09942a009cf8c77fa4f106604e0926.jpg) + +![](images/f4171d04a810af9501809a4b3dc8b9358a11c8976fd129cd5f3d40c9f10f5342.jpg) + +![](images/8ce760d7390a4ed5042e594e02d574b8ca6bdb16dc2852e9e0c02002752cb7d0.jpg) + +![](images/98b563a37c4cf5db595a6fab2e520a50024f4aba3ecfddbb0d835db04378e948.jpg) + +![](images/3a0aa6c25bf1e4fc7ce70f87ff5c78c586b46124ad43129b41c9eb3adc06a1e6.jpg) +Absolute loss + +![](images/643a2ccc0ffba80249b434c9f3d63e89ea6d41d384f26e4b2139efc739447e24.jpg) +Figure 3: Sensitivity of final suboptimality to step size choices under a fixed $T = 10^{5}$ budget. The first two rows are regression tasks under the $\ell_{1}$ and $\ell_{2}$ losses, while the third row corresponds to a binary classification task under the logistic loss. The columns correspond to different noise distributions in the data generation that control the difficulty of the problem. + +![](images/76346294f5c3c91c0320c4b02776e333e032be02de7b80afe0a38630c4acf894.jpg) + +![](images/d70a63b10dc65b42cd6c52f6ba4834e20cedc13fea4e0aa6ce511a3d2824c903.jpg) + +![](images/8fff76dcce0ff6388442d1774a3f7148105fe2a300ae6c68a5f4fd1110346c31.jpg) + +![](images/c57897dde9cf68f7bdfd10cb8b1e4ef6ecd464b07a9ec500a088eb8eae66eb7a.jpg) + +![](images/85dff166beccd07843bc7b3135134f3b38db2aa21be74813d5d6268128247400.jpg) +Figure 4: Sensitivity and convergence plots on the YearPredictionMSD linear regression task (overdetermined). + +plots are based on the best $\lambda$ at the end of training for each method. + +For the least squares problem in Figure 4 and Figure 5, we again see that both SPL and $\mathrm{SPL + }$ can tolerate a much larger range of step sizes.
The best $\lambda$ is attained at or near $\lambda = 1$ for $\mathrm{SPL + }$ , which, although it performs slightly worse than SPL with the best selected $\lambda$ , allows us to consistently choose $\lambda = 1$ as a default. For the logistic regression problem in Figure 7 and Figure 6, SPL and $\mathrm{SPL + }$ are again similar to or better than SGM, although $\lambda = 1$ is no longer close to optimal for SPL and $\mathrm{SPL + }$ . + +# 7. Conclusion and future work + +Our numerical evidence suggests that for the CVaR minimization problem, while both SGM and SPL can be tuned to achieve similar performance, $\mathrm{SPL + }$ is often the most tolerant to misspecified step sizes. To further speed up $\mathrm{SPL + }$ and make it more competitive with SGM, in future work we will consider using non-uniform sampling to bias towards training examples with higher losses (as in Curi et al. (2020); Sagawa et al. (2020)). + +Efficient CVaR minimization with a stochastic algorithm opens up the possibility for new applications in machine learning. For instance, we could consider models that trade off between low average risk and heavy tails by adding the CVaR objective as a regularizer: + +$$ +\min _ {\theta \in \mathbb {R} ^ {d}} R _ {\mathrm {E R M}} (\theta) + \rho R _ {\mathrm {C V a R} _ {\beta}} (\theta) +$$ + +![](images/e2658f7a543c1a31eaa602668b7e54b8536e08c77caa07a2566db525b7c74fc4.jpg) + +![](images/d6a8a944ea1a1e69f8b75faae52e1d8c83cbdb45df092c15f12152a726f1073e.jpg) +Figure 5: Sensitivity and convergence plots on the E2006-tfidf linear regression task (underdetermined). + +where $\rho > 0$ is a parameter that captures this trade-off. Controlling this trade-off is important as machine learning models are increasingly deployed in safety-critical applications that call for control over the likelihood of failure.
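As a rough illustration of such a trade-off objective (a sketch under assumed values; $\rho$, $\beta$, the losses, and the $\alpha$-grid are all hypothetical, not from the paper), the CVaR term can be evaluated in its Rockafellar-Uryasev form by minimizing over $\alpha$ on a grid:

```python
# Sketch of R_ERM + rho * R_CVaR_beta on an empirical sample of losses.

def erm_plus_cvar(losses, rho, beta, alpha_grid):
    n = len(losses)
    erm = sum(losses) / n
    cvar = min(  # CVaR via min over alpha of the Rockafellar-Uryasev objective
        a + sum(max(l - a, 0.0) for l in losses) / (n * (1.0 - beta))
        for a in alpha_grid
    )
    return erm + rho * cvar

losses = [0.2, 0.5, 0.1, 4.0]  # one heavy-tailed example dominates the tail
alpha_grid = [i * 0.01 for i in range(501)]
value = erm_plus_cvar(losses, rho=0.5, beta=0.75, alpha_grid=alpha_grid)
# With beta = 0.75, CVaR is the mean of the worst 25% of losses (= 4.0),
# so value = 1.2 + 0.5 * 4.0 = 3.2.
assert abs(value - 3.2) < 1e-6
```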
As future work, we also see applications in training neural networks, where CVaR can be used to disincentivize the activations from being saturated too often, and thus help in speeding up training. This would offer an alternative to normalization layers, such as batchnorm or layernorm. + +# Acknowledgements + +We would like to thank Vasileios Charisopoulos and Frederik Künstner for helpful feedback on an earlier draft. We acknowledge the support of the Natural Sciences and Engineering Research Council of Canada (NSERC), Grant No. PGSD3-547276-2020. This work was partially done during S. Y. Meng's internship at the Flatiron Institute. + +# References + +Andersson, F., Mausser, H., Rosen, D., and Uryasev, S. Credit risk optimization with conditional value-at-risk criterion. Mathematical programming, 89(2):273-291, 2001. +Artzner, P., Delbaen, F., Eber, J.-M., and Heath, D. Coherent measures of risk. Mathematical Finance, 9(3):203-228, 1999. + +Asi, H. and Duchi, J. C. The importance of better models + +Logistic loss on mushrooms + +![](images/da7a871a5f0fb1e29d20cdd4e577566242570462191e2da5d936e3eb4d7273e6.jpg) + +![](images/02ddf957127c568e05e21de666eb147f1eab2a9a991e722778ddbf71e66326b9.jpg) + +![](images/fdb402f49fb6c431f133d4c98efe1ca183fab96140286fbb5e0675bbb40f8bb7.jpg) +SGM + +![](images/9083763b60fd869aa86c0c1b828d2e07b89458ab675f548f630590f6cef6d536.jpg) +SPL + +![](images/39652d05f5ce24840ec7c26c3ceb77181c028a2db600b37a505dbf1fba9a9b7d.jpg) +$\mathrm{SPL}_{+}$ +Figure 6: Sensitivity and convergence plots on the mushrooms binary classification task. The grey dashed line is the average accuracy on the test set achieved by $\theta^{*}$ . + +in stochastic optimization. Proceedings of the National Academy of Sciences, 116(46):22924-22930, 2019a. + +Asi, H. and Duchi, J. C. Stochastic (approximate) proximal point methods: Convergence, optimality, and adaptivity. SIAM Journal on Optimization, 29(3):2257-2290, 2019b. +Beck, A. First-Order Methods in Optimization. 
Society for Industrial and Applied Mathematics, 2017. +Burke, J. V. and Ferris, M. C. A Gauss-Newton method for convex composite optimization. Mathematical Programming, 71(2):179-194, 1995. +Cardoso, A. R. and Xu, H. Risk-Averse Stochastic Convex Bandit. In The 22nd International Conference on Artificial Intelligence and Statistics, volume 89, pp. 39-47, 2019. +Chang, C.-C. and Lin, C.-J. LIBSVM: a library for support vector machines. ACM Transactions on Intelligent Systems and Technology (TIST), 2(3):27, 2011. +Chow, Y. and Ghavamzadeh, M. Algorithms for CVaR Optimization in MDPs. In Advances in Neural Information Processing Systems 27, pp. 3509-3517, 2014. +Chow, Y., Tamar, A., Mannor, S., and Pavone, M. Risk-Sensitive and Robust Decision-Making: a CVaR Optimization Approach. In Advances in Neural Information Processing Systems 28, pp. 1522-1530, 2015. +Chow, Y., Ghavamzadeh, M., Janson, L., and Pavone, M. Risk-Constrained Reinforcement Learning with Percentile Risk Criteria. Journal of Machine Learning Research, 18:167:1-167:51, 2017.
Note that the reported accuracy is averaged across the entire training set, but since SPL and $\mathrm{SPL + }$ reached a lower CVaR objective (rather than the average loss objective), it is reasonable that their average accuracy is lower. Furthermore, the optimal accuracy on the test set may seem surprisingly low. The reason is that the objective being minimized is the CVaR objective, while the metric being plotted is the average test accuracy. The CVaR objective puts more emphasis on the top $1 - \beta$ fraction of the examples, and so when the dataset is not linearly separable, the average accuracy can be poor. This has also been noted previously in Curi et al. (2020). Unfortunately, computing the accuracy of the examples with the top $1 - \beta$ fraction of the losses does not necessarily give us more insight into the accuracy metric. This is due to the possibility that there are only a few outliers and the classifier found by minimizing the CVaR is still getting the majority of the examples wrong, just with lower losses. + +![](images/50325a878b85ed96268d965c7b9eda2196601663ecb9d1357aa5ead01bdad930.jpg) +SPL + +![](images/779bd847b671ce90351f10b1eabc195852e1250a2432fa4c1238af799fe1cd87.jpg) +$\mathrm{SPL}_{+}$ + +Curi, S., Levy, K. Y., Jegelka, S., and Krause, A. Adaptive sampling for stochastic risk-averse learning. In Advances in Neural Information Processing Systems 33, 2020. +Davis, D. and Drusvyatskiy, D. Stochastic model-based minimization of weakly convex functions. SIAM Journal on Optimization, 29(1):207-239, 2019. +Duchi, J. C. and Namkoong, H. Learning models with uniform performance via distributionally robust optimization. arXiv:1810.08750, 2018. +Duchi, J. C. and Ruan, F. Stochastic methods for composite and weakly convex optimization problems. SIAM Journal on Optimization, 28(4):3229-3259, 2018. + +Embrechts, P., Resnick, S. I., and Samorodnitsky, G. Extreme value theory as a risk management tool.
North American Actuarial Journal, 3(2):30-41, 1999. +Embrechts, P., Klüppelberg, C., and Mikosch, T. Modelling extremal events: for insurance and finance, volume 33. Springer Science & Business Media, 2013. +Gotoh, J.-y. and Takeda, A. CVaR minimizations in support vector machines. Financial Signal Processing and Machine Learning, pp. 233-265, 2016. +Gower, R. M., Blondel, M., Gazagnadou, N., and Pedregosa, F. Cutting some slack for SGD with adaptive Polyak step-sizes, 2022. +Holland, M. and Haress, E. M. Learning with risk-averse feedback under potentially heavy tails. In International Conference on Artificial Intelligence and Statistics, pp. 892-900. PMLR, 2021. +Krokhmal, P., Palmquist, J., and Uryasev, S. Portfolio optimization with conditional value-at-risk objective and constraints. Journal of Risk, 4:43-68, 2002. +Laguel, Y., Pillutla, K., Malick, J., and Harchaoui, Z. Superquantiles at work: Machine learning applications and efficient subgradient computation. Set-Valued and Variational Analysis, 29(4):967-996, 2021a. +Laguel, Y., Pillutla, K., Malick, J., and Harchaoui, Z. A superquantile approach to federated learning with heterogeneous devices. In 55th Annual Conference on Information Sciences and Systems, CISS, pp. 1-6. IEEE, 2021b. +Lewis, A. S. and Wright, S. J. A proximal method for composite minimization. Mathematical Programming, 158:501-546, 2016. +Maehara, T. Risk averse submodular utility maximization. Operations Research Letters, 43(5):526-529, 2015. +Mansini, R., Ogryczak, W., and Speranza, M. G. Conditional value at risk and related linear programming models for portfolio optimization. Annals of Operations Research, 152(1):227-256, 2007. +Ohsaka, N. and Yoshida, Y. Portfolio optimization for influence spread. In Proceedings of the 26th International Conference on World Wide Web, pp. 977-985, 2017. +Rockafellar, R. T. and Uryasev, S. Optimization of Conditional Value-at-Risk. Journal of Risk, 2:21-42, 2000. +Sagawa, S., Koh, P.
W., Hashimoto, T. B., and Liang, P. Distributionally robust neural networks for group shifts: On the importance of regularization for worst-case generalization. In International Conference on Learning Representations, 2020. + +Sani, A., Lazaric, A., and Munos, R. Risk-Aversion in Multi-armed Bandits. In Advances in Neural Information Processing Systems 25, pp. 3284-3292, 2012. +Shalev-Shwartz, S. and Wexler, Y. Minimizing the Maximal Loss: How and Why. In Proceedings of the 33rd International Conference on Machine Learning, volume 48 of JMLR Workshop and Conference Proceedings, pp. 793-801. JMLR.org, 2016. +Soma, T. and Yoshida, Y. Statistical learning with conditional value at risk. arXiv:2002.05826, 2020. +Takeda, A. and Sugiyama, M. $\nu$ -support vector machine as conditional value-at-risk minimization. In Proceedings of the Twenty-Fifth International Conference on Machine Learning, volume 307, pp. 1056-1063, 2008. +Wilder, B. Risk-Sensitive Submodular Optimization. In Proceedings of the Thirty-Second AAAI Conference on Artificial Intelligence, pp. 6451-6458, 2018. +Williamson, R. C. and Menon, A. K. Fairness risk measures. In Proceedings of the 36th International Conference on Machine Learning, volume 97 of Proceedings of Machine Learning Research, pp. 6786-6797, 2019. + +# A. SPL+ derivation for CVaR minimization + +Before deriving the updates, we first introduce the following lemma based on the truncated model from Asi & Duchi (2019b). + +Lemma 2 (Truncated model). Consider the problem + +$$ +x _ {t + 1} = \operatorname * {argmin} _ {x \in \mathbb {R} ^ {n}} \max \left\{c + \langle a, x - x _ {t} \rangle , 0 \right\} + \frac {1}{2 \lambda} \left\| x - x _ {t} \right\| ^ {2}, +$$ + +for some scalar $c$ and vector $a \in \mathbb{R}^n$ . The solution can be written in closed form as + +$$ +x _ {t + 1} = x _ {t} - \min \left\{\lambda , \frac {\max \{c , 0 \}}{\| a \| ^ {2}} \right\} a. +$$ + +Proof.
Note that $x_{t + 1}$ is the proximal point of the function + +$$ +f (x) = h (\langle a, x \rangle + b), \quad \text {with} \quad h (z) = \max \{z, 0 \}, \quad b = c - \langle a, x _ {t} \rangle , +$$ + +centered at $x = x_t$ . Using Beck (2017, Theorem 6.15), we have + +$$ +\begin{array}{l} \operatorname {prox} _ {\lambda f} (x) = x + \frac {a}{\| a \| ^ {2}} \left(\operatorname {prox} _ {\lambda \| a \| ^ {2} h} (\langle a, x \rangle + b) - (\langle a, x \rangle + b)\right) \\ = x _ {t} + \frac {a}{\| a \| ^ {2}} \left(\operatorname {prox} _ {\lambda \| a \| ^ {2} \max \{\cdot , 0 \}} (c) - c\right). \tag {28} \\ \end{array} +$$ + +In turn, the max function is the support function of the interval $[0, 1]$ . By Beck (2017, Theorem 6.46), it follows that + +$$ +\operatorname {prox} _ {\lambda \| a \| ^ {2} \max \{\cdot , 0 \}} (c) = c - \lambda \| a \| ^ {2} \operatorname {proj} _ {[ 0, 1 ]} \left(\frac {c}{\lambda \| a \| ^ {2}}\right). \tag {29} +$$ + +Plugging (29) into (28), we obtain + +$$ +\begin{array}{l} \operatorname {prox} _ {\lambda f} \left(x _ {t}\right) = x _ {t} - \frac {a}{\| a \| ^ {2}} \cdot \lambda \| a \| ^ {2} \operatorname {proj} _ {[ 0, 1 ]} \left(\frac {c}{\lambda \| a \| ^ {2}}\right) \\ = x _ {t} - \lambda a \cdot \operatorname {proj} _ {[ 0, 1 ]} \left(\frac {c}{\lambda \| a \| ^ {2}}\right). \\ \end{array} +$$ + +Writing $\operatorname{proj}_{[0,1]}(v) = \min \left\{\max \left\{v, 0\right\}, 1\right\}$ yields the result. + +Lemma 1 (Closed form updates of $\mathrm{SPL}+$ ).
The closed form solution to (13) is given by the updates + +$$ +\theta_ {t + 1} = \theta_ {t} - \lambda_ {\theta , t} \min \left\{\frac {1}{1 - \beta}, \gamma_ {t} \right\} \nabla \ell \left(\theta_ {t}; z\right), \tag {17} +$$ + +$$ +\alpha_ {t + 1} = \alpha_ {t} - \lambda_ {\alpha , t} + \lambda_ {\alpha , t} \min \left\{\frac {1}{1 - \beta}, \gamma_ {t} \right\}, \tag {18} +$$ + +$$ +\text {where} \quad \gamma_ {t} = \frac {\operatorname* {max} \left\{\ell \left(\theta_ {t} ; z\right) - \alpha_ {t} + \lambda_ {\alpha , t} , 0 \right\}}{\lambda_ {\theta , t} \| \nabla \ell \left(\theta_ {t} ; z\right) \| ^ {2} + \lambda_ {\alpha , t}}. \tag {19} +$$ + +Proof. We now derive the $\mathrm{SPL + }$ updates. Recall that for the CVaR objective, using the model $m_t^{\mathrm{SPL}}$ in (13), the stochastic model-based approach solves the following problem at each iteration: + +$$ +\underset {\theta , \alpha} {\arg \min } \alpha + \frac {1}{1 - \beta} \max \left\{\ell \left(\theta_ {t}; z\right) + \left\langle v _ {t}, \theta - \theta_ {t} \right\rangle - \alpha , 0 \right\} + \frac {1}{2 \lambda_ {\theta}} \| \theta - \theta_ {t} \| ^ {2} + \frac {1}{2 \lambda_ {\alpha}} (\alpha - \alpha_ {t}) ^ {2} \tag {30} +$$ + +where $v_{t} \in \partial \ell(\theta_{t}; z)$ , and we have temporarily dropped the time-dependence on $\lambda_{\alpha, t}$ and $\lambda_{\theta, t}$ . To arrive at the closed form solution, we will re-write (30) to fit the format of Lemma 2, and then apply the lemma.
To this end, we combine the $\alpha$ in front with its regularization term, + +$$ +\begin{array}{l} \alpha + \frac {1}{2 \lambda_ {\alpha}} (\alpha - \alpha_ {t}) ^ {2} = \alpha + \frac {1}{2 \lambda_ {\alpha}} (\alpha^ {2} - 2 \alpha \alpha_ {t} + (\alpha_ {t}) ^ {2}) \\ = \frac {1}{2 \lambda_ {\alpha}} \left(\left(\alpha - \alpha_ {t}\right) ^ {2} + 2 \lambda_ {\alpha} \alpha\right) \\ = \frac {1}{2 \lambda_ {\alpha}} \left(\left(\alpha - \alpha_ {t}\right) ^ {2} + 2 \lambda_ {\alpha} \alpha - 2 \lambda_ {\alpha} \alpha_ {t} + \lambda_ {\alpha} ^ {2}\right) + \frac {1}{2 \lambda_ {\alpha}} \left(2 \lambda_ {\alpha} \alpha_ {t} - \lambda_ {\alpha} ^ {2}\right) \\ = \frac {1}{2 \lambda_ {\alpha}} \left(\left(\alpha - \alpha_ {t}\right) ^ {2} + 2 \lambda_ {\alpha} \left(\alpha - \alpha_ {t}\right) + \lambda_ {\alpha} ^ {2}\right) + \text {C o n s t}. \\ = \frac {1}{2 \lambda_ {\alpha}} \left(\alpha + \lambda_ {\alpha} - \alpha_ {t}\right) ^ {2} + \text {C o n s t .} \\ \end{array} +$$ + +We now combine it with the regularization on $\theta$ + +$$ +\begin{array}{l} \underbrace {\frac {1}{2 \lambda_ {\theta}} \left\| \theta - \theta_ {t} \right\| ^ {2} + \alpha + \frac {1}{2 \lambda_ {\alpha}} (\alpha - \alpha_ {t}) ^ {2}} _ {(*)} = \frac {1}{2 \lambda_ {\theta}} \left(\left\| \theta - \theta_ {t} \right\| ^ {2} + \frac {\lambda_ {\theta}}{\lambda_ {\alpha}} (\alpha - \alpha_ {t} + \lambda_ {\alpha}) ^ {2}\right) + \text {C o n s t .} \\ = \frac {1}{2 \lambda_ {\theta}} \left(\| \theta - \theta_ {t} \| ^ {2} + \left(\sqrt {\frac {\lambda_ {\theta}}{\lambda_ {\alpha}}} (\alpha - \alpha_ {t}) + \sqrt {\lambda_ {\theta} \lambda_ {\alpha}}\right) ^ {2}\right) + \text {C o n s t .} \\ \end{array} +$$ + +Now we define a rescaled variable $\alpha$ and constant $\alpha_{t}$ as + +$$ +\hat {\alpha} = \sqrt {\frac {\lambda_ {\theta}}{\lambda_ {\alpha}}} \alpha \quad \text {a n d} \quad \hat {\alpha} _ {t} = \sqrt {\frac {\lambda_ {\theta}}{\lambda_ {\alpha}}} \alpha_ {t} - 
\sqrt {\lambda_ {\theta} \lambda_ {\alpha}} \tag {31} +$$ + +to arrive at + +$$ +(*) = \frac {1}{2 \lambda_ {\theta}} \left(\left\| \theta - \theta_ {t} \right\| ^ {2} + \left(\hat {\alpha} - \hat {\alpha} _ {t}\right) ^ {2}\right) + \mathrm {Const}. +$$ + +As a side note: to see that the units argument is appropriate, observe that $\hat{\alpha}$ now has the same units as $\theta$ since $\lambda_{\theta}$ has units inversely proportional to $\lambda_{\alpha}$ . This lets us concatenate $\hat{\alpha}$ with $\theta$ to form a new variable vector $x \in \mathbb{R}^{d+1}$ with consistent units overall. Now define + +$$ +x = \left( \begin{array}{l} \theta \\ \hat {\alpha} \end{array} \right) \quad \text {and} \quad x _ {t} = \left( \begin{array}{l} \theta_ {t} \\ \hat {\alpha} _ {t} \end{array} \right). \tag {32} +$$ + +The linearization inside $\max \{\cdot, 0\}$ in (30) can be written as + +$$ +\begin{array}{l} \ell \left(\theta_ {t}; z\right) + \left\langle \nabla \ell \left(\theta_ {t}; z\right), \theta - \theta_ {t} \right\rangle - \alpha = \ell \left(\theta_ {t}; z\right) + \left\langle \nabla \ell \left(\theta_ {t}; z\right), \theta - \theta_ {t} \right\rangle - \sqrt {\frac {\lambda_ {\theta}}{\lambda_ {\alpha}}} \sqrt {\frac {\lambda_ {\alpha}}{\lambda_ {\theta}}} \alpha \tag {33} \\ = \ell (\theta_ {t}; z) + \langle \nabla \ell (\theta_ {t}; z), \theta - \theta_ {t} \rangle - \sqrt {\frac {\lambda_ {\alpha}}{\lambda_ {\theta}}} \hat {\alpha} + \sqrt {\frac {\lambda_ {\alpha}}{\lambda_ {\theta}}} \hat {\alpha} _ {t} - \sqrt {\frac {\lambda_ {\alpha}}{\lambda_ {\theta}}} \hat {\alpha} _ {t} \\ = \ell (\theta_ {t}; z) - \sqrt {\frac {\lambda_ {\alpha}}{\lambda_ {\theta}}} \hat {\alpha} _ {t} + \left\langle \left( \begin{array}{c} \nabla \ell (\theta_ {t}; z) \\ - \sqrt {\frac {\lambda_ {\alpha}}{\lambda_ {\theta}}} \end{array} \right), \left( \begin{array}{c} \theta - \theta_ {t} \\ \hat {\alpha} - \hat {\alpha} _ {t} \end{array} \right) \right\rangle , \\ \end{array} +$$ + +and so minimizing the model $m_t$ is then equivalent to minimizing the following model $\hat{m}_t$ + +$$ +\min _ {x \in \mathbb {R} ^ {d + 1}} \max \left\{c + \langle a, x - x _ {t} \rangle , 0 \right\} + \frac {1}{2 \lambda_ {\theta}} \| x - x _ {t} \| ^ {2} +$$ + +up to constants, where + +$$ +c = \frac {1}{1 - \beta} \left(\ell \left(\theta_ {t}; z\right) - \sqrt {\frac {\lambda_ {\alpha}}{\lambda_ {\theta}}} \hat {\alpha} _ {t}\right) \quad \text {and} \quad a = \frac {1}{1 - \beta} \left( \begin{array}{c} \nabla \ell \left(\theta_ {t}; z\right) \\ - \sqrt {\frac {\lambda_ {\alpha}}{\lambda_ {\theta}}} \end{array} \right). +$$ + +From Lemma 2, the update is given by + +$$ +x ^ {*} = x _ {t} - \eta \cdot a +$$ + +where the step size is given by + +$$ +\eta := \min \left\{\lambda_ {\theta}, \frac {\max \{c , 0 \}}{\| a \| ^ {2}} \right\}. \tag {34} +$$ + +Plugging $a, c$ into $\eta$ gives + +$$ +\theta_ {t + 1} = \theta_ {t} - \min \left\{\lambda_ {\theta}, \frac {1}{1 - \beta} \frac {\max \left\{\ell (\theta_ {t} ; z) - \sqrt {\frac {\lambda_ {\alpha}}{\lambda_ {\theta}}} \hat {\alpha} _ {t} , 0 \right\}}{\frac {1}{(1 - \beta) ^ {2}} (\| \nabla \ell (\theta_ {t} ; z) \| ^ {2} + \frac {\lambda_ {\alpha}}{\lambda_ {\theta}})} \right\} \frac {\nabla \ell (\theta_ {t} ; z)}{1 - \beta} +$$ + +$$ +\hat {\alpha} _ {t + 1} = \hat {\alpha} _ {t} + \frac {1}{1 - \beta} \sqrt {\frac {\lambda_ {\alpha}}{\lambda_ {\theta}}} \min \left\{\lambda_ {\theta}, \frac {1}{1 - \beta} \frac {\max \left\{\ell (\theta_ {t} ; z) - \sqrt {\frac {\lambda_ {\alpha}}{\lambda_ {\theta}}} \hat {\alpha} _ {t} , 0 \right\}}{\frac {1}{(1 - \beta) ^ {2}} (\| \nabla \ell (\theta_ {t} ; z) \| ^ {2} + \frac {\lambda_ {\alpha}}{\lambda_ {\theta}})} \right\}.
+$$ + +Finally, substituting back using (31), that is $\hat{\alpha}_{t + 1} = \sqrt{\frac{\lambda_{\theta}}{\lambda_{\alpha}}}\alpha_{t + 1}$ and $\hat{\alpha}_t = \sqrt{\frac{\lambda_\theta}{\lambda_\alpha}}\alpha_t - \sqrt{\lambda_\theta\lambda_\alpha}$ , and simplifying gives (17) and (18). + +Lemma 3. Each $\mathrm{SPL}+$ update in Algorithm 2 is equivalent to the updates given by Equations (17) and (18). + +Proof. We can enumerate all the cases: + +1. If $c < 0$ , which amounts to checking whether + +$$ +\ell (\theta_ {t}; z) < \sqrt {\frac {\lambda_ {\alpha}}{\lambda_ {\theta}}} \hat {\alpha} _ {t} = \sqrt {\frac {\lambda_ {\alpha}}{\lambda_ {\theta}}} \left(\sqrt {\frac {\lambda_ {\theta}}{\lambda_ {\alpha}}} \alpha_ {t} - \sqrt {\lambda_ {\theta} \lambda_ {\alpha}}\right) = \alpha_ {t} - \lambda_ {\alpha} , +$$ + +then from (34) $\eta = 0$ , and the updates are + +$$ +\theta_ {t + 1} = \theta^ {*} = \theta_ {t} , +$$ + +$$ +\hat {\alpha} _ {t + 1} = \hat {\alpha} ^ {*} = \hat {\alpha} _ {t} . +$$ + +Multiplying the second equation by $\sqrt{\frac{\lambda_{\alpha}}{\lambda_{\theta}}}$ on both sides, we get + +$$ +\sqrt {\frac {\lambda_ {\alpha}}{\lambda_ {\theta}}} \sqrt {\frac {\lambda_ {\theta}}{\lambda_ {\alpha}}} \alpha_ {t + 1} = \sqrt {\frac {\lambda_ {\alpha}}{\lambda_ {\theta}}} \left(\sqrt {\frac {\lambda_ {\theta}}{\lambda_ {\alpha}}} \alpha_ {t} - \sqrt {\lambda_ {\theta} \lambda_ {\alpha}}\right) +$$ + +$$ +\alpha_ {t + 1} = \alpha_ {t} - \lambda_ {\alpha}. +$$ + +2.
If $c > \lambda_{\theta} \| a \|^2$ ( $> 0$ ), which is equivalent to the condition + +$$ +\frac {1}{1 - \beta} \left(\ell \left(\theta_ {t}; z\right) - \sqrt {\frac {\lambda_ {\alpha}}{\lambda_ {\theta}}} \hat {\alpha} _ {t}\right) > \frac {1}{(1 - \beta) ^ {2}} \left(\lambda_ {\theta} \| \nabla \ell \left(\theta_ {t}; z\right) \| ^ {2} + \lambda_ {\alpha}\right) +$$ + +or, using $\sqrt{\frac{\lambda_{\alpha}}{\lambda_{\theta}}}\hat{\alpha}_t = \alpha_t - \lambda_{\alpha}$ , + +$$ +\ell (\theta_ {t}; z) - \alpha_ {t} + \lambda_ {\alpha} > \frac {1}{1 - \beta} \left(\lambda_ {\theta} \| \nabla \ell (\theta_ {t}; z) \| ^ {2} + \lambda_ {\alpha}\right). +$$ + +Then $\eta = \lambda_{\theta}$ , and the updates reduce to + +$$ +\begin{array}{l} \theta_ {t + 1} = \theta_ {t} - \lambda_ {\theta} \frac {1}{1 - \beta} \nabla \ell (\theta_ {t}; z) \\ \alpha_ {t + 1} = \sqrt {\frac {\lambda_ {\alpha}}{\lambda_ {\theta}}} \hat {\alpha} ^ {*} = \sqrt {\frac {\lambda_ {\alpha}}{\lambda_ {\theta}}} \left(\hat {\alpha} _ {t} + \lambda_ {\theta} \frac {1}{1 - \beta} \sqrt {\frac {\lambda_ {\alpha}}{\lambda_ {\theta}}}\right) \\ = \alpha_ {t} - \lambda_ {\alpha} + \frac {1}{1 - \beta} \lambda_ {\alpha}. \\ \end{array} +$$ + +3.
Otherwise it must be the case that $0 < \frac{c}{\|a\|^2} < \lambda_\theta$ , so $\eta = \frac{c}{\|a\|^2}$ , and the updates are given by + +$$ +\begin{array}{l} \left( \begin{array}{c} \theta_ {t + 1} \\ \alpha_ {t + 1} \end{array} \right) = \left( \begin{array}{c} \theta^ {*} \\ \hat {\alpha} ^ {*} \end{array} \right) = \left( \begin{array}{c} \theta_ {t} \\ \hat {\alpha} _ {t} \end{array} \right) - \frac {c}{\| a \| ^ {2}} \cdot a \\ = \left( \begin{array}{c} \theta_ {t} \\ \hat {\alpha} _ {t} \end{array} \right) - \frac {\ell (\theta_ {t} ; z) - \sqrt {\frac {\lambda_ {\alpha}}{\lambda_ {\theta}}} \hat {\alpha} _ {t}}{\| \nabla \ell (\theta_ {t} ; z) \| ^ {2} + \frac {\lambda_ {\alpha}}{\lambda_ {\theta}}} \cdot \left( \begin{array}{c} \nabla \ell (\theta_ {t} ; z) \\ - \sqrt {\frac {\lambda_ {\alpha}}{\lambda_ {\theta}}} \end{array} \right) \\ = \left( \begin{array}{c} \theta_ {t} \\ \hat {\alpha} _ {t} \end{array} \right) - \underbrace {\frac {\ell (\theta_ {t} ; z) - \alpha_ {t} + \lambda_ {\alpha}}{\lambda_ {\theta} \left\| \nabla \ell (\theta_ {t} ; z) \right\| ^ {2} + \lambda_ {\alpha}}} _ {=: \nu} \lambda_ {\theta} \cdot \left( \begin{array}{c} \nabla \ell (\theta_ {t}; z) \\ - \sqrt {\frac {\lambda_ {\alpha}}{\lambda_ {\theta}}} \end{array} \right) \\ \end{array} +$$ + +Converting $\hat{\alpha}_t$ to $\alpha_{t}$ and $\hat{\alpha}^*$ to $\alpha^{*}$ , we get that the updates are + +$$ +\theta_ {t + 1} = \theta_ {t} - \lambda_ {\theta} \nu \nabla \ell \left(\theta_ {t}; z\right) +$$ + +$$ +\alpha_ {t + 1} = \alpha_ {t} - \lambda_ {\alpha} + \lambda_ {\alpha} \nu . +$$ + +Note that the regularization parameters $\lambda_{\theta}$ and $\lambda_{\alpha}$ can both be written in a time-dependent form as $\lambda_{\theta,t}$ and $\lambda_{\alpha,t}$ . This concludes our derivation for the updates of SPL+ given in Algorithm 2. As a comparison, we also include the closed-form updates for SGM applied to CVaR minimization in Algorithm 1. + +# B. 
Proof of Theorem 5.2 + +Theorem 5.2 (Convergence rates of SGM and SPL+). Suppose Assumption 5.1 holds. Let $x^{*} = (\theta^{*},\alpha^{*})^{\top}$ be a minimizer of $F_{\beta}(\theta ,\alpha)$ , and $x_0\in \mathbb{R}^{d+1}$ an arbitrary initialization. Let $(x_{t})_{t = 0}^{T}$ be the iterates given by SGM or SPL+, and $\bar{x}_T = \frac{1}{T + 1}\sum_{t = 0}^{T}x_t$ be the averaged iterate. + +SGM. If $\lambda_{t} = \frac{\lambda}{\sqrt{T + 1}}$ then the iterates $(x_{t})$ given by SGM in (7) satisfy + +$$ +\begin{array}{l} \mathbb {E} \left[ F _ {\beta} (\bar {x} _ {T}) - F _ {\beta} (x ^ {*}) \right] \\ \leq \frac {1}{2} \frac {\left\| \theta_ {0} - \theta^ {*} \right\| ^ {2}}{\lambda \sqrt {T + 1}} + \frac {1}{2} \frac {\left(\alpha_ {0} - \alpha^ {*}\right) ^ {2}}{\lambda \sqrt {T + 1}} + \frac {\lambda L _ {\mathrm {S G M}} ^ {2}}{\sqrt {T + 1}}, \tag {21} \\ \end{array} +$$ + +where + +$$ +\mathrm {L} _ {\mathrm {S G M}} ^ {2} = \mathbb {E} _ {z} \left[ \frac {M (z) ^ {2} + 1}{(1 - \beta) ^ {2}} + 1 \right] \tag {22} +$$ + +$\mathbf{SPL}+$ . If $\lambda_{\alpha, t} = \frac{\lambda_{\alpha}}{\sqrt{T + 1}}$ and $\lambda_{\theta, t} = \frac{\lambda_{\theta}}{\sqrt{T + 1}}$ , then the iterates $(x_t)$ given by SPL+ in Lemma 1 satisfy + +$$ +\begin{array}{l} \mathbb {E} \left[ F _ {\beta} \left(\bar {x} _ {T}\right) - F _ {\beta} \left(x ^ {*}\right) \right] \\ \leq \frac {1}{2} \frac {\left\| \theta_ {0} - \theta^ {*} \right\| ^ {2}}{\lambda_ {\theta} \sqrt {T + 1}} + \frac {1}{2} \frac {\left(\alpha_ {0} - \alpha^ {*}\right) ^ {2}}{\lambda_ {\alpha} \sqrt {T + 1}} + \frac {\lambda_ {\alpha} \mathrm {L} _ {\mathrm {S P L}} ^ {2}}{\sqrt {T + 1}}, \tag {23} \\ \end{array} +$$ + +where + +$$ +\mathrm {L} _ {\mathrm {S P L} +} ^ {2} = \mathbb {E} _ {z} \left[ \frac {\frac {\lambda_ {\theta}}{\lambda_ {\alpha}} M (z) ^ {2} + 1}{(1 - \beta) ^ {2}} \right]. \tag {24} +$$ + +Proof.
For our proof, recall that Equation 30 (restated here) + +$$ +\underset {\theta , \alpha} {\arg \min} \alpha + \frac {1}{1 - \beta} \max \left\{\ell (\theta_ {t}; z) + \langle v _ {t}, \theta - \theta_ {t} \rangle - \alpha , 0 \right\} + \frac {1}{2 \lambda_ {\theta}} \left\| \theta - \theta_ {t} \right\| ^ {2} + \frac {1}{2 \lambda_ {\alpha}} (\alpha - \alpha_ {t}) ^ {2} +$$ + +is the subproblem we solve to obtain the updates with separate regularization. Again, we have temporarily dropped the time-dependency on $\lambda_{\alpha ,t}$ and $\lambda_{\theta ,t}$ . The arg min is the same if we scale the entire expression by $\sqrt{\frac{\lambda_{\theta}}{\lambda_{\alpha}}}$ : + +$$ +\begin{array}{l} \underset {\theta , \alpha} {\arg \min } \sqrt {\frac {\lambda_ {\theta}}{\lambda_ {\alpha}}} \alpha + \frac {1}{1 - \beta} \max \left\{\sqrt {\frac {\lambda_ {\theta}}{\lambda_ {\alpha}}} \left(\ell \left(\theta_ {t}; z\right) + \langle v _ {t}, \theta - \theta_ {t} \rangle\right) - \sqrt {\frac {\lambda_ {\theta}}{\lambda_ {\alpha}}} \alpha , 0 \right\} \\ + \frac {1}{2 \sqrt {\lambda_ {\theta} \lambda_ {\alpha}}} \| \theta - \theta_ {t} \| ^ {2} + \frac {1}{2 \lambda_ {\alpha}} \sqrt {\frac {\lambda_ {\theta}}{\lambda_ {\alpha}}} \frac {\lambda_ {\alpha}}{\lambda_ {\theta}} \left(\sqrt {\frac {\lambda_ {\theta}}{\lambda_ {\alpha}}} \alpha - \sqrt {\frac {\lambda_ {\theta}}{\lambda_ {\alpha}}} \alpha_ {t}\right) ^ {2} \tag {35} \\ \end{array} +$$ + +Let $\hat{\alpha} \coloneqq \sqrt{\frac{\lambda_{\theta}}{\lambda_{\alpha}}} \alpha$ and $\hat{\alpha}_{t} \coloneqq \sqrt{\frac{\lambda_{\theta}}{\lambda_{\alpha}}} \alpha_{t}$ . Note that this is a simpler definition of $\hat{\alpha}_{t}$ than what we used in the derivation of the updates, since we no longer have to absorb the leading $\alpha$ into the regularization.
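As a quick sanity check of this rescaling argument, the sketch below (all scalar constants `lam_th`, `lam_al`, `ell0`, `v` are made up for illustration) verifies numerically that the scaled subproblem is exactly $\sqrt{\lambda_\theta / \lambda_\alpha}$ times the original subproblem under the substitution $\hat{\alpha} = \sqrt{\lambda_\theta / \lambda_\alpha}\,\alpha$, so the two share the same minimizer:

```python
import numpy as np

# Sanity check of the rescaling step (a sketch; all constants are made up):
# the scaled subproblem equals sqrt(lam_th / lam_al) times the original one
# once we substitute alpha_hat = sqrt(lam_th / lam_al) * alpha, so both
# objectives have the same argmin.
rng = np.random.default_rng(0)
lam_th, lam_al, beta = 0.5, 2.0, 0.9
theta_t, alpha_t = 0.3, 0.1
ell0, v = 1.0, 0.8                     # loss value and (sub)gradient at theta_t
s = np.sqrt(lam_th / lam_al)           # the scaling factor
alpha_hat_t = s * alpha_t

def original(theta, alpha):
    """Subproblem (30), written here with a scalar theta."""
    hinge = np.maximum(ell0 + v * (theta - theta_t) - alpha, 0.0)
    return (alpha + hinge / (1 - beta)
            + (theta - theta_t) ** 2 / (2 * lam_th)
            + (alpha - alpha_t) ** 2 / (2 * lam_al))

def scaled(theta, alpha_hat):
    """Scaled subproblem (36) in the variables (theta, alpha_hat)."""
    hinge = np.maximum(s * (ell0 + v * (theta - theta_t)) - alpha_hat, 0.0)
    return (alpha_hat + hinge / (1 - beta)
            + ((theta - theta_t) ** 2 + (alpha_hat - alpha_hat_t) ** 2)
            / (2 * np.sqrt(lam_th * lam_al)))

theta, alpha = rng.normal(size=1000), rng.normal(size=1000)
assert np.allclose(scaled(theta, s * alpha), s * original(theta, alpha))
print("scaled objective == s * original objective at all sampled points")
```

Since the two objectives differ only by a positive multiplicative factor, minimizing one in $(\theta, \hat{\alpha})$ and mapping back via $\alpha = \sqrt{\lambda_\alpha / \lambda_\theta}\,\hat{\alpha}$ recovers the minimizer of the other.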
The subproblem (35) can then be written in terms of the variables $\theta$ and $\hat{\alpha}$ using the scaled linearization + +$$ +\underset {\theta , \hat {\alpha}} {\arg \min } \hat {\alpha} + \frac {1}{1 - \beta} \max \left\{\left(\hat {\ell} \left(\theta_ {t}; z\right) + \langle \hat {v} _ {t}, \theta - \theta_ {t} \rangle\right) - \hat {\alpha}, 0 \right\} + \frac {1}{2 \sqrt {\lambda_ {\theta} \lambda_ {\alpha}}} \left(\left\| \theta - \theta_ {t} \right\| ^ {2} + \left(\hat {\alpha} - \hat {\alpha} _ {t}\right) ^ {2}\right) \tag {36} +$$ + +where $\hat{\ell} (\theta_t;z)\coloneqq \sqrt{\frac{\lambda_\theta}{\lambda_\alpha}}\ell (\theta_t;z)$ , and its scaled subgradient is $\hat{v}_t\coloneqq \sqrt{\frac{\lambda_\theta}{\lambda_\alpha}} v_t$ . Now define the scaled CVaR objective to be + +$$ +\hat {F} _ {\beta} (\theta , \hat {\alpha}) = \hat {\alpha} + \frac {1}{1 - \beta} \mathbb {E} _ {z \sim P} \left[ \max \left\{\hat {\ell} (\theta ; z) - \hat {\alpha}, 0 \right\} \right] \tag {37} +$$ + +and the updates from the scaled subproblem (36) give the $\mathrm{SPL + }$ method for solving this scaled CVaR problem. By Lemma 4, the assumptions required to invoke Theorem 4.4 in Davis & Drusvyatskiy (2019) hold. In particular, since $\ell (\theta ;z)$ is $M(z)$ -Lipschitz, we have that $\hat{\ell} (\theta ;z)$ is $\sqrt{\frac{\lambda_{\theta}}{\lambda_{\alpha}}} M(z)$ -Lipschitz. We first consider the convergence of $\mathrm{SPL + }$ in terms of the scaled objectives. Denote $\Delta = \| x_0 - x^*\|$ and $\hat{\lambda}\coloneqq \sqrt{\lambda_{\theta}\lambda_{\alpha}}$ , write $x = (\theta ,\hat{\alpha})^{\top}$ , and let $x^{*} = (\theta^{*},\hat{\alpha}^{*})^{\top}$ be a minimizer of $\hat{F}_{\beta}$ .
Using a constant step size of $\hat{\lambda}_t = \frac{\hat{\lambda}}{\sqrt{T + 1}}$ , from Theorem 4.4 in Davis & Drusvyatskiy (2019) the convergence rate is + +$$ +\mathbb {E} \left[ \hat {F} _ {\beta} \left(\bar {x} _ {T}\right) - \hat {F} _ {\beta} \left(x ^ {*}\right) \right] \leq \frac {\frac {1}{2} \Delta^ {2} + \mathrm {L} _ {\mathrm {S P L} +} ^ {2} \hat {\lambda} ^ {2}}{\hat {\lambda} \sqrt {T + 1}}. \tag {38} +$$ + +Finally, multiplying (37) through by $\sqrt{\frac{\lambda_{\alpha}}{\lambda_{\theta}}}$ we have that + +$$ +\sqrt {\frac {\lambda_ {\alpha}}{\lambda_ {\theta}}} \hat {F} _ {\beta} (\theta , \hat {\alpha}) = \alpha + \frac {1}{1 - \beta} \mathbb {E} _ {z \sim P} [ \max \left\{\ell (\theta ; z) - \alpha , 0 \right\} ] = F _ {\beta} (\theta , \alpha). +$$ + +Furthermore, multiplying (38) through by $\sqrt{\frac{\lambda_{\alpha}}{\lambda_{\theta}}}$ and substituting back $\hat{\lambda} \coloneqq \sqrt{\lambda_{\theta}\lambda_{\alpha}}$ and + +$$ +\Delta^ {2} = \| \theta_ {0} - \theta^ {*} \| ^ {2} + (\hat {\alpha} _ {0} - \hat {\alpha} ^ {*}) ^ {2} = \| \theta_ {0} - \theta^ {*} \| ^ {2} + \frac {\lambda_ {\theta}}{\lambda_ {\alpha}} (\alpha_ {0} - \alpha^ {*}) ^ {2} +$$ + +gives + +$$ +\begin{array}{l} \mathbb {E} \left[ F _ {\beta} \left(\bar {x} _ {T}\right) - F _ {\beta} \left(x ^ {*}\right) \right] \leq \sqrt {\frac {\lambda_ {\alpha}}{\lambda_ {\theta}}} \frac {\frac {1}{2} \Delta^ {2} + \mathrm {L} _ {\mathrm {S P L} +} ^ {2} \lambda_ {\theta} \lambda_ {\alpha}}{\sqrt {\lambda_ {\theta} \lambda_ {\alpha}} \sqrt {T + 1}} (39) \\ = \frac {\frac {1}{2} \left\| \theta_ {0} - \theta^ {*} \right\| ^ {2} + \frac {1}{2} \frac {\lambda_ {\theta}}{\lambda_ {\alpha}} \left(\alpha_ {0} - \alpha^ {*}\right) ^ {2} + \mathrm {L} _ {\mathrm {S P L} +} ^ {2} \lambda_ {\theta} \lambda_ {\alpha}}{\lambda_ {\theta} \sqrt {T + 1}} (40) \\ = \frac {1}{2} \frac {\left\| \theta_ {0} - \theta^ {*} \right\| ^ {2}}{\lambda_ {\theta} \sqrt {T + 1}} + \frac {1}{2}
\frac {\left(\alpha_ {0} - \alpha^ {*}\right) ^ {2}}{\lambda_ {\alpha} \sqrt {T + 1}} + \frac {\mathrm {L} _ {\mathrm {S P L} +} ^ {2} \lambda_ {\alpha}}{\sqrt {T + 1}} (41) \\ \end{array} +$$ + +which concludes the proof of convergence of $\mathrm{SPL}_{+}$ . As for the proof of SGM, it only remains to choose $\lambda_{\theta} = \lambda_{\alpha} = \lambda$ . + +To apply Theorem 4.4 in Davis & Drusvyatskiy (2019), we must first verify that their assumptions (B1)-(B4) hold. We enumerate these under the following general setup: writing the CVaR objective in Equation 5 as + +$$ +F _ {\beta} (x) = f (x) + r (x), \tag {42} +$$ + +where $r(x) = 0$ for SGM while $r(x) = \hat{\alpha}$ for SPL+. In the SPL+ case, we further write $f(x) = \mathbb{E}_z[h(c(x;z))]$ where $h(\cdot) = \frac{1}{1 - \beta}\max \left\{\cdot ,0\right\}$ and $c(x;z) = \hat{\ell} (\theta ;z) - \hat{\alpha}$ . Recall that the stochastic one-sided models used are + +$$ +\mathrm {S G M} \quad f _ {t} ^ {\mathrm {S G M}} (x; z) = F _ {\beta} \left(x _ {t}; z\right) + \langle g _ {t}, x - x _ {t} \rangle \quad \text {where } g _ {t} \in \partial F _ {\beta} \left(x _ {t}; z\right), \quad x = (\theta , \alpha) ^ {\top} \tag {43} +$$ + +$$ +\mathrm {S P L} + \quad f _ {t} ^ {\mathrm {S P L}} (x; z) = h \left(c \left(x _ {t}; z\right) + \left\langle u _ {t}, x - x _ {t} \right\rangle\right) \quad \text {where } u _ {t} \in \partial c \left(x _ {t}; z\right), \quad x = \left(\theta , \hat {\alpha}\right) ^ {\top} \tag {44} +$$ + +and the update in Equation 11 is equivalent to + +$$ +x _ {t + 1} = \underset {x \in \mathbb {R} ^ {d + 1}} {\arg \min } r (x) + f _ {t} (x; z) + \frac {1}{2 \lambda_ {t}} \| x - x _ {t} \| ^ {2}. \tag {45} +$$ + +The assumptions we need to verify are given in the following lemma, adapted from Davis & Drusvyatskiy (2019). + +Lemma 4. Let $\ell(\theta; z)$ be $M(z)$ -Lipschitz and convex. Consider the two alternative definitions for $f_t(x; z)$ given in (43) and (44).
We have that the following assumptions hold. + +(B1) (Sampling) It is possible to generate i.i.d. realizations $z_{1}, z_{2}, \dots \sim P$ . + +(B2) (One-sided accuracy) There is an open set $U$ containing $\operatorname{dom} r$ and a measurable function $(x, y; z) \mapsto g_x(y; z)$ , defined on $U \times U \times \Omega$ , satisfying + +$$ +\mathbb {E} _ {z} \left[ f _ {t} (x _ {t}; z) \right] = f (x _ {t}) \quad \forall x _ {t} \in U, +$$ + +and + +$$ +\mathbb {E} _ {z} \left[ f _ {t} (x; z) - f (x) \right] \leq \frac {\tau}{2} \left\| x _ {t} - x \right\| ^ {2} \quad \forall x _ {t}, x \in U. +$$ + +(B3) (Weak-convexity) The function $f_{t}(x;z) + r(x)$ is $\eta$ -weakly convex for all $x \in U$ , a.e. $z \in \Omega$ . +(B4) (Lipschitz property) There exists a measurable function $L: \Omega \to \mathbb{R}_+$ satisfying $\sqrt{\mathbb{E}_z[L(z)^2]} \leq \mathsf{L}$ and such that + +$$ +f _ {t} \left(x _ {t}; z\right) - f _ {t} (x; z) \leq L (z) \| x _ {t} - x \| \quad \forall x _ {t}, x \in U \text { and a.e. } z \sim P, +$$ + +where + +$$ +\mathrm {L} _ {\mathrm {S G M}} ^ {2} = \mathbb {E} _ {z} \left[ \frac {M (z) ^ {2} + 1}{(1 - \beta) ^ {2}} + 1 \right] \quad \text {for SGM, where } f _ {t} (x; z) \text { is (43),} \tag {46} +$$ + +$$ +\mathrm {L} _ {\mathrm {S P L} +} ^ {2} = \mathbb {E} _ {z} \left[ \frac {\frac {\lambda_ {\theta}}{\lambda_ {\alpha}} M (z) ^ {2} + 1}{(1 - \beta) ^ {2}} \right] \quad \text {for SPL+, where } f _ {t} (x; z) \text { is (44).} \tag {47} +$$ + +Proof. Assumption (B1) follows trivially from i.i.d. sampling, while (B2) follows from convexity of $\ell(\cdot; z)$ or $\hat{\ell}(\cdot; z)$ , giving us $\tau = 0$ . Since $r(x)$ is also convex in both methods and both models are convex, (B3) holds with $\eta = 0$ .
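Before turning to (B4), the following small simulation (not from the paper; the loss $\ell(\theta;z) = |\theta - z|$ and all constants are illustrative choices) checks the SGM Lipschitz constant in (46) empirically: the norm of the CVaR subgradient never exceeds $\sqrt{1 + (M(z)^2 + 1)/(1-\beta)^2}$.

```python
import numpy as np

# Illustration (loss and constants are made up): for the M(z) = 1-Lipschitz
# loss l(theta; z) = |theta - z|, the CVaR subgradient
#   g_t = ( 1{l - alpha >= 0} * u_t / (1 - beta),  1 - 1{l - alpha >= 0} / (1 - beta) )
# stays within the bound L(z) = sqrt(1 + (M(z)^2 + 1) / (1 - beta)^2) of (46).
rng = np.random.default_rng(1)
beta, M = 0.95, 1.0
L_bound = np.sqrt(1.0 + (M**2 + 1.0) / (1 - beta) ** 2)

max_norm = 0.0
for _ in range(10_000):
    theta, alpha, z = rng.normal(size=3)
    active = float(abs(theta - z) - alpha >= 0)   # indicator of the hinge being active
    u = np.sign(theta - z)                        # subgradient of |theta - z| in theta
    g = np.array([active * u / (1 - beta), 1.0 - active / (1 - beta)])
    max_norm = max(max_norm, float(np.linalg.norm(g)))

assert max_norm <= L_bound
print(f"max ||g_t|| = {max_norm:.3f} <= L(z) = {L_bound:.3f}")
```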
+ +To prove item (B4) for SGM, where $f^t = f_t^{\mathrm{SGM}}$ is given in (43), first note that from (6) for $g_t \in \partial F_\beta(x_t;z)$ and $u_t \in \partial \ell(\theta_t;z)$ we have that + +$$ +\begin{array}{l} \left\| g _ {t} \right\| ^ {2} = \mathbb {1} \left\{\ell \left(\theta_ {t}; z\right) - \alpha_ {t} \geq 0 \right\} \frac {\left\| u _ {t} \right\| ^ {2}}{(1 - \beta) ^ {2}} + \left(1 - \frac {\mathbb {1} \left\{\ell \left(\theta_ {t} ; z\right) - \alpha_ {t} \geq 0 \right\}}{(1 - \beta)}\right) ^ {2} \\ \leq \mathbb {1} \left\{\ell \left(\theta_ {t}; z\right) - \alpha_ {t} \geq 0 \right\} \frac {M (z) ^ {2}}{(1 - \beta) ^ {2}} + 1 - 2 \frac {\mathbb {1} \left\{\ell \left(\theta_ {t} ; z\right) - \alpha_ {t} \geq 0 \right\}}{(1 - \beta)} + \frac {\left(\mathbb {1} \left\{\ell \left(\theta_ {t} ; z\right) - \alpha_ {t} \geq 0 \right\}\right) ^ {2}}{(1 - \beta) ^ {2}} \\ \leq \frac {M (z) ^ {2}}{(1 - \beta) ^ {2}} + 1 + \frac {1}{(1 - \beta) ^ {2}}, \tag {48} \\ \end{array} +$$ + +where in the first inequality we used that $\ell (\cdot ;z)$ is $M(z)$ -Lipschitz to bound $\| u_t\| \leq M(z)$ , and in the second inequality we used that the indicator function $\mathbb{1}\left\{\ell (\theta_t;z) - \alpha_t\geq 0\right\}$ is positive and upper bounded by 1. Consequently, + +$$ +\left\| g _ {t} \right\| \leq \sqrt {1 + \frac {M (z) ^ {2} + 1}{(1 - \beta) ^ {2}}}. \tag {49} +$$ + +Thus using the above and that $\max \{\cdot, 0\}$ is 1-Lipschitz: + +$$ +\begin{array}{l} f _ {t} \left(x _ {t}; z\right) - f _ {t} (y; z) \leq \left\| g _ {t} \right\| \left\| x _ {t} - y \right\| \\ \leq \underbrace {\sqrt {1 + \frac {M (z) ^ {2} + 1}{(1 - \beta) ^ {2}}}} _ {=: L (z)} \| x _ {t} - y \|. 
\tag {By (49)} \\ \end{array} +$$ + +This gives us $\mathsf{L}_{\mathrm{SGM}}^2 = \mathbb{E}_z[L(z)^2 ] = \mathbb{E}_z\left[1 + \frac{M(z)^2 + 1}{(1 - \beta)^2}\right]$ . + +For $\mathrm{SPL + }$ and $f^{t} = f_{t}^{\mathrm{SPL}}$ defined in (44) we have that + +$$ +\begin{array}{l} (1 - \beta) (f _ {t} (x _ {t}; z) - f _ {t} (y; z)) = \max \left\{\hat {\ell} (\theta_ {t}; z) - \hat {\alpha} _ {t}, 0 \right\} - \max \left\{\hat {\ell} (\theta_ {t}; z) + \langle \hat {v} _ {t}, \theta - \theta_ {t} \rangle - \hat {\alpha}, 0 \right\} \\ \leq \max \left\{(\hat {\alpha} - \hat {\alpha} _ {t}) - \langle \hat {v} _ {t}, \theta - \theta_ {t} \rangle , 0 \right\} \quad (\max \{a, 0 \} - \max \{b, 0 \} \leq \max \{a - b, 0 \}) \\ = \max \left\{\left\langle \left( \begin{array}{c} - \hat {v} _ {t} \\ 1 \end{array} \right), \left( \begin{array}{c} \theta - \theta_ {t} \\ \hat {\alpha} - \hat {\alpha} _ {t} \end{array} \right) \right\rangle , 0 \right\} \\ \leq \left\| \binom {- \hat {v} _ {t}} {1} \right\| \left\| \binom {\theta - \theta_ {t}} {\hat {\alpha} - \hat {\alpha} _ {t}} \right\| \tag {Cauchy-Schwarz} \\ \leq \sqrt {1 + \left\| \hat {v} _ {t} \right\| ^ {2}} \left\| x _ {t} - y \right\| \\ \leq \sqrt {1 + \frac {\lambda_ {\theta}}{\lambda_ {\alpha}} M (z) ^ {2}} \| x _ {t} - y \| \quad (\text {since } \hat {v} _ {t} \text { is the scaled } v _ {t}) \\ \end{array} +$$ + +Dividing both sides by $(1 - \beta)$ gives us + +$$ +f _ {t} (x _ {t}; z) - f _ {t} (y; z) \leq \underbrace {\left(\frac {\sqrt {\frac {\lambda_ {\theta}}{\lambda_ {\alpha}} M (z) ^ {2} + 1}}{1 - \beta}\right)} _ {=: L (z)} \| x _ {t} - y \|. +$$ + +Taking expectation over $z$ yields + +$$ +\mathsf {L} _ {\mathrm {S P L} +} ^ {2} = \mathbb {E} _ {z} [ L (z) ^ {2} ] = \mathbb {E} _ {z} \left[ \frac {\frac {\lambda_ {\theta}}{\lambda_ {\alpha}} M (z) ^ {2} + 1}{(1 - \beta) ^ {2}} \right]. +$$ + +![](images/0bec03f1dd8ddece0793ed2ded7ad7df7fee11dbd4f2580832bc7ad00584b48a.jpg) + +# C.
Additional experiment results + +![](images/0b866fee6aaa3b6a8dae7f76ead4b753e6c4f919c0b7698fa94dd970c7e3042a.jpg) +Figure 8 shows a similar sensitivity analysis to Figure 3 in the main text. Instead of the sensitivity of final suboptimality, here we show the sensitivity of the minimum number of iterations to reach $\epsilon$ -suboptimality $\tilde{F}(\theta, \alpha) - \tilde{F}^* \leq \epsilon$ . + +![](images/9b3ac6a2b94c06078ffe205365fce421157f90e9d50f83c48d5934b48017646d.jpg) + +![](images/c93f9e71ae92b8c439c120d9da6e2cacde23281bba57e7b08650bee7ee8fac9c.jpg) + +![](images/c924cbe4b8ffae5b288d7526fc13598d1c93dfcfac8b348c501e9573e7ec462e.jpg) + +![](images/738fbd1ee6dd41da7a658dea4cc03b5f4aa0a00fa4d15292387e12d62604e143.jpg) + +![](images/e0eefb52ce0ebefdc756461e489ccfa247285e1a9e178ab4cd652c9a84ff27ca.jpg) + +![](images/7ae95ba276623f3dbb208dea696a5fab69b1d1fb54ee57f5ac478e8884753964.jpg) +Figure 8: Sensitivity of minimum number of iterations to achieve $\epsilon$ suboptimality to step size choices. The first two rows are regression tasks under the $\ell_1$ and $\ell_2$ losses, while the third row corresponds to a binary classification task under the logistic loss. The columns correspond to different noise distributions in the data generation that controls the difficulty of the problem. + +![](images/1d377bbad4192f73d5ab2e4a3d9b28357b808ab3a5e1635eacd6e27f36878839.jpg) + +![](images/d353e5b021446bb4b4f1a6e769d94a89e63ecd94f3c5fb1e7dbc5a8f27838d4c.jpg) + +![](images/0043e0df1e342943c36ad28b0c19064c767db4ddbeb06c138b2eba9916d13ecb.jpg) + +# D. Relationship to max loss minimization + +Lemma 5. The $SPL+$ updates in Lemma 1 minimize the prox-linear model of the Lagrangian of the max loss objective, + +$$ +\min _ {\theta \in \mathbb {R} ^ {d}} f (\theta) = \max _ {i = 1, \dots , n} \ell (\theta ; z _ {i}) +$$ + +with $\beta = 1 - 1 / n$ + +Proof. 
The equivalent slack formulation of the max loss objective is + +$$ +\min _ {s, \theta} s +$$ + +$$ +\begin{array}{l} \text {s.t.} \quad \ell (\theta ; z _ {i}) \leq s \quad \forall i = 1, \dots , n \end{array} +$$ + +Note that we can add a dummy constraint to obtain the equivalent problem + +$$ +\min _ {s, \theta} s +$$ + +$$ +\text {s.t.} \quad \ell (\theta ; z _ {i}) \leq s \quad \forall i = 1, \dots , n +$$ + +$$ +0 \leq 0 +$$ + +$$ +\Updownarrow +$$ + +$$ +\min _ {s, \theta} s +$$ + +$$ +\text {s.t.} \quad \max \left\{\ell (\theta ; z _ {i}) - s, 0 \right\} \leq 0 \quad \forall i = 1, \dots , n +$$ + +Then the Lagrangian is given by + +$$ +\mathcal {L} (s, \theta , \Gamma) = s + \frac {1}{n} \sum_ {i = 1} ^ {n} \Gamma_ {i} \max \left\{\ell \left(\theta ; z _ {i}\right) - s, 0 \right\} \tag {50} +$$ + +Note that we have included a $1 / n$ scaling for each constraint, which is fine because these scalings are positive and so can be absorbed into the Lagrange multipliers. The dual problem is given by + +$$ +\max _ {\Gamma \in \mathbb {R} ^ {n}} g (\Gamma) = \min _ {s, \theta} \mathcal {L} (s, \theta , \Gamma) +$$ + +$$ +\begin{array}{l} \text {s.t.} \quad \Gamma_ {i} \geq 0 \quad \forall i = 1, \dots , n \end{array} +$$ + +Given a set of multipliers $\Gamma$ , we need to minimize the Lagrangian over $s$ and $\theta$ . We can treat this as the base objective, the basis of our stochastic model construction.
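The reduction above can be checked numerically. The sketch below (the losses are made up) verifies that, with multipliers $\Gamma_i = n = \frac{1}{1-\beta}$, minimizing the Lagrangian (50) over the slack variable $s$ at a fixed $\theta$ recovers the max loss:

```python
import numpy as np

# Numerical sketch (the losses below are made up): with multipliers
# Gamma_i = n = 1/(1 - beta), i.e. beta = 1 - 1/n, the Lagrangian (50) at a
# fixed theta is s + (1/n) * sum_i n * max{l_i - s, 0} = s + sum_i max{l_i - s, 0},
# and minimizing it over the slack variable s recovers the max loss.
rng = np.random.default_rng(2)
losses = rng.normal(size=20)        # stand-ins for l(theta; z_i) at a fixed theta
n = losses.size

grid = np.linspace(losses.min() - 1.0, losses.max() + 1.0, 20_001)
penalty = np.maximum(losses[None, :] - grid[:, None], 0.0).sum(axis=1)
vals = grid + penalty               # the Lagrangian as a function of s

assert np.isclose(vals.min(), losses.max())
print("min_s Lagrangian =", float(vals.min()), " max loss =", float(losses.max()))
```

Any $s$ between the largest and second-largest loss is optimal here, which matches the CVaR interpretation of $s$ as the value-at-risk at level $\beta = 1 - 1/n$.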
At each iteration $t$ , we will use the following model + +$$ +m _ {t} (s, \theta) = s + \Gamma_ {i} \max \left\{\ell \left(\theta_ {t}; z _ {i}\right) + \langle \nabla \ell \left(\theta_ {t}; z _ {i}\right), \theta - \theta_ {t} \rangle - s, 0 \right\} +$$ + +Then, using the stochastic model-based approach, the updates are given by + +$$ +\theta_ {t + 1}, s _ {t + 1} = \underset {s, \theta} {\arg \min } m _ {t} (s, \theta) + \frac {1}{2 \lambda_ {\theta}} \| \theta - \theta_ {t} \| ^ {2} + \frac {1}{2 \lambda_ {s}} (s - s _ {t}) ^ {2} +$$ + +Observe that this corresponds exactly to our $\mathrm{SPL}+$ updates for the CVaR objective. Specifically, we can recover them by taking $s = \alpha$ and $\Gamma_{i} = \frac{1}{1 - \beta}$ . We now analyze the KKT conditions to see what $\Gamma$ should be. Let $u_{i}^{*} = \partial \max \{u,0\} |_{u = \ell(\theta^{*};z_{i}) - s^{*}}$ . + +$$ +0 \in \partial_ {s} \mathcal {L} (s ^ {*}, \theta^ {*}, \Gamma^ {*}) = 1 - \frac {1}{n} \sum_ {i = 1} ^ {n} \Gamma_ {i} ^ {*} u _ {i} ^ {*} \iff 1 \in \frac {1}{n} \sum_ {i = 1} ^ {n} \Gamma_ {i} ^ {*} u _ {i} ^ {*} +$$ + +$$ +0 \in \partial_ {\theta} \mathcal {L} (s ^ {*}, \theta^ {*}, \Gamma^ {*}) = \frac {1}{n} \sum_ {i = 1} ^ {n} \Gamma_ {i} ^ {*} u _ {i} ^ {*} \nabla \ell (\theta^ {*}; z _ {i}) +$$ + +$$ +\max \left\{\ell \left(\theta^ {*}; z _ {i}\right) - s ^ {*}, 0 \right\} \leq 0 \quad \forall i +$$ + +$$ +\Gamma_ {i} ^ {*} \geq 0 \quad \forall i +$$ + +$$ +\Gamma_ {i} ^ {*} (\max \left\{\ell \left(\theta^ {*}; z _ {i}\right) - s ^ {*}, 0 \right\}) = 0 \quad \forall i +$$ + +First, based on the constraints, we must have $\ell(\theta^{*}; z_{i}) \leq s^{*}$ for all $i$ . Now suppose none of the constraints are tight, i.e. $\ell(\theta^{*}; z_{i}) < s^{*}$ for all $i$ . Then $u_{i}^{*} = 0$ for all $i$ , so the first KKT condition would fail to hold. This means that the active constraint set $\mathcal{I} = \{i = 1, \dots, n : \ell(\theta^{*}; z_{i}) = s^{*}\}$ must be non-empty.
We can use this to simplify the first two conditions to + +$$ +1 \in \frac {1}{n} \sum_ {i \in \mathcal {I}} \Gamma_ {i} ^ {*} [ 0, 1 ] +$$ + +$$ +0 \in \frac {1}{n} \sum_ {i \in \mathcal {I}} \Gamma_ {i} ^ {*} [ 0, 1 ] \nabla \ell \left(\theta^ {*}; z _ {i}\right) +$$ + +which holds for $\Gamma_i^* \geq n$ for $i \in \mathcal{I}$ . If we let $\Gamma_i^* = n$ for all $i$ , then we will need $\beta = 1 - 1/n$ to recover the CVaR objective, which makes sense in the max loss minimization problem with $n$ training examples, so we can take $P = P_n$ the empirical distribution. \ No newline at end of file diff --git a/amodelbasedmethodforminimizingcvarandbeyond/images.zip b/amodelbasedmethodforminimizingcvarandbeyond/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..d1957b777eb74b61e210d1afc03e54ea54dc2000 --- /dev/null +++ b/amodelbasedmethodforminimizingcvarandbeyond/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:e89bb315beb9c51a7930f2b9f3497d58ef017a690c8441ce4b323765a9eab079 +size 1449293 diff --git a/amodelbasedmethodforminimizingcvarandbeyond/layout.json b/amodelbasedmethodforminimizingcvarandbeyond/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..8f0662d53b5918417f7220c72c646b69cfae3a4f --- /dev/null +++ b/amodelbasedmethodforminimizingcvarandbeyond/layout.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:dc2697c864f99b2d121941e25fd08d854683993425f8f6995b0ecbe4c98c3f43 +size 1050784 diff --git a/amodelfreeclosenessofinfluencetestforfeaturesinsupervisedlearning/16b1bb8b-955b-4ed8-a04b-5572fd8a6cd0_content_list.json b/amodelfreeclosenessofinfluencetestforfeaturesinsupervisedlearning/16b1bb8b-955b-4ed8-a04b-5572fd8a6cd0_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..97845d433b39c8373ed5c6bb73464d443c6cc3a1 --- /dev/null +++ 
b/amodelfreeclosenessofinfluencetestforfeaturesinsupervisedlearning/16b1bb8b-955b-4ed8-a04b-5572fd8a6cd0_content_list.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:0992457ac2dc785170bb5295043261bf124538330eff2bad52d0e964368ebfe9 +size 165004 diff --git a/amodelfreeclosenessofinfluencetestforfeaturesinsupervisedlearning/16b1bb8b-955b-4ed8-a04b-5572fd8a6cd0_model.json b/amodelfreeclosenessofinfluencetestforfeaturesinsupervisedlearning/16b1bb8b-955b-4ed8-a04b-5572fd8a6cd0_model.json new file mode 100644 index 0000000000000000000000000000000000000000..3cd64716909e503e4ac1b2e7364808afc03d0284 --- /dev/null +++ b/amodelfreeclosenessofinfluencetestforfeaturesinsupervisedlearning/16b1bb8b-955b-4ed8-a04b-5572fd8a6cd0_model.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:fa25a15ba1c0f4c5a9da9fe8ab76f466ea01672900a80996a66beebfeaa74506 +size 190173 diff --git a/amodelfreeclosenessofinfluencetestforfeaturesinsupervisedlearning/16b1bb8b-955b-4ed8-a04b-5572fd8a6cd0_origin.pdf b/amodelfreeclosenessofinfluencetestforfeaturesinsupervisedlearning/16b1bb8b-955b-4ed8-a04b-5572fd8a6cd0_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..5a8f886c096eeada9ab0b56a4df6997cc6f3ad3c --- /dev/null +++ b/amodelfreeclosenessofinfluencetestforfeaturesinsupervisedlearning/16b1bb8b-955b-4ed8-a04b-5572fd8a6cd0_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:6a7cbc119cc0bdb44d04adaa9e413e2ba37e6dd7b0af6133f7d16a03e2acf4ef +size 1568190 diff --git a/amodelfreeclosenessofinfluencetestforfeaturesinsupervisedlearning/full.md b/amodelfreeclosenessofinfluencetestforfeaturesinsupervisedlearning/full.md new file mode 100644 index 0000000000000000000000000000000000000000..650e90b1e34498571e55f2bf9cff59d78cc24783 --- /dev/null +++ b/amodelfreeclosenessofinfluencetestforfeaturesinsupervisedlearning/full.md @@ -0,0 +1,887 @@ +# A Model-free Closeness-of-influence Test for Features in 
Supervised Learning + +Mohammad Mehrabi1 Ryan A. Rossi2 + +# Abstract + +Understanding the effect of a feature vector $x \in \mathbb{R}^d$ on the response value (label) $y \in \mathbb{R}$ is the cornerstone of many statistical learning problems. Ideally, it is desired to understand how a set of collected features combine together and influence the response value, but this problem is notoriously difficult, due to the high-dimensionality of data and the limited number of labeled data points, among many other reasons. In this work, we take a new perspective on this problem, and we study the question of assessing the difference of influence that two given features have on the response value. We first propose a notion of closeness for the influence of features, and show that our definition recovers the familiar notion of the magnitude of coefficients in the parametric model. We then propose a novel method to test for the closeness of influence in general model-free supervised learning problems. Our proposed test can be used with a finite number of samples with control on the type I error rate, no matter the ground-truth conditional law $\mathcal{L}(Y|X)$ . We analyze the power of our test for two general learning problems: i) linear regression, and ii) binary classification under a mixture of Gaussian models, and show that under a proper choice of score function, an internal component of our test, and a sufficient number of samples, it achieves full statistical power. We evaluate our findings through extensive numerical simulations; specifically, we adopt the datamodel framework (Ilyas et al., 2022) for the CIFAR-10 dataset to identify pairs of training samples with different influence on the trained model via optional black-box training mechanisms. + +# 1. Introduction + +In a classic supervised learning problem, we are given a dataset of $n$ iid data points $\{(x_i, y_i)\}_{i=1:n}$ with feature vectors $x \in \mathbb{R}^d$ and response value (label) $y \in \mathbb{R}$ .
From the inferential point of view, understanding the influence of each individual feature $i \in \{1, \dots, d\}$ on $y$ is of paramount importance. Considering a parametric family of distributions for $\mathcal{L}(Y|X)$ is among the most studied techniques for this problem. In this setting, the influence of each feature can be read off from its corresponding coefficient value in the parametric model. However, such methods can result in spurious statistical findings, mainly due to model misspecification, where the ground-truth data-generating law $\mathcal{L}(Y|X)$ does not belong to the considered parametric family in the first place. A natural remedy for this problem is to relax the parametric family assumption, removing concerns about model misspecification. Besides the difficulties introduced by the model-free structure of the problem, we need a new notion to capture the influence of features, as there is no longer a coefficient vector as in the parametric case. + +In this paper, we follow the model-free structure, but take a new perspective on the generic problem of investigating the influence of features on the response value. In particular, as a first step towards this notoriously hard question without assuming any parametric class of distributions, we are specifically interested in assessing the closeness of influence of features. To this end, we posit the following fundamental question: + +(*) In a general model-free supervised learning problem, for two given features, is it possible to assess the closeness of their influence on the response value (label) in a statistically sound way? + +In this paper, we answer question $(^{*})$ affirmatively. We characterize a notion of closeness for the influence of features on $y$ under the general model-free framework.
We show that this notion aligns well with expectations in parametric models, where a small difference in the coefficient values implies close influence on the response value. We then cast the closeness-of-influence question as a hypothesis testing problem, and show that we can control the associated type I error rate with a finite number of samples. + +# 1.1. Motivation Behind Question (*) + +Beyond the inferential nature of Question (*), which helps to better understand the data-generating process of the data at hand, being able to answer this question has a myriad of applications for other classic machine learning tasks. In fact, inspired by recent advancements in interpretable machine learning systems, it is desired to strike a balance between model flexibility in capturing the ground-truth law $\mathcal{L}(Y|X)$ and using a small number of explanatory variables. For this goal, feature aggregation has been used to distill a large amount of feature information into a smaller number of features. In several parametric settings, features with equal coefficients are naturally grouped together; e.g., in linear regression the new feature $x_{1} + x_{2}$ is considered rather than $(x_{1},x_{2})$ when $x_{1},x_{2}$ have equal regression coefficients (Yan & Bien, 2021). In addition, identifying features with close influence on the response value can be used for tree-based aggregation schemes (Shao et al., 2021; Bien et al., 2021; Wilms & Bien, 2022). This is of paramount importance in learning problems involving rare features, such as the count of microbial species (Bien et al., 2021). In addition, in many learning problems, an honest, comprehensive assessment characterizing the behavior of $Y$ with respect to a certain attribute $A$ is desired.
This can be used to assess the performance of a model with respect to a sensitive attribute (fair machine learning), or to check whether two different treatments (different values of $A$) have close influence on potential outcomes.

# 1.2. Related Work

In machine learning, the problem of identifying a group of features that have the largest influence on the response value is often formulated as variable selection. Under a strong parametric assumption, the conditional law $\mathcal{L}(Y|X)$ is assumed to belong to a known class of parametric models, such as linear regression. For variable selection in the linear regression setting, the LASSO (Tibshirani, 1996) and the Dantzig selector (Candes & Tao, 2007) are the most widely used methods. Several other works address variable selection in linear regression with solutions satisfying certain structural constraints, e.g., (Bogdan et al., 2015; Tibshirani et al., 2005). A complementary line of work in recent years takes the model-X perspective (Candes et al., 2018). In contrast to the classical setup, in which a strong parametric assumption is placed on the conditional law, this setting shifts the focus to the feature distribution and assumes extensive knowledge of the distribution of $X$. Such knowledge arises naturally in many learning problems. For example, distributional information on features is available in learning scenarios where the sampling mechanism can be controlled, e.g., in the datamodel framework (Ilyas et al., 2022) and in gene knockout experiments (Peters et al., 2016; Cong et al., 2013). Other settings include problems where an abundant number of unlabeled data points (unsupervised learning) are available.

Another related line of work estimates and performs statistical inference on certain statistical model parameters.
Specifically, during the past few years there have been several works (Javanmard & Montanari, 2014; Van de Geer et al., 2014; Deshpande et al., 2019; Fei & Li, 2021) on inferential tasks for low-dimensional components of model parameters in high-dimensional $(d > n)$ linear and generalized linear models. A further complementary line of work is the conditional independence testing problem $X_{j} \perp Y|X_{-j}$, which tests whether a certain feature $X_{j}$ is independent of the response value $Y$ while controlling for the effect of the other features. This problem has been studied in several recent works in both parametric (Crawford et al., 2018; Belloni et al., 2014) and model-X frameworks (Candes et al., 2018; Javanmard & Mehrabi, 2021; Liu et al., 2022; Shaer & Romano, 2022; Berrett et al., 2020).

A couple of points regarding the scope of our paper are worth mentioning.

1. (Feature selection methods) Although Question (*) has a completely different nature from well-studied variable selection techniques, whose goal is to remove redundant features, an assessment tool for $(^{*})$ can also benefit the post-processing of feature selection methods. Specifically, we expect two redundant features to have close (zero) influence on the response value; therefore our closeness-of-influence test can be used to sift through the set of redundant features and potentially improve the statistical power of the baseline feature selection method.
2. (Regression models) We would like to emphasize that although fitting any class of regression models yields an estimated coefficient vector, comparing the magnitudes of the coefficient values to answer Question (*) is not statistically valid and would result in invalid findings, mainly due to model misspecification. In contrast to such fitted regression models, our proposed closeness-of-influence test works under no parametric assumption on the conditional law.
3.
(Hardness of non-parametric settings) The finite-sample guarantee on the type-I error rate of our test does not come for free. Specifically, this guarantee holds when certain partial knowledge of the feature distribution $\mathcal{L}(X)$ is available. This setup is often referred to as the model-X framework (Candes et al., 2018), where, in contrast to classical statistical setups, the conditional law $\mathcal{L}(Y|X)$ is left arbitrary while an adequate amount of information on the feature distribution $\mathcal{L}(X)$ is known. This requirement on the feature distribution places our work at a distance from completely non-parametric problems.

# 1.3. Summary of contributions and organization

In this work, we propose a novel method to test the closeness of influence of a given pair of features on the response value. The paper is organized into three major parts:

- In Section 2, we propose the notion of symmetric influence and formulate question $(^{*})$ as a tolerance hypothesis testing problem. We then introduce the main algorithm to construct the test statistic and the decision rule. We later show that the type-I error is controlled for a finite number of data points.
- In Section 3, for two specific learning problems, 1) the linear regression setup and 2) binary classification under a mixture of Gaussians, we analyze the statistical power of our proposed method. Our analysis reveals guidelines for the choice of the score function required by our procedure.
- In Section 5, we combine our closeness-of-influence test with datamodels (Ilyas et al., 2022) to study the influence of training samples on a trained black-box model. We consider the CIFAR-10 dataset and identify several pairs of training samples with different influence on the output models.
Finally, we empirically evaluate the performance of our method in several numerical experiments, showing that it always controls the type-I error with a finite number of data points while achieving high statistical power. We end the paper with concluding remarks and interesting avenues for further research.

# 1.4. Notation

For a random variable $X$, we let $\mathcal{L}(X)$ denote the probability law of $X$. For two density functions $p, q$, let $d_{\mathrm{TV}}(p, q)$ denote their total variation distance. We use $\Phi(t)$ and $\varphi(t)$ respectively for the cdf and pdf of the standard normal distribution. For an integer $n$, let $[n] = \{1, \dots, n\}$, and for a vector $x \in \mathbb{R}^d$ and integers $i, j \in [d]$, let $x_{\mathrm{swap}(i,j)}$ denote the vector obtained by swapping coordinates $i$ and $j$ of $x$. We let $\mathsf{N}(\mu, \Sigma)$ denote the multivariate normal distribution with mean $\mu$ and covariance matrix $\Sigma$.

# 2. Problem Formulation

We are interested in investigating whether two given features $i,j$ have close influence on the response value $y$. Specifically, in the linear regression setting $\mathcal{L}(Y|X) = \mathsf{N}(X^{\top}\theta ,\sigma^{2})$, two features $i$ and $j$ have an equal effect on the response variable $y$ if the model parameter $\theta$ has equal coordinates $i$ and $j$. In this parametric problem, the close-influence analysis can be formulated as the following hypothesis testing problem:

$$
H_0: \left|\theta_i - \theta_j\right| \leq \tau, \quad H_A: \left|\theta_i - \theta_j\right| > \tau.
$$

In practice, the posited parametric model may not hold, and due to model misspecification the reported results are not statistically sound.
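Before moving to the model-free extension, the swap notation and the parametric equal-effect intuition can be illustrated with a short sketch (our own illustration, not code from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
d, i, j = 5, 1, 3

def swap(x, i, j):
    """x_swap(i,j): return a copy of x with coordinates i and j exchanged
    (works on a single vector or on a batch of row vectors)."""
    z = x.copy()
    z[..., [i, j]] = z[..., [j, i]]
    return z

# Linear model whose coefficient vector has equal coordinates i and j.
theta = np.array([0.5, 2.0, -1.0, 2.0, 0.3])   # theta[i] == theta[j]
x = rng.normal(size=(1000, d))

# The conditional mean x^T theta is unchanged by the swap, so for
# y = x^T theta + noise the null H0: |theta_i - theta_j| <= tau holds
# with tau = 0.
assert np.allclose(x @ theta, swap(x, i, j) @ theta)
```

This is exactly the invariance that the model-free notion of symmetric influence generalizes beyond the linear model.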
Our primary focus is to extend the definition of close influence of features on the response value to a broader class of supervised learning problems, ideally with no parametric assumption on $\mathcal{L}(Y|X)$ (model-free). To this end, we first propose the notion of symmetric influence.

Definition 2.1 (Symmetric influence). We say that two features $i, j \in [d]$ have a symmetric influence on the response value $y$ if the conditional law $p_{Y|X}$ does not change once features $i$ and $j$ are swapped in $x$; more precisely, if $\mathcal{L}(Y|X) = \mathcal{L}(Y|X_{\mathrm{swap}(i,j)})$, where $X_{\mathrm{swap}(i,j)}$ is obtained by swapping coordinates $i$ and $j$ in $X$.

While perfect alignment of the density functions $p_{Y|X}$ and $p_{Y|X_{\mathrm{swap}(i,j)}}$ is regarded as equal influence, it is natural to regard a small (but nonzero) average distance between these two density functions as close influence of features $i,j$ on the response value. Inspired by this observation, we cast the closeness-of-influence testing problem as the tolerance hypothesis testing problem (1). Before analyzing this extended definition further, we show for two simple examples that the symmetric influence definition recovers the familiar equal-effect notion in parametric problems. It is worth noting that this result can be generalized to a broader class of parametric models.

Proposition 2.2. Consider the logistic model $\mathbb{P}(Y = 1|X = x) = \frac{1}{1 + \exp(-x^{\top}\theta)}$. In this model, features $i$ and $j$ have symmetric influence on $y$ if and only if $\theta_{i} = \theta_{j}$. In addition, for the linear regression setting $y = x^{\top}\theta + \varepsilon$ with $\varepsilon \sim \mathsf{N}(0,\sigma^2)$, features $i$ and $j$ have symmetric influence on $y$ if and only if $\theta_{i} = \theta_{j}$.

We refer to Appendix A for the proofs of all propositions and theorems.

# 2.1. Closeness-of-influence testing

Inspired by the definition of symmetric influence in Definition 2.1, we formulate the problem of testing the closeness of the influence of two features $i,j$ on $y$ as follows:

$$
\begin{array}{l} \mathcal{H}_0: \mathbb{E}\left[d_{\mathrm{TV}}\left(p_{Y|X}, p_{Y|X_{\mathrm{swap}(i,j)}}\right)\right] \leq \tau, \\ \mathcal{H}_A: \mathbb{E}\left[d_{\mathrm{TV}}\left(p_{Y|X}, p_{Y|X_{\mathrm{swap}(i,j)}}\right)\right] > \tau. \tag{1} \\ \end{array}
$$

This hypothesis testing problem allows general non-negative values of $\tau$. We can test for symmetric influence by simply selecting $\tau = 0$; in this case, we must have $p_{Y|X} = p_{Y|X_{\mathrm{swap}(i,j)}}$ almost surely (with respect to some measure on $\mathcal{X}$). To better understand the quantities on the left-hand side of (1), note that $p_{Y|X_{\mathrm{swap}(i,j)}}(y|x) = p_{Y|X}(y|x_{\mathrm{swap}(i,j)})$, so the quantity of interest can be written as

$$
\begin{array}{l} \mathbb{E}\left[d_{\mathrm{TV}}\left(p_{Y|X}, p_{Y|X_{\mathrm{swap}(i,j)}}\right)\right] \\ = \frac{1}{2}\int \left|p_{Y|X}(y|x) - p_{Y|X}\left(y \mid x_{\mathrm{swap}(i,j)}\right)\right| p_X(x)\, \mathrm{d}y\, \mathrm{d}x. \\ \end{array}
$$

We next describe the formal process of constructing the test statistic for this hypothesis testing problem.

Test statistic. We first provide the high-level intuition behind the test statistic used for testing (1). In a nutshell, for two i.i.d. data points $(x^{(1)},y^{(1)})$ and $(x^{(2)},y^{(2)})$, if the density function $p_{Y|X}$ is close to $p_{Y|X_{\mathrm{swap}(i,j)}}$, then for an arbitrary score function applied to $(x^{(1)},y^{(1)})$ and $(x_{\mathrm{swap}(i,j)}^{(2)},y^{(2)})$, each score should exceed the other with equal chance (50%). This observation is subtle, though.
Since we intervene on the features of the second data point (by swapping its coordinates), the feature distribution shifts, so the joint distributions of $(x^{(1)},y^{(1)})$ and $(x_{\mathrm{swap}(i,j)}^{(2)},y^{(2)})$ are not equal. This implies that we must also control for such distributional shifts on the features. The formal process for constructing the test statistic $U_{n}$ is given in Algorithm 1. We next present the decision rule for hypothesis testing problem (1).

# Algorithm 1 Test statistic for hypothesis testing problem (1)

Input: $n$ data points $\{(x^{(m)},y^{(m)})\}_{m = 1:n}$ with $(x,y)\in \mathbb{R}^d\times \mathbb{R}$ (we assume $n$ is even; if not, remove one sample), two features $i,j\in \{1,2,\dots,d\}$, and a score function $T:\mathcal{X}\times \mathcal{Y}\to \mathbb{R}$.

Output: A test statistic $U_{n}$.

For $1\leq m\leq \frac{n}{2}$ define

$$
\tilde{x}^{(m)} = x_{\mathrm{swap}(i,j)}^{(m + \frac{n}{2})}, \quad \tilde{y}^{(m)} = y^{(m + \frac{n}{2})}.
$$

Define the test statistic $U_{n}$:

$$
U_n = \frac{2}{n}\sum_{m=1}^{n/2} \mathbb{I}\left(T\left(x^{(m)}, y^{(m)}\right) \geq T\left(\tilde{x}^{(m)}, \tilde{y}^{(m)}\right)\right).
$$

Decision rule. For a data set $(\mathbf{X},\mathbf{Y})$ of size $n$ and the test statistic $U_{n}$ as per Algorithm 1, at significance level $\alpha$ consider the following decision rule:

$$
\psi_n(\mathbf{X}, \mathbf{Y}) = \mathbb{I}\left(\left|U_n - \frac{1}{2}\right| \geq \tau + \tau_X + \sqrt{\frac{\log(2/\alpha)}{n}}\right), \tag{2}
$$

with $\tau_{X}$ an upper bound on the total variation distance between the original feature distribution and the distribution obtained by swapping coordinates $i,j$. More precisely, for two independent feature vectors $X^{(1)},X^{(2)}$, let $\tau_{X}$ be such that $\tau_{X}\geq d_{\mathrm{TV}}\left(\mathcal{L}(X^{(1)}),\mathcal{L}(X_{\mathrm{swap}(i,j)}^{(2)})\right)$.
In fact, in several learning problems where the features have a certain symmetric structure, the quantity $\tau_{X}$ is zero; for instance, when the features are multivariate Gaussian with an isotropic covariance matrix. More on this can be found in Section 2.2.

Size of the test. In this section, we show that decision rule (2) controls the type-I error with a finite number of samples. More precisely, we show that the probability of falsely rejecting the null hypothesis in (1) never exceeds a predetermined significance level $\alpha$.

Theorem 2.3. Under the null hypothesis in (1), decision rule (2) has type-I error smaller than $\alpha$. More precisely,

$$
\mathbb{P}_{\mathcal{H}_0}(\psi(\mathbf{X}, \mathbf{Y}) = 1) \leq \alpha.
$$

Based on decision rule (2), we can construct p-values for hypothesis testing problem (1). The next proposition gives such a formulation.

Proposition 2.4. Consider

$$
p = \begin{cases} 1, & |U_n - 1/2| \leq \tau + \tau_X, \\ 1 \wedge \eta_n(U_n, \tau, \tau_X), & \text{otherwise}, \end{cases} \tag{3}
$$

with the function $\eta_n(u, \tau_1, \tau_2)$ defined as

$$
\eta_n(u, \tau_1, \tau_2) = 2\exp\left(-n\left(\left|u - \frac{1}{2}\right| - \tau_1 - \tau_2\right)^2\right).
$$

In this case, the $p$-value $p$ is super-uniform. More precisely, under the null hypothesis in (1), for every $\alpha \in [0,1]$ we have

$$
\mathbb{P}(p \leq \alpha) \leq \alpha.
$$

# 2.2. Effect of feature swap on the feature distribution

From the formulation of the decision rule in (2), it can be seen that an upper bound on the total variation distance between the densities of $X^{(1)}$ and $X_{\mathrm{swap}(i,j)}^{(2)}$ is required; this quantity appears as $\tau_{X}$ in (2). Regarding this change in the distribution of $X$, two points are worth mentioning.
First, in several classes of learning problems the feature vectors have a symmetric structure that renders the quantity $\tau_{X}$ zero; for instance, when the features have an isotropic Gaussian distribution (Proposition 2.5) or under the datamodel sampling scheme (Ilyas et al., 2022), with the formal statement given in Proposition 2.6. Second, the value of $\tau_{X}$ can be computed when an adequate amount of information on the distribution of $X$ is available, the so-called model-X framework (Candes et al., 2018). We would also like to emphasize that we do not need direct access to the entire density function $p_{X}$; an upper bound on the quantity $d_{\mathrm{TV}}(\mathcal{L}(X^{(1)}),\mathcal{L}(X_{\mathrm{swap}(i,j)}^{(2)}))$ is sufficient. In the next proposition, for the case where the features follow a general multivariate Gaussian distribution $\mathsf{N}(\mu ,\Sigma)$, we provide a valid closed-form value for $\tau_{X}$.

Proposition 2.5. Consider a multivariate Gaussian distribution with mean vector $\mu \in \mathbb{R}^d$ and covariance matrix $\Sigma \in \mathbb{R}^{d\times d}$. For two features $i$ and $j$, the following holds:

$$
\begin{array}{l} d_{\mathrm{TV}}\left(\mathcal{L}\left(X^{(1)}\right), \mathcal{L}\left(X_{\mathrm{swap}(i,j)}^{(2)}\right)\right) \\ \leq \frac{1}{2}\left[\operatorname{tr}\left(-I_d + P_{ij}\Sigma^{-1}P_{ij}\Sigma\right) + \left(\mu - P_{ij}\mu\right)^{\top}\Sigma^{-1}\left(\mu - P_{ij}\mu\right)\right]^{1/2}, \tag{4} \\ \end{array}
$$

where $P_{ij}$ is the permutation matrix that swaps coordinates $i$ and $j$; more precisely, for every $x \in \mathbb{R}^d$ we have $P_{ij}x = x_{\mathrm{swap}(i,j)}$.

It is easy to see that in the case of an isotropic Gaussian distribution with zero mean, we can choose $\tau_{X} = 0$.
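The bound (4) is straightforward to evaluate numerically; a minimal sketch (our own helper, with a guard against floating-point round-off):

```python
import numpy as np

def tau_X_gaussian(mu, Sigma, i, j):
    """Closed-form upper bound (4) on d_TV(L(X), L(X_swap(i,j))) for
    X ~ N(mu, Sigma), with P_ij the transposition of coordinates i, j."""
    d = len(mu)
    P = np.eye(d)
    P[[i, j]] = P[[j, i]]                      # permutation matrix P_ij
    Sinv = np.linalg.inv(Sigma)
    trace_term = np.trace(-np.eye(d) + P @ Sinv @ P @ Sigma)
    diff = mu - P @ mu
    # Guard against tiny negative values from floating-point round-off.
    return 0.5 * np.sqrt(max(trace_term + diff @ Sinv @ diff, 0.0))
```

For $\mu = 0$ and $\Sigma = \sigma^2 I$, both terms vanish and the bound evaluates to exactly zero, matching the isotropic case above.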
More concretely, when $\mu = 0$ and $\Sigma = \sigma^2 I$, Proposition 2.5 yields $\tau_{X} = 0$. We next consider a setting with binary feature vectors that arises naturally in datamodels (Ilyas et al., 2022) and will be used later in the experiments of Section 5.

Proposition 2.6. Consider a learning problem with a binary feature vector $x \in \{0,1\}^d$. For a positive integer $m$, suppose that $x$ is sampled uniformly at random from the set $S_m = \{x \in \{0,1\}^d : \sum x_i = m\}$; that is, the sample has binary entries with exactly $m$ nonzero coordinates. Then, in this setting, for two independent feature vectors $x^{(1)}, x^{(2)}$ the following holds:

$$
d_{\mathrm{TV}}\left(\mathcal{L}\left(X^{(1)}\right), \mathcal{L}\left(X_{\mathrm{swap}(i,j)}^{(2)}\right)\right) = 0.
$$

# 3. Power Analysis

In this section, we provide a power analysis of our method. For a fixed score function $T: \mathcal{X} \times \mathcal{Y} \to \mathbb{R}$ and two i.i.d. data points $(x^{(1)}, y^{(1)})$ and $(x^{(2)}, y^{(2)})$, consider the following cumulative distribution functions:

$$
\begin{array}{l} F_T(t) = \mathbb{P}\left(T\left(X^{(1)}, Y^{(1)}\right) \leq t\right), \\ G_T(t) = \mathbb{P}\left(T\left(X_{\mathrm{swap}(i,j)}^{(2)}, Y^{(2)}\right) \leq t\right). \\ \end{array}
$$

In the next theorem, we show that the power of our test depends on the average deviation of the function $F_{T} \circ G_{T}^{-1}$ from the identity mapping on the interval $[0, 1]$.

Theorem 3.1. Consider hypothesis testing problem (1) at significance level $\alpha$ with $n$ data points $(\mathbf{X},\mathbf{Y})$.
In addition, suppose that the score function $T:\mathcal{X}\times \mathcal{Y}\to \mathbb{R}$ satisfies the following condition for some $\beta \in (0,1)$:

$$
\left| \int_0^1 \left(F_T\left(G_T^{-1}(u)\right) - u\right) \mathrm{d}u \right| \geq \rho_n(\alpha, \beta, \tau) + \tau_X,
$$

with $\rho_{n}(\alpha ,\beta ,\tau) = 2\exp (-n\beta^{2}) + \sqrt{\frac{\log(2 / \alpha)}{n}} + \tau$. In this case, decision rule (2) used with the score function $T$ has type-II error not exceeding $\beta$. More precisely, $\mathbb{P}\left(\Psi_n(\mathbf{X},\mathbf{Y}) = 1\right)\geq 1 - \beta$.

The function $F_{T} \circ G_{T}^{-1}$ is called the ordinal dominance curve (ODC) (Hsieh & Turnbull, 1996; Bamber, 1975); it is the population counterpart of the P-P plot. A direct consequence of the above theorem is that the farther the ODC is from the identity map $i(u) = u$, the easier it is for our test to flag small gaps between the influences of features. We next focus on two learning problems: 1) the linear regression setting, and 2) binary classification under a Gaussian mixture model. For each problem, we use Theorem 3.1 to provide lower bounds on the statistical power of our closeness-of-influence test.

Linear regression setup. In this setting, we suppose that $y = x^{\mathsf{T}}\theta^{*} + \varepsilon$ with $\varepsilon \sim \mathsf{N}(0,\sigma^2)$ and feature vectors drawn i.i.d. from a multivariate normal distribution $\mathsf{N}(0,I_d)$. Since the features are isotropic Gaussian with zero mean, by an application of Proposition 2.5 we know that $\tau_{X}$ is zero. In the next theorem, we bound the type-II error for hypothesis testing problem (1) with $n$ data points and the score function $T(x,y) = |y - x^{\mathsf{T}}\widehat{\theta}|$ for some model estimate $\widehat{\theta}$.
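The quantities in Theorem 3.1 are easy to compute or estimate. Below is a minimal sketch of ours: the Monte Carlo ODC estimate uses the identity $\int_0^1 F_T(G_T^{-1}(u))\,\mathrm{d}u = \mathbb{P}(T_{\mathrm{orig}} \le T_{\mathrm{swap}})$, valid for continuous score distributions (function names are our own):

```python
import numpy as np

def rho_n(n, alpha, beta, tau):
    """rho_n(alpha, beta, tau) from Theorem 3.1."""
    return 2 * np.exp(-n * beta**2) + np.sqrt(np.log(2.0 / alpha) / n) + tau

def odc_deviation(scores_orig, scores_swap):
    """Monte Carlo estimate of |int_0^1 (F_T(G_T^{-1}(u)) - u) du|, using
    the identity int F_T(G_T^{-1}(u)) du = P(T_orig <= T_swap)."""
    a = np.asarray(scores_orig, dtype=float)[:, None]
    b = np.asarray(scores_swap, dtype=float)[None, :]
    return abs(np.mean(a <= b) - 0.5)
```

A score function is useful for the test exactly when `odc_deviation` exceeds `rho_n(...) + tau_X`; note that `rho_n` shrinks as the sample size $n$ grows.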
We show that in this example the power of the test depends strongly on the value $|\theta_i^* -\theta_j^*|$ and on the quality of the model estimate $\widehat{\theta}$. Indeed, the higher the contrast between the coefficient values $\theta_i^*$ and $\theta_j^*$, the easier it is for our test to reject the null hypothesis.

Theorem 3.2. Under the linear regression setting $y = x^{\mathsf{T}}\theta^{*} + \varepsilon$ with $\varepsilon \sim \mathsf{N}(0,\sigma^2)$ and feature vectors drawn from a normal population $x\sim \mathsf{N}(0,I_d)$, consider hypothesis testing problem (1) for features $i$ and $j$ with $\tau \in (0,1)$. We run Algorithm 1 at significance level $\alpha$ with the score function $T(x,y) = |y - x^{\mathsf{T}}\widehat{\theta}|$ for a model estimate $\widehat{\theta}\in \mathbb{R}^d$. For $\beta \in (0,1)$ such that $\tan (\frac{\pi}{2}\rho_n(\alpha ,\beta ,\tau))\leq \frac{1}{2}$, suppose that the following condition holds:

$$
|\theta_i^* - \theta_j^*| \geq \frac{2\tan\left(\frac{\pi}{2}\rho_n(\alpha,\beta,\tau)\right)}{1 - 2\tan\left(\frac{\pi}{2}\rho_n(\alpha,\beta,\tau)\right)} \, \frac{\sigma^2 + \|\widehat{\theta} - \theta^*\|_2^2}{|\widehat{\theta}_i - \widehat{\theta}_j|},
$$

for $\rho_{n}(\alpha, \beta, \tau)$ as per Theorem 3.1. Then the type-II error is bounded by $\beta$; more precisely, we have $\mathbb{P}(\Psi_n(\mathbf{X}, \mathbf{Y}) = 1) \geq 1 - \beta$.

We refer to the Appendix for the proof of Theorem 3.2. The right-hand side of the above expression can be decomposed into two major parts. The first part involves the problem parameters, such as the number of samples $n$ and the error tolerance values $\alpha$ and $\beta$; for a moderately large number of samples $n$ and a small tolerance value $\tau$, this quantity can become sufficiently small.
On the other hand, the magnitude of the second part depends heavily on the quality of the model estimate $\widehat{\theta}$ and on the inherent noise level $\sigma^2$ of the problem, which essentially indicates how structured the learning problem is. Another interesting observation concerns the quantity $|\widehat{\theta}_i - \widehat{\theta}_j|$: small values of this quantity make it harder to discover deviations from symmetric influence. This conforms to our expectation, given that in the extreme scenario $\widehat{\theta}_i = \widehat{\theta}_j$ it is impossible for the score function to discern $\theta_i^*$ from $\theta_j^*$, because of the additive nature of the considered score function.

Binary classification. In this section, we provide a power analysis of our method for a binary classification setting. Specifically, we consider binary classification under a mixture of Gaussians. More precisely, the data-generating process is given by

$$
y = \begin{cases} +1, & \text{w.p. } q, \\ -1, & \text{w.p. } 1 - q, \end{cases} \qquad x \sim \mathsf{N}(y\mu, I_d). \tag{5}
$$

We consider the influence testing problem (1) with $\tau = 0$. In the next theorem, we provide a lower bound on the statistical power of our method under this learning setup.

Theorem 3.3. Under the binary classification setup (5), consider hypothesis testing problem (1) for $\tau = 0$. We run Algorithm 1 with the score function $T(x,y) = yx^{\top}\widehat{\theta}$ at significance level $\alpha$, and suppose that for some non-negative value $\beta$ the following holds:

$$
|\mu_i - \mu_j| \geq \Phi^{-1}\left(\frac{1}{2} + \rho_n(\alpha, \beta, 0)\right) \frac{\sqrt{2}\,\|\widehat{\theta}\|_2}{|\widehat{\theta}_i - \widehat{\theta}_j|},
$$

where $\rho_{n}(\alpha ,\beta ,\tau)$ is as per Theorem 3.1.
Then the type-II error is bounded by $\beta$; more concretely, we have $\mathbb{P}(\Psi_n(\mathbf{X},\mathbf{Y}) = 1)\geq 1 - \beta$.

It is important to note that in this particular setting the features do not follow a Gaussian distribution with zero mean; instead, they are sampled from a mixture of Gaussian distributions with means $\mu$ and $-\mu$. The reason why $\tau_{X} = 0$ can be used is not immediately obvious. However, we demonstrate that when testing for $\tau = 0$, under the null hypothesis $\mu_{i}$ must equal $\mu_{j}$, in which case the distribution of the features remains unchanged when coordinates $i$ and $j$ are swapped. As a result, we can employ $\tau_{X} = 0$ in this scenario. This argument is elaborated upon in the proof of Theorem 3.3.

From the above expression it can be observed that for a sufficiently large number of data points $n$ and a small value $\tau$, the value $\Phi^{-1}(1/2 + \rho_n)$ becomes smaller and converges to zero. In addition, an ideal model estimate $\widehat{\theta}$ must have small norm and high contrast between the values $\widehat{\theta}_i$ and $\widehat{\theta}_j$. An interesting observation concerns the role of the other coordinates of $\widehat{\theta}$: for the score function $T(x,y) = yx^{\mathsf{T}}\widehat{\theta}$, the support of the model estimate $\widehat{\theta}$ should ideally be a subset of the two features $\{i, j\}$, since this decreases $\|\widehat{\theta}\|_2$ and increases the value of $|\widehat{\theta}_i - \widehat{\theta}_j|$.

# 4. Experiments

In this section, we evaluate the performance of our proposed method for identifying symmetric influence across features. We start with the isotropic Gaussian model for feature vectors; more precisely, we consider $x \sim \mathsf{N}(0, I_d)$ with $d = 10$.
In this case, we have $\tau_X = 0$, and we consider hypothesis testing problem (1) with $\tau = 0$ (symmetric influence).

Size of the test. We first examine the size of our proposed method. To this end, we consider the conditional law $y|x \sim \mathsf{N}(x^{\top}Sx, 1)$ for a positive semi-definite matrix $S$ with entries $S_{i,j} = 1 + \mathbb{I}(i = j)$. The conditional mean of $y|x$ is a quadratic form, and it is easy to see that in this case, for every two features $i,j \in \{1,\dots,10\}$, we have $x^{\top}Sx = x_{\mathrm{swap}(i,j)}^{\top}Sx_{\mathrm{swap}(i,j)}$; therefore the null hypothesis holds. We test for the symmetric influence of each pair of features ($\binom{10}{2} = 45$ tests). We run our method with the score function $T(x,y) = |y - \widehat{\theta}^{\top}x|$ with $\widehat{\theta} \sim \mathsf{N}(0,I_d)$; the estimate $\widehat{\theta}$ is fixed across all 45 tests. We assume access to 1000 data points and consider three significance levels, $\alpha = 0.1, 0.15$, and $0.2$. The results of this experiment are shown in Figure 1(b), where the reported numbers (rejection rates) are averaged over 1000 independent experiments. For all three significance levels, the rejection rates are smaller than $\alpha$, and therefore the size of the test is controlled.

Power analysis. We consider the linear regression setting $y|x \sim \mathsf{N}(x^{\mathsf{T}}\theta^{*},1)$ for $\theta^{*} \in \mathbb{R}^{d}$ with $d = 10$. We consider the following pattern for the signal strength: $\theta_{1}^{*} = \theta_{2}^{*} = 1$, $\theta_{3}^{*} = \theta_{4}^{*} = 2$, $\theta_{5}^{*} = \theta_{6}^{*} = 3$, $\theta_{7}^{*} = \theta_{8}^{*} = 4$, $\theta_{9}^{*} = \theta_{10}^{*} = 5$.
In this example, the pairs of features $\mathcal{I} = \{(1,2),(3,4),(5,6),(7,8),(9,10)\}$ have symmetric influence, and for any other pair the null hypothesis in (1) must be rejected. We use the score function $T(x,y) = |y - x^{\mathsf{T}}\widehat{\theta}|$ at significance level $\alpha = 0.1$ for three different choices of $\widehat{\theta}$, drawn as $\widehat{\theta} \sim \mathsf{N}(\theta^*,\sigma^2 I_d)$ for the three values $\sigma = 1, 2$, and $3$; a smaller value of $\sigma$ implies a better estimate of $\theta^*$. The average rejection rates are depicted in Figure 1(a),

![](images/6ef9caa414ba6875804430040384b73782c05a9e6c050312930fc20b7b0eb6f5.jpg)
(a) Average rejection rate of the null hypothesis in (1) for $\tau = 0$ and features with an isotropic Gaussian distribution $x\sim \mathsf{N}(0,I_{10})$. In this experiment, we consider $y|x\sim \mathsf{N}(x^{\top}\theta^{*},1)$ for $\theta^{*} = (1,1,2,2,3,3,4,4,5,5)$.

![](images/b0a2dd3a29415342ea61e2d5e48b0a08cdcbdb5637741201f21da4605611549d.jpg)
(b) Average rejection rate of the null hypothesis in (1) for $\tau = 0$ and features drawn from an isotropic Gaussian distribution $x\sim \mathsf{N}(0,I_{10})$. In this experiment, we consider $y|x\sim \mathsf{N}(x^{\mathsf{T}}Sx,1)$ for the positive semi-definite matrix with $S_{i,j} = 1 + \mathbb{I}(i = j)$ (2 on the diagonal and 1 on the off-diagonal entries).

Figure 1: Average rejection rates for different settings.

where each $10 \times 10$ square corresponds to a different $\sigma$ value (three plots in total). Specifically, the $(i,j)$-th cell in each plot denotes the average rejection rate of the symmetric influence hypothesis for features $i$ and $j$. The rejection rates are obtained by averaging over 1000 independent experiments.
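A condensed Monte Carlo sketch of this power experiment (our own simplified re-implementation, not the authors' experiment code; trial counts are reduced for speed, and the model estimate is redrawn each trial):

```python
import numpy as np

rng = np.random.default_rng(0)
d, n, n_trials, alpha = 10, 1000, 200, 0.1
theta_star = np.array([1, 1, 2, 2, 3, 3, 4, 4, 5, 5], dtype=float)

def reject(i, j, sigma_hat):
    """One trial of the closeness-of-influence test for features i, j."""
    X = rng.normal(size=(n, d))
    Y = X @ theta_star + rng.normal(size=n)
    theta_hat = theta_star + sigma_hat * rng.normal(size=d)  # noisy estimate
    half = n // 2
    X2 = X[half:].copy()
    X2[:, [i, j]] = X2[:, [j, i]]                            # swap features
    score = lambda Xs, Ys: np.abs(Ys - Xs @ theta_hat)
    U = np.mean(score(X[:half], Y[:half]) >= score(X2, Y[half:]))
    # tau = tau_X = 0 for isotropic Gaussian features, cf. decision rule (2)
    return abs(U - 0.5) >= np.sqrt(np.log(2 / alpha) / n)

# Null pair (1,2) vs. alternative pair (1,10), in the paper's 1-based indexing.
rate_null = np.mean([reject(0, 1, 1.0) for _ in range(n_trials)])
rate_alt = np.mean([reject(0, 9, 1.0) for _ in range(n_trials)])
```

Consistent with Figure 1(a), the rejection rate for the null pair stays below $\alpha$, while the high-contrast pair $(1,10)$ is rejected most of the time at $\sigma = 1$.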
First, for pairs belonging to the set $\mathcal{I}$ the rejection rate is always smaller than the significance level $\alpha = 0.1$, so the size of the test is controlled. In addition, as the $\sigma$ value decreases (moving from right to left), the test achieves higher power (more dark blue regions). This is consistent with our prior expectation that the statistical power of our method depends on the quality of the score function $T$ and of the model estimate $\widehat{\theta}$; see Theorem 3.2. Regarding statistical power, it can also be observed that within each plot, pairs with higher contrast between coefficient magnitudes yield higher power. For instance, the pair of features $(1,10)$ with coefficient values $\theta_1^* = 1, \theta_{10}^* = 5$ has rejection rates of 0.987, 0.768, 0.543 (for $\sigma = 1,2,3$, respectively), while the pair $(6,8)$ with coefficient values $\theta_6^* = 3, \theta_8^* = 4$ has rejection rates of 0.294, 0.097, 0.055 (for $\sigma = 1,2,3$, respectively).

# 5. Influence of Training Data on Output Model

In this section, we combine our closeness-of-influence test with the datamodel framework (Ilyas et al., 2022) to analyze the influence of training samples on the evaluations of the trained model at certain target examples. We first provide a brief overview of datamodels and then describe the experimental setup.

# 5.1. Datamodels

For training samples $\mathcal{D}^{\mathrm{train}} = \{(x_i, y_i)\}_{i=1:N}$, consider a class of learning algorithms $\mathcal{A}$, where by class we mean a (potentially randomized) training mechanism, such as training a fixed architecture of deep neural networks via gradient descent with a fixed random initialization scheme.
In datamodels (Ilyas et al., 2022), a new learning problem is considered, where the feature vectors $S$ are binary 0-1 vectors of size $N$ in which a $\gamma \in (0,1)$ fraction of entries, selected uniformly at random, are one. Here $S$ is an indicator vector for the participation of the $N$ data points of $\mathcal{D}^{\mathrm{train}}$ in the training mechanism, i.e., $S_i = 1$ if and only if the $i$-th sample of $\mathcal{D}^{\mathrm{train}}$ is used for training via $\mathcal{A}$. For a fixed target example $x$, the response value is the evaluation (described later) of the output model (trained on the samples indicated in $S$) on $x$, denoted by $f_A(x; S)$. This random sampling of data points from $\mathcal{D}^{\mathrm{train}}$ is repeated $m$ times, so the data for the new learning problem is $\{(S_i, f_A(x, S_i))\}_{i=1:m}$. The ultimate goal of datamodels is to learn the mapping $S \to f_A(x, S)$ via surrogate modeling with a class of much less complex models. In their seminal work, Ilyas et al. (2022) show that linear regression with an $\ell_1$ penalty (LASSO (Tibshirani, 1996)) performs surprisingly well in learning the highly complex mapping $S \to f_A(x, S)$.
+
+# 5.2.
Motivation
+
+We are specifically interested in analyzing the influence of different pairs of training samples on a variety of test targets, and in discovering pairs of training samples that with high certainty influence the test target differently. We use the score function $(f_{\mathcal{A}}(x,S) - x^{\top}\widehat{\theta})^{2}$ for our closeness-of-influence test, where $\widehat{\theta}$ is the learned datamodel.
+
+![](images/6fd680cc7c3ec81e39de6806f53c1f0532379f7e3e71c7dee0313c60685ad23d.jpg)
+Figure 2: Summary of discoveries on the CIFAR-10 dataset via datamodels used with our closeness-of-influence test (the panels show the target image and the training pairs, each with its associated p-value, e.g., 5.534e-5). For each pair of the 10 classes (possibly equal), we choose random samples from the training data along with a random target image from the dog pictures in the test data, and we repeat this process 20 times. After running the Benjamini–Yekutieli procedure on the output p-values (2000 in total) at $\alpha = 0.2$, three significant results are reported. This implies that with high certainty the images in each pair influence the target example differently.
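As background for how the datamodel $\widehat{\theta}$ entering this score is obtained, the sketch below fits a linear datamodel by $\ell_1$-penalized least squares on synthetic $(S, f)$ pairs; the ISTA solver, all sizes, and the "true influence" vector are hypothetical stand-ins, not the pipeline of Ilyas et al. (2022).

```python
import numpy as np

# Toy sketch: fit a linear datamodel theta_hat from (S, f_A(x; S)) pairs via
# l1-penalized least squares solved with proximal gradient (ISTA).
rng = np.random.default_rng(1)

N, m, gamma = 60, 400, 0.5               # pool size, resampled trainings, sampling fraction
true_influence = np.zeros(N)
true_influence[:3] = [2.0, 2.0, -1.5]    # only a few training points matter for this target

# Binary masks S with a gamma fraction of ones, as in the datamodel sampling scheme.
S = (rng.random((m, N)) < gamma).astype(float)
f = S @ true_influence + 0.1 * rng.standard_normal(m)  # stand-in for f_A(x; S)

def lasso_ista(A, b, lam=0.05, n_iter=500):
    """Minimize 0.5/m * ||A w - b||^2 + lam * ||w||_1 by proximal gradient."""
    w = np.zeros(A.shape[1])
    step = 1.0 / (np.linalg.norm(A, 2) ** 2 / A.shape[0])  # 1/L for the smooth part
    for _ in range(n_iter):
        grad = A.T @ (A @ w - b) / A.shape[0]
        w = w - step * grad
        w = np.sign(w) * np.maximum(np.abs(w) - step * lam, 0.0)  # soft-thresholding
    return w

theta_hat = lasso_ista(S, f)
score = (f - S @ theta_hat) ** 2   # the residual-based score used by our test
print(np.round(theta_hat[:4], 2))
```

The soft-thresholding step drives the coefficients of irrelevant training points to exactly zero, which is what makes the sparse linear surrogate interpretable.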
We adopt this score function mainly due to the promising performance of linear surrogate models in (Ilyas et al., 2022) at capturing the dependency between $S$ and $f_{\mathcal{A}}(x;S)$. In addition, the sampling scheme described in datamodels satisfies the symmetric structure of Proposition 2.6 (so $\tau_{X} = 0$). We would like to emphasize that, despite the empirical success of datamodels, interpreting training samples via the coefficient magnitudes of the obtained linear datamodel $\widehat{\theta}$ is not statistically rigorous. Here we approach this problem through the lens of hypothesis testing and output p-values, to convey the level of confidence in our findings.
+
+# 5.3. Experimental Setups and Results
+
+We consider the CIFAR-10 dataset (Krizhevsky et al., 2009), which has $N = 50000$ training samples along with 10000 test datapoints and 10 classes. We consider $\gamma = 0.5$ (the fraction of ones in the $S_{i}$ samples), and follow the same heuristic provided for $f_{\mathcal{A}}(x; S)$ in (Ilyas et al., 2022), which is the correct-class margin, defined as the logit value of the true class minus the highest logit value among incorrect classes. We use the datamodel data given in https://github.com/MadryLab/datamodels-data. The provided data has $310k$ samplings, where for each target example $x$ (in the test data) the datamodel parameter $\widehat{\theta} \in \mathbb{R}^N$ is estimated via the first $300k$ samples (one datamodel $\widehat{\theta}$ per test point, 10000 in total). We use the additional $10k$ samples to run our closeness-of-influence test with the score function $(f_A(x; S) - x^{\mathrm{T}}\widehat{\theta})^2$. Now, for each pair of training samples and a specific target test example, we can test for their closeness of influence.
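The per-pair test on the held-out samples can be sketched as follows. This is a synthetic illustration, not the CIFAR-10 pipeline: the masks, responses, and the use of the true linear model as the "learned datamodel" are all stand-ins, and the p-value is the Hoeffding-style tail bound from Theorem 2.3.

```python
import numpy as np

# Sketch of the per-pair closeness-of-influence test on held-out datamodel samples.
rng = np.random.default_rng(2)

def swap_test_pvalue(S, f, theta_hat, i, j, tau=0.0, tau_x=0.0):
    """p-value for 'training points i and j influence the target equally'."""
    n = len(f)
    half = n // 2
    T = (f - S @ theta_hat) ** 2             # score on the original masks
    S_swap = S.copy()
    S_swap[:, [i, j]] = S_swap[:, [j, i]]    # swap participation of samples i and j
    T_swap = (f - S_swap @ theta_hat) ** 2
    U = np.mean(T[:half] >= T_swap[half:])   # two disjoint halves, as in Algorithm 1
    slack = max(abs(U - 0.5) - tau - tau_x, 0.0)
    return min(1.0, 2 * np.exp(-n * slack ** 2))  # Hoeffding-style tail bound

# Synthetic check: points 0 and 1 matter equally, point 2 is much stronger.
N, n = 30, 4000
theta = np.zeros(N)
theta[0] = theta[1] = 1.0
theta[2] = 5.0
S = (rng.random((n, N)) < 0.5).astype(float)
f = S @ theta + 0.2 * rng.standard_normal(n)
p_equal = swap_test_pvalue(S, f, theta, 0, 1)  # null holds: large p-value expected
p_diff = swap_test_pvalue(S, f, theta, 0, 2)   # influences differ: tiny p-value expected
print(p_equal, p_diff)
```

The resulting per-pair p-values can then be fed to any dependency-robust multiple-testing correction such as Benjamini–Yekutieli, as done in the first experiment below.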
In the first experiment, for each pair of classes (possibly equal) we choose two pictures as the training pair (randomly from the two classes), and for the target sample we select randomly from the class of dog pictures. For each pair of classes, we repeat this process 20 times, run our test of the null hypothesis 1 with $\tau = 0$, and report all p-values (2000 in total). After running the Benjamini–Yekutieli procedure (Benjamini & Yekutieli, 2001) (with the log-factor correction that controls for dependency among the p-values), we find three statistically significant results at $\alpha = 0.2$, each with p-value $= 5 \times 10^{-5}$. Surprisingly, all three findings correspond to the same test image; the training pairs and this test image can be seen in Figure 2. It can be observed that in all findings one of the reported images is visually closer to the target image. This conforms well with the rejection of the null hypothesis 1, which states that the two training images have equal influence on the target sample. We refer to Appendix B for the remaining experiments.
+
+# 6. Concluding Remarks
+
+In this paper, we proposed a novel method to test the closeness of influence of a given pair of features on the response value. This procedure makes no assumption on the conditional law of the response value given the features $(\mathcal{L}(Y|X))$. We first proposed a notion called "symmetric influence" that generalizes the familiar concept of equal coefficients in parametric models; it is designed to characterize the sensitivity of the conditional law to swapping the features. We then formulated the closeness-of-influence testing problem as a tolerance hypothesis test, and provided theoretical guarantees on the type-I error rate.
We then analyzed the statistical power of our method for a general score function $T$, and showed that for two specific learning problems, (i) linear regression, and (ii) binary classification under a mixture of Gaussian models, full statistical power can be achieved with certain choices of score function. Finally, we adopted the datamodel framework and used our closeness-of-influence test to find training samples that have different influence on the trained model.
+
+Several interesting avenues for future research are in order. In particular, extending this framework to multiple testing (testing multiple pairs) while still achieving valid statistical results. This can be done by applying generic multiple-testing frameworks (similar to the Benjamini–Yekutieli procedure used in Section 5) to the obtained p-values, but a method crafted for this setting could be more powerful. In addition, extending this framework to study the influence of a group of features (more than two) is of great interest.
+
+# References
+
+Bamber, D. The area above the ordinal dominance graph and the area below the receiver operating characteristic graph. Journal of Mathematical Psychology, 12(4):387-415, 1975.
+Belloni, A., Chernozhukov, V., and Hansen, C. Inference on treatment effects after selection among high-dimensional controls. The Review of Economic Studies, 81(2):608-650, 2014.
+Benjamini, Y. and Yekutieli, D. The control of the false discovery rate in multiple testing under dependency. Annals of Statistics, pp. 1165-1188, 2001.
+Berrett, T. B., Wang, Y., Barber, R. F., and Samworth, R. J. The conditional permutation test for independence while controlling for confounders. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 82(1):175-197, 2020.
+Bien, J., Yan, X., Simpson, L., and Müller, C. L. Tree-aggregated predictive modeling of microbiome data. Scientific Reports, 11(1):1-13, 2021.
+Bogdan, M., Van Den Berg, E., Sabatti, C., Su, W., and Candès, E. J. Slope—adaptive variable selection via convex optimization. The annals of applied statistics, 9 (3):1103, 2015. +Candes, E. and Tao, T. The dantzig selector: Statistical estimation when $p$ is much larger than $n$ . The annals of Statistics, 35(6):2313-2351, 2007. +Candes, E., Fan, Y., Janson, L., and Lv, J. Panning for gold: 'model-x' knockoffs for high dimensional controlled variable selection. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 80(3):551-577, 2018. +Cong, L., Ran, F. A., Cox, D., Lin, S., Barretto, R., Habib, N., Hsu, P. D., Wu, X., Jiang, W., Marraffini, L. A., et al. Multiplex genome engineering using crispr/cas systems. Science, 339(6121):819-823, 2013. +Crawford, L., Wood, K. C., Zhou, X., and Mukherjee, S. Bayesian approximate kernel regression with variable selection. Journal of the American Statistical Association, 113(524):1710-1721, 2018. + +Deshpande, Y., Javanmard, A., and Mehrabi, M. Online debiasing for adaptively collected high-dimensional data. arXiv preprint arXiv:1911.01040, 2019. +Duchi, J. Derivations for linear algebra and optimization. Berkeley, California, 3(1):2325-5870, 2007. +Fei, Z. and Li, Y. Estimation and inference for high dimensional generalized linear models: A splitting and smoothing approach. J. Mach. Learn. Res., 22:58-1, 2021. +Hsieh, F. and Turnbull, B. W. Nonparametric and semiparametric estimation of the receiver operating characteristic curve. The annals of statistics, 24(1):25-40, 1996. +Ilyas, A., Park, S. M., Engstrom, L., Leclerc, G., and Madry, A. Datamodels: Predicting predictions from training data. arXiv preprint arXiv:2202.00622, 2022. +Javanmard, A. and Mehrabi, M. Pearson chi-squared conditional randomization test. arXiv preprint arXiv:2111.00027, 2021. +Javanmard, A. and Montanari, A. Confidence intervals and hypothesis testing for high-dimensional regression. 
The Journal of Machine Learning Research, 15(1):2869-2909, 2014. +Krizhevsky, A., Hinton, G., et al. Learning multiple layers of features from tiny images. 2009. +Liu, M., Katsevich, E., Janson, L., and Ramdas, A. Fast and powerful conditional randomization testing via distillation. Biometrika, 109(2):277-293, 2022. +Peters, J. M., Colavin, A., Shi, H., Czarny, T. L., Larson, M. H., Wong, S., Hawkins, J. S., Lu, C. H., Koo, B.-M., Marta, E., et al. A comprehensive, crisp-based functional analysis of essential genes in bacteria. Cell, 165(6):1493-1506, 2016. +Shaer, S. and Romano, Y. Learning to increase the power of conditional randomization tests. arXiv preprint arXiv:2207.01022, 2022. +Shao, S., Bien, J., and Javanmard, A. Controlling the false split rate in tree-based aggregation. arXiv preprint arXiv:2108.05350, 2021. +Tibshirani, R. Regression shrinkage and selection via the lasso. Journal of the Royal Statistical Society: Series B (Methodological), 58(1):267-288, 1996. +Tibshirani, R., Saunders, M., Rosset, S., Zhu, J., and Knight, K. Sparsity and smoothness via the fused lasso. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 67(1):91-108, 2005. + +Van de Geer, S., Buhlmann, P., Ritov, Y., and Dezeure, R. On asymptotically optimal confidence regions and tests for high-dimensional models. The Annals of Statistics, 42(3):1166-1202, 2014. +Wilms, I. and Bien, J. Tree-based node aggregation in sparse + +graphical models. Journal of Machine Learning Research, 23(243):1-36, 2022. +Yan, X. and Bien, J. Rare feature selection in high dimensions. Journal of the American Statistical Association, 116(534):887-900, 2021. + +# A. Proof of Theorems and Technical Lemmas + +# A.1. Proof of Theorem 2.3 + +Consider two data points $z^{(1)} = (x^{(1)},y^{(1)})$ , $z^{(2)} = (x^{(2)},y^{(2)})$ drawn i.i.d. from the density function $p_{X,Y}$ . 
For two features $i,j$, define

$$
\pi = \mathbb{P}\left(T\big(X^{(1)}, Y^{(1)}\big) \geq T\big(X_{\mathrm{swap}(i,j)}^{(2)}, Y^{(2)}\big)\right).
$$

We want to show that under the null hypothesis, the value $\pi$ is concentrated around $1/2$ within a maximum distance of $\tau_{X}$. First, from the symmetry between two i.i.d. data points we have

$$
\mathbb{P}\left(T\left(X^{(1)}, Y^{(1)}\right) \geq T\left(X^{(2)}, Y^{(2)}\right)\right) = 1/2.
$$

The underlying assumption is that in the case of equal values the tie is broken randomly. We introduce $\widetilde{z}^{(2)} = (x_{\mathrm{swap}(i,j)}^{(2)},y^{(2)})$. This brings us

$$
\begin{array}{l} \pi - \frac{1}{2} = \mathbb{P}\left(T\left(X_{\mathrm{swap}(i,j)}^{(2)}, Y^{(2)}\right) \leq T\left(Z^{(1)}\right)\right) \\ - \mathbb{P}\left(T\left(X^{(2)}, Y^{(2)}\right) \leq T\left(Z^{(1)}\right)\right) \\ = \mathbb{E}\left[\mathbb{P}\left(T\left(\widetilde{Z}^{(2)}\right) \leq T\left(Z^{(1)}\right) \mid Z^{(1)}, Y^{(2)}\right)\right] \\ - \mathbb{E}\left[\mathbb{P}\left(T\left(Z^{(2)}\right) \leq T\left(Z^{(1)}\right) \mid Z^{(1)}, Y^{(2)}\right)\right]. \\ \end{array}
$$

In the next step, we let $T^{(1)} = T(Z^{(1)})$, $T^{(2)} = T(Z^{(2)})$, and $\widetilde{T}^{(2)} = T(\widetilde{Z}^{(2)})$. Then, by an application of Jensen's inequality we get

$$
\left|\pi - \frac{1}{2}\right| \leq \mathbb{E}\left[\left|\mathbb{P}\left(\widetilde{T}^{(2)} \leq T^{(1)} \mid Z^{(1)}, Y^{(2)}\right) - \mathbb{P}\left(T^{(2)} \leq T^{(1)} \mid Z^{(1)}, Y^{(2)}\right)\right|\right]. \tag{6}
$$

On the other hand, for values $z \in \mathbb{R}^{d+1}$, $y \in \mathbb{R}$ consider the following measurable set:

$$
A_{z,y} = \{x \in \mathbb{R}^{d}: T(x, y) \leq T(z)\}.
+$$
+
+By using this definition of the set $A_{z,y}$ in 6, and with the shorthand $W = (Z^{(1)},Y^{(2)})$, we arrive at
+
+$$
+\begin{array}{l} \left|\pi - \frac{1}{2}\right| \leq \mathbb{E}\left[\left|\mathbb{P}\left(X_{\mathrm{swap}(i,j)}^{(2)} \in A_{W} \mid W\right) - \mathbb{P}\left(X^{(2)} \in A_{W} \mid W\right)\right|\right] \\ \leq \mathbb{E}\left[d_{\mathrm{TV}}\left(p_{X_{\mathrm{swap}(i,j)}^{(2)}|W}, p_{X^{(2)}|W}\right)\right], \tag{7} \\ \end{array}
+$$
+
+where the last inequality follows from the definition of the total variation distance. Since $Z^{(1)}$ and $Z^{(2)}$ are independent random variables, we get that
+
+$$
+\begin{array}{l} d_{\mathrm{TV}}\left(p_{X_{\mathrm{swap}(i,j)}^{(2)}|W}, p_{X^{(2)}|W}\right) = d_{\mathrm{TV}}\left(p_{X_{\mathrm{swap}(i,j)}^{(2)}|Y^{(2)}}, p_{X^{(2)}|Y^{(2)}}\right) \\ = d_{\mathrm{TV}}\left(p_{X_{\mathrm{swap}(i,j)}|Y}, p_{X|Y}\right), \\ \end{array}
+$$
+
+where the last relation comes from the fact that $(X,Y)\sim p_{X,Y}$ and $(X^{(2)},Y^{(2)})$ have the same density function. Using the above relation in 7 yields
+
+$$
+\begin{array}{l} \left|\pi - \frac{1}{2}\right| \leq \mathbb{E}\left[d_{\mathrm{TV}}\left(p_{X_{\mathrm{swap}(i,j)}|Y}, p_{X|Y}\right)\right] \\ = d_{\mathrm{TV}}\left(p_{X_{\mathrm{swap}(i,j)},Y}, p_{X,Y}\right). \\ \end{array}
+$$
+
+In the next step, for $x \in \mathbb{R}^d$ and $y \in \mathbb{R}$ let $p(x,y)$ and $q(x,y)$ respectively denote the density functions of $(X_{\mathrm{swap}(i,j)},Y)$ and $(X,Y)$. From the above relation we get
+
+$$
+\left|\pi - \frac{1}{2}\right| \leq \frac{1}{2}\int |p(x,y) - q(x,y)|\,\mathrm{d}x\,\mathrm{d}y.
+$$
+
+On the other hand, by rewriting the total variation distance of the joint random variables we get
+
+$$
+\begin{array}{l} |p(x,y) - q(x,y)| = |p(x)p(y|x) - q(x)q(y|x)| \\ = |p(x)p(y|x) - p(x)q(y|x) \\ + p(x)q(y|x) - q(x)q(y|x)| \\ \leq p(x)\left|p(y|x) - q(y|x)\right| \\ + |p(x) - q(x)|q(y|x). \\ \end{array}
+$$
+
+Plugging this into the above relation yields
+
+$$
+\begin{array}{l} \left|\pi - \frac{1}{2}\right| \leq \frac{1}{2}\int p(x)|p(y|x) - q(y|x)|\,\mathrm{d}x\,\mathrm{d}y \\ + \frac{1}{2}\int |p(x) - q(x)|q(y|x)\,\mathrm{d}x\,\mathrm{d}y. \\ \end{array}
+$$
+
+In the next step, by integrating with respect to $y$ we get
+
+$$
+\begin{array}{l} \left|\pi - \frac{1}{2}\right| \leq \frac{1}{2}\int p(x)|p(y|x) - q(y|x)|\,\mathrm{d}x\,\mathrm{d}y \\ + \frac{1}{2}\int |p(x) - q(x)|\,\mathrm{d}x. \\ \end{array}
+$$
+
+This implies that
+
+$$
+\left|\pi - \frac{1}{2}\right| \leq \mathbb{E}_X\left[d_{\mathrm{TV}}\left(p_{Y|X_{\mathrm{swap}(i,j)}}, p_{Y|X}\right)\right] + d_{\mathrm{TV}}\left(p_{X}, p_{X_{\mathrm{swap}(i,j)}}\right).
+$$
+
+Finally, under the null hypothesis 1 and the fact that $\tau_{X} \geq d_{\mathrm{TV}}(p_{X}, p_{X_{\mathrm{swap}(i,j)}})$ we get
+
+$$
+\left|\pi - \frac{1}{2}\right| \leq \tau_{X} + \tau. \tag{8}
+$$
+
+Any deviation from this range counts as evidence against the null hypothesis 1. In Algorithm 1, for each $1 \leq m \leq n/2$, it is easy to observe that each random variable $\mathbb{I}\left(T(X^{(m)}, Y^{(m)}) \geq T(\widetilde{X}^{(m)}, \widetilde{Y}^{(m)})\right)$ is a Bernoulli with success probability $\pi$.
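As a quick numerical sanity check of this Bernoulli structure, the snippet below simulates $U_n$ under an exchangeable null (where $\pi = 1/2$) and compares the empirical tail of $|U_n - 1/2|$ with the Hoeffding bound used next; the sample sizes are arbitrary.

```python
import numpy as np

# Under an exchangeable null, the n/2 indicators are i.i.d. Bernoulli(1/2), so
# U_n concentrates around 1/2 as Hoeffding's inequality predicts.
rng = np.random.default_rng(3)

n = 1000          # n/2 = 500 indicator variables per experiment
reps = 2000
t = 0.05
U = rng.binomial(n // 2, 0.5, size=reps) / (n // 2)    # draws of U_n under pi = 1/2
emp_tail = np.mean(np.abs(U - 0.5) >= t)               # empirical P(|U_n - 1/2| >= t)
hoeffding_bound = 2 * np.exp(-n * t ** 2)              # 2 exp(-2 (n/2) t^2)
print(emp_tail, hoeffding_bound)
```

The empirical tail is well below the bound, as expected, since Hoeffding's inequality is conservative relative to the Gaussian approximation of the binomial tail.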
In the next step, by an application of Hoeffding's inequality for the sum of $n/2$ independent Bernoulli random variables, for every $t \geq 0$ we get

$$
\mathbb{P}\left(\left|\frac{2}{n}\sum_{i=1}^{n/2} \mathbb{I}\big(T(x_{i}, y_{i}) \geq T(\tilde{x}_{i}, \tilde{y}_{i})\big) - \pi\right| \geq t\right) \leq 2\exp(-nt^{2}).
$$

Therefore, for the statistic $U_{n}$ as per Algorithm 1 we get

$$
\mathbb{P}\left(\left|U_{n} - \pi\right| \geq t\right) \leq 2\exp\left(-nt^{2}\right), \quad \forall t \geq 0. \tag{9}
$$

We next consider $\delta \geq \tau + \tau_{X}$ and use the triangle inequality to obtain

$$
\begin{array}{l} \mathbb{P}\left(\left|U_{n} - \frac{1}{2}\right| \geq \delta\right) \leq \mathbb{P}\left(\left|U_{n} - \pi\right| + \left|\pi - \frac{1}{2}\right| \geq \delta\right) \\ \leq \mathbb{P}(|U_{n} - \pi| \geq \delta - \tau - \tau_{X}) \\ \leq 2\exp(-n(\delta - \tau - \tau_{X})^{2}), \\ \end{array}
$$

where in the penultimate relation we used 8, and the last relation follows from 9. By setting $\delta = \tau + \tau_{X} + \sqrt{\log(2/\alpha)/n}$ (so that $2\exp(-n(\delta - \tau - \tau_{X})^{2}) = \alpha$), we get

$$
\mathbb{P}\left(\left|U_{n} - \frac{1}{2}\right| \geq \tau + \tau_{X} + \sqrt{\frac{\log\frac{2}{\alpha}}{n}}\right) \leq \alpha.
$$

This completes the proof.

# A.2. Proof of Proposition 2.2

We start with $\beta_{i} = \beta_{j}$, and we want to show that the symmetric influence property holds. We have

$$
\begin{array}{l} p_{Y \mid X_{\mathrm{swap}(i,j)}}(y \mid x) = p_{Y \mid X}(y \mid x_{\mathrm{swap}(i,j)}) \\ = \mathbb{P}(Y = 1 \mid X = x_{\mathrm{swap}(i,j)}) \\ = \left(1 + \exp(-x_{\mathrm{swap}(i,j)}^{\mathsf{T}}\beta)\right)^{-1} \\ = \left(1 + \exp\left(-\beta_{i}x_{j} - \beta_{j}x_{i} - \sum_{\ell \neq i,j} x_{\ell}\beta_{\ell}\right)\right)^{-1}.
\\ \end{array}
+$$
+
+Using $\beta_{i} = \beta_{j}$ yields
+
+$$
+\begin{array}{l} p_{Y \mid X_{\mathrm{swap}(i,j)}}(y|x) = \left(1 + \exp\left(-\beta_{j}x_{j} - \beta_{i}x_{i} - \sum_{\ell \neq i,j} x_{\ell}\beta_{\ell}\right)\right)^{-1} \\ = \left(1 + \exp\left(-\sum_{\ell} x_{\ell}\beta_{\ell}\right)\right)^{-1} \\ = p_{Y \mid X}(y \mid x). \\ \end{array}
+$$
+
+This completes the proof of the first part. For the other direction, suppose that symmetric influence holds for $i,j$; then for every $x\in \mathbb{R}^d$ we have
+
+$$
+\mathbb{P}(Y = +1 \mid X_{\mathrm{swap}(i,j)} = x) = \mathbb{P}(Y = +1 \mid X = x).
+$$
+
+By using $p_{Y|X_{\mathrm{swap}(i,j)}}(y|x) = p_{Y|X}(y|x_{\mathrm{swap}(i,j)})$ along with the logistic regression relation, we get
+
+$$
+\begin{array}{l} \left(1 + \exp\left(-\beta_{i}x_{j} - \beta_{j}x_{i} - \sum_{\ell \neq i,j} x_{\ell}\beta_{\ell}\right)\right)^{-1} \\ = \left(1 + \exp\left(-\sum_{\ell} x_{\ell}\beta_{\ell}\right)\right)^{-1}. \\ \end{array}
+$$
+
+In the next step, applying the strictly increasing function $\log\left(\frac{u}{1-u}\right)$ to both sides, we get
+
+$$
+\beta_{i}x_{i} + \beta_{j}x_{j} = \beta_{i}x_{j} + \beta_{j}x_{i}.
+$$
+
+Since this must hold for all values of $x_{i}, x_{j}$, we must have $\beta_{i} = \beta_{j}$. The proof for the linear regression setting follows from exactly the same argument.
+
+# A.3. Proof of Proposition 2.5
+
+Since $x$ is a multivariate Gaussian vector, its coordinates are jointly Gaussian random variables, so swapping the two coordinates $i$ and $j$ preserves joint Gaussianity. On the other hand, from the linear transformation $x_{\mathrm{swap}(i,j)} = P_{ij}x$ it is easy to see that $x_{\mathrm{swap}(i,j)} \sim \mathsf{N}(P_{ij}\mu, P_{ij}\Sigma P_{ij})$.
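This distributional identity is easy to verify numerically: the snippet below (an illustrative check with an arbitrary positive definite $\Sigma$) swaps two coordinates of Gaussian samples and confirms that the empirical mean and covariance match $P_{ij}\mu$ and $P_{ij}\Sigma P_{ij}$.

```python
import numpy as np

# Check that x_swap(i,j) = P_ij x has distribution N(P_ij mu, P_ij Sigma P_ij).
rng = np.random.default_rng(4)

d, i, j = 4, 1, 3
mu = np.arange(1.0, d + 1)            # [1, 2, 3, 4]
A = rng.standard_normal((d, d))
Sigma = A @ A.T + d * np.eye(d)       # a positive definite covariance
P = np.eye(d)
P[[i, j]] = P[[j, i]]                 # transposition matrix P_ij (symmetric, P @ P = I)

x = rng.multivariate_normal(mu, Sigma, size=200000)
x_swap = x.copy()
x_swap[:, [i, j]] = x_swap[:, [j, i]]  # swap coordinates i and j

emp_mean = x_swap.mean(axis=0)
emp_cov = np.cov(x_swap, rowvar=False)
mean_err = np.max(np.abs(emp_mean - P @ mu))
cov_err = np.max(np.abs(emp_cov - P @ Sigma @ P))
print(mean_err, cov_err)
```

Since $P_{ij}$ is symmetric, $P_{ij}\Sigma P_{ij}^{\top} = P_{ij}\Sigma P_{ij}$, which is why the transpose does not appear in the statement.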
We are only left with upper bounding the KL divergence of the density functions $\mathsf{N}(\mu, \Sigma)$ and $\mathsf{N}(P_{ij}\mu, P_{ij}\Sigma P_{ij})$. To this end, we borrow a result from (Duchi, 2007) for the KL divergence of multivariate Gaussian distributions. Formally, we have

$$
\begin{array}{l} d_{\mathrm{KL}}\left(\mathsf{N}\left(\mu_{1}, \Sigma_{1}\right) \| \mathsf{N}\left(\mu_{2}, \Sigma_{2}\right)\right) \\ = \frac{1}{2}\left(\log\frac{\det\Sigma_{2}}{\det\Sigma_{1}} - d + \operatorname{tr}\left(\Sigma_{2}^{-1}\Sigma_{1}\right) + \left(\mu_{2} - \mu_{1}\right)^{\top}\Sigma_{2}^{-1}\left(\mu_{2} - \mu_{1}\right)\right). \\ \end{array}
$$

By setting $\Sigma_{2} = \Sigma$ and $\Sigma_{1} = P_{ij}\Sigma P_{ij}$, along with the fact that $\det P_{ij} = -1$ (so $\det(P_{ij}\Sigma P_{ij}) = \det\Sigma$ and the log-determinant term vanishes), we arrive at

$$
d_{\mathrm{KL}}\left(\mathcal{L}\left(X_{\mathrm{swap}(i,j)}\right) \| \mathcal{L}(X)\right) = \frac{1}{2}\left(-d + \operatorname{tr}\left(\Sigma^{-1}P_{ij}\Sigma P_{ij}\right) + \left(\mu - P_{ij}\mu\right)^{\top}\Sigma^{-1}\left(\mu - P_{ij}\mu\right)\right).
$$

Finally, using Pinsker's inequality completes the proof.

# A.4. Proof of Proposition 2.6

In this setup, from the construction of the feature vector $x \in \{0,1\}^d$ it is easy to see that for every $\alpha \in \{0,1\}^d$ we have

$$
\mathbb{P}(x = \alpha) = \frac{\mathbb{I}(|\alpha| = m)}{\binom{d}{m}}.
$$

From this structure, since swapping two coordinates does not change the number of non-zero entries of the binary feature vector, we get $|\alpha| = |\alpha_{\mathrm{swap}(i,j)}|$. Thereby, we get

$$
\mathbb{P}(x = \alpha) = \mathbb{P}\left(x_{\mathrm{swap}(i,j)} = \alpha\right), \quad \forall \alpha \in \{0, 1\}^{d}.
$$

Therefore $d_{\mathrm{TV}}\left(\mathcal{L}(X),\mathcal{L}(X_{\mathrm{swap}(i,j)})\right) = 0$.

# A.5.
Proof of Theorem 3.1

Let

$$
\pi = \mathbb{P}\left(T\left(X^{(1)}, Y^{(1)}\right) \geq T\left(X_{\mathrm{swap}(i,j)}^{(2)}, Y^{(2)}\right)\right).
$$

For the sake of simplicity, we adopt the following shorthands: $T_{1} = T(X^{(1)},Y^{(1)})$ and $T_{2} = T(X_{\mathrm{swap}(i,j)}^{(2)},Y^{(2)})$. This gives us

$$
\begin{array}{l} \pi = \mathbb{P}(T_{1} \geq T_{2}) \\ = \mathbb{E}_{T_{1}}[\mathbb{P}(T_{1} \geq T_{2} \mid T_{1})] \\ = \mathbb{E}_{T_{1}}\left[G_{T}(T_{1})\right] \\ = \int G_{T}(t)\,\mathrm{d}F_{T}(t) \\ = \int_{0}^{1} G_{T}\left(F_{T}^{-1}(u)\right)\mathrm{d}u. \\ \end{array}
$$

In the next step, we let $\delta = \sqrt{\log(2/\beta)/n}$ (so that $2\exp(-n\delta^2) = \beta$); plugging this relation into the condition given in Theorem 3.1 we arrive at

$$
\left|\pi - 1/2\right| \geq \delta + \tau + \tau_{X} + \sqrt{\frac{\log(2/\alpha)}{n}}. \tag{10}
$$

We now focus on the decision rule 2. Let $\tau' = \tau + \tau_X$; then we get

$$
\mathbb{P}(\Psi(\mathbf{X}, \mathbf{Y}) = 1) = \mathbb{P}\left(|U_{n} - 1/2| \geq \tau^{\prime} + \sqrt{\frac{\log(2/\alpha)}{n}}\right). \tag{11}
$$

On the other hand, from the triangle inequality we have $|U_n - 1/2| \geq |\pi - 1/2| - |U_n - \pi|$. Plugging 10 into this yields

$$
\left|U_{n} - 1/2\right| \geq \delta + \tau^{\prime} + \sqrt{\frac{\log(2/\alpha)}{n}} - \left|U_{n} - \pi\right|.
$$

Combining this with 11 gives us

$$
\begin{array}{l} \mathbb{P}(\Psi(\mathbf{X}, \mathbf{Y}) = 1) \geq \mathbb{P}(\delta \geq |U_{n} - \pi|) \\ = 1 - \mathbb{P}(\delta \leq |U_{n} - \pi|). \tag{12} \\ \end{array}
$$

In the next step, we return to the given relation for $U_{n}$ in Algorithm 1.
From the definition of $\pi$, for each $m$ we have

$$
\mathbb{P}\left(T\left(X^{(m)}, Y^{(m)}\right) \geq T\left(\widetilde{X}^{(m)}, \widetilde{Y}^{(m)}\right)\right) = \pi.
$$

Therefore, by an application of Hoeffding's inequality we get

$$
\mathbb{P}\left(|U_{n} - \pi| \geq \delta\right) \leq 2\exp\left(-n\delta^{2}\right).
$$

Finally, recalling that $\delta = \sqrt{\log(2/\beta)/n}$, so that $2\exp(-n\delta^{2}) = \beta$, yields

$$
\mathbb{P}\left(\left|U_{n} - \pi\right| \geq \delta\right) \leq \beta.
$$

Using this in 12 completes the proof; in this case, statistical power no smaller than $1 - \beta$ is achieved.

# A.6. Proof of Theorem 3.2

From the isotropic Gaussian distribution, we have $\tau_{X} = 0$. We next study the ODC function $G_{T}\circ F_{T}^{-1}$. To this end, we start from the definition of $F_{T}$, where for non-negative $t$ we have:

$$
\begin{array}{l} F_{T}(t) = \mathbb{P}\left(\left|Y^{(1)} - \widehat{\theta}^{\top}X^{(1)}\right| \leq t\right) \\ = \mathbb{P}\left(\left|\left(\theta^{*} - \widehat{\theta}\right)^{\top}X^{(1)} + \varepsilon_{1}\right| \leq t\right) \\ = \mathbb{P}(-t \leq (\theta^{*} - \widehat{\theta})^{\top}X^{(1)} + \varepsilon_{1} \leq t). \\ \end{array}
$$

On the other hand, we know that $(\theta^{*} - \widehat{\theta})^{\top}X^{(1)} + \varepsilon_{1}$ has a Gaussian distribution $\mathsf{N}(0, \|\theta^{*} - \widehat{\theta}\|_{2}^{2} + \sigma^{2})$.
This brings us + +$$ +\begin{array}{l} F _ {T} (t) = \mathbb {P} (- t \leq (\theta^ {*} - \widehat {\theta}) ^ {\top} X ^ {(1)} + \varepsilon_ {1} \leq t) \\ = \mathbb {P} \left(\frac {- t}{\sqrt {\| \theta^ {*} - \widehat {\theta} \| _ {2} ^ {2} + \sigma^ {2}}} \leq \frac {\left(\theta^ {*} - \widehat {\theta}\right) ^ {\mathsf {T}} X ^ {(1)} + \varepsilon_ {1}}{\sqrt {\| \widehat {\theta} - \theta^ {*} \| _ {2} ^ {2} + \sigma^ {2}}} \leq \frac {t}{\sqrt {\| \widehat {\theta} - \theta^ {*} \| _ {2} ^ {2} + \sigma^ {2}}}\right) \\ = \Phi \left(\frac {t}{\sqrt {\sigma^ {2} + \| \theta^ {*} - \widehat {\theta} \| _ {2} ^ {2}}}\right) - \Phi \left(- \frac {t}{\sqrt {\sigma^ {2} + \| \theta^ {*} - \widehat {\theta} \| _ {2} ^ {2}}}\right) \\ = 2 \Phi \left(\frac {t}{\sqrt {\sigma^ {2} + \left\| \theta^ {*} - \widehat {\theta} \right\| _ {2} ^ {2}}}\right) - 1, \tag {13} \\ \end{array} +$$ + +where the last line comes from the fact that $\Phi(t) + \Phi(-t) = 1$ for every real value $t$ . We introduce the shorthand $\widehat{\theta}_{\mathrm{swap}} = \widehat{\theta}_{\mathrm{swap}(i,j)}$ , then by a similar argument we get + +$$ +\begin{array}{l} G _ {T} (t) = \mathbb {P} \left(\left| Y ^ {(2)} - \widehat {\theta} ^ {\top} X _ {\operatorname {s w a p} (i, j)} ^ {(2)} \right| \leq t\right) \\ = \mathbb {P} \left(\left| \left(\theta^ {*} - \widehat {\theta} _ {\text {s w a p}}\right) ^ {\top} X ^ {(2)} + \varepsilon_ {2} \right| \leq t\right) \\ = 2 \Phi \left(\frac {t}{\sqrt {\sigma^ {2} + \left\| \theta^ {*} - \widehat {\theta} _ {\mathrm {s w a p}} \right\| _ {2} ^ {2}}}\right) - 1, \tag {14} \\ \end{array} +$$ + +By combining 13 and 14 we get + +$$ +F _ {T} \circ G _ {T} ^ {- 1} (u) = 2 \Phi \left(\frac {\sigma_ {2}}{\sigma_ {1}} \Phi^ {- 1} \left(\frac {u + 1}{2}\right)\right) - 1, +$$ + +for $\sigma_{2}$ and $\sigma_{1}$ given by + +$$ +\sigma_ {1} ^ {2} = \sigma^ {2} + \| \theta^ {*} - \widehat {\theta} \| _ {2} ^ {2}, \quad \sigma_ {2} ^ {2} = \sigma^ {2} + \| 
\theta^{*} - \widehat{\theta}_{\mathrm{swap}}\|_{2}^{2}.
+$$
+
+We set $\gamma = \frac{\sigma_2}{\sigma_1}$. Plugging this into the power expression in Theorem 3.1 we arrive at
+
+$$
+F_{T}\left(G_{T}^{-1}(u)\right) - u = 2\left[\Phi\left(\gamma\Phi^{-1}\left(\frac{u+1}{2}\right)\right) - \frac{u+1}{2}\right].
+$$
+
+In the next step, by using the change of variable $v = \frac{u+1}{2}$ we get
+
+$$
+\int_{0}^{1}\left[F_{T}\left(G_{T}^{-1}(u)\right) - u\right]\mathrm{d}u = 4\int_{\frac{1}{2}}^{1}\left[\Phi\left(\gamma\Phi^{-1}(v)\right) - v\right]\mathrm{d}v.
+$$
+
+We then introduce the function $\psi :[0, +\infty)\to \mathbb{R}$ as follows:
+
+$$
+\psi(\gamma) = 4\int_{\frac{1}{2}}^{1}\Phi\Bigl(\gamma\Phi^{-1}(v)\Bigr)\mathrm{d}v.
+$$
+
+This implies that
+
+$$
+\psi(\gamma) - \psi(1) = \int_{0}^{1}\left[F_{T}\left(G_{T}^{-1}(u)\right) - u\right]\mathrm{d}u. \tag{15}
+$$
+
+By differentiating $\psi(\cdot)$ with respect to $\gamma$ in its original definition we obtain
+
+$$
+\begin{array}{l} \frac{\mathrm{d}\psi}{\mathrm{d}\gamma} = 4\frac{\partial}{\partial\gamma}\int_{\frac{1}{2}}^{1}\Phi(\gamma\Phi^{-1}(v))\,\mathrm{d}v \\ = 4\int_{\frac{1}{2}}^{1}\Phi^{-1}(v)\varphi(\gamma\Phi^{-1}(v))\,\mathrm{d}v. \\ \end{array}
+$$
+
+We next use the substitution $s = \Phi^{-1}(v)$ to arrive at
+
+$$
+\begin{array}{l} \frac{\mathrm{d}\psi}{\mathrm{d}\gamma} = 4\int_{0}^{+\infty} s\varphi(\gamma s)\varphi(s)\,\mathrm{d}s \\ = \frac{4}{2\pi}\int_{0}^{+\infty} s\exp\left(-\frac{s^{2}}{2}(1 + \gamma^{2})\right)\mathrm{d}s \\ = \frac{2}{\pi(\gamma^{2} + 1)}\int_{0}^{+\infty} s\exp(-s^{2}/2)\,\mathrm{d}s \\ = \frac{2}{\pi(\gamma^{2} + 1)}. \\ \end{array}
+$$
+
+Since the derivative of $\psi$ with respect to $\gamma$ is given above, we can integrate it to obtain a closed-form expression for $\psi(\gamma)$.
This indeed is given by

$$
\psi(\gamma) = C + \frac{2}{\pi}\arctan(\gamma),
$$

for some constant value $C$. In order to find $C$, note that $\psi(1) = 4\int_{\frac{1}{2}}^{1} v\,\mathrm{d}v = \frac{3}{2}$, which gives $\psi(\gamma) = 1 + \frac{2}{\pi}\arctan(\gamma)$. Using this in 15 yields

$$
\begin{array}{l} \left|\int_{0}^{1}\left[F_{T}\left(G_{T}^{-1}(u)\right) - u\right]\mathrm{d}u\right| = |\psi(\gamma) - \psi(1)| \\ = \frac{2}{\pi}|\arctan(\gamma) - \arctan(1)|. \\ \end{array}
$$

On the other hand, from the identity $\arctan(x) - \arctan(y) = \arctan\frac{x - y}{1 + xy}$ we arrive at

$$
\begin{array}{l} \left|\int_{0}^{1}\left[F_{T}\left(G_{T}^{-1}(u)\right) - u\right]\mathrm{d}u\right| = \frac{2}{\pi}\left|\arctan\left(\frac{\gamma - 1}{1 + \gamma}\right)\right| \\ = \frac{2}{\pi}\arctan\left(\frac{|\gamma - 1|}{1 + \gamma}\right), \\ \end{array}
$$

where in the last relation we used $\arctan(|\cdot|) = |\arctan(\cdot)|$ (note that $\gamma \geq 0$). We next use $\gamma = \sigma_2/\sigma_1$ to get

$$
\left|\int_{0}^{1}\left[F_{T}\left(G_{T}^{-1}(u)\right) - u\right]\mathrm{d}u\right| = \frac{2}{\pi}\arctan\left(\frac{\left|\sigma_{1} - \sigma_{2}\right|}{\sigma_{1} + \sigma_{2}}\right).
\tag {16} +$$ + +On the other hand, from $\sigma_1^2 +\sigma_2^2\geq 2\sigma_1\sigma_2$ we get + +$$ +\Delta_ {T} = \frac {\left| \sigma_ {1} - \sigma_ {2} \right|}{\left| \sigma_ {1} + \sigma_ {2} \right|} \geq \frac {\left| \sigma_ {1} ^ {2} - \sigma_ {2} ^ {2} \right|}{2 \left(\sigma_ {1} ^ {2} + \sigma_ {2} ^ {2}\right)}. +$$ + +We then use this with the definition of $\sigma_{1},\sigma_{2}$ to get + +$$ +\begin{array}{l} \Delta_ {T} \geq \frac {1}{2} \frac {\left| \| \theta^ {*} - \widehat {\theta} \| _ {2} ^ {2} - \| \theta^ {*} - \widehat {\theta} _ {\mathrm {s w a p}} \| _ {2} ^ {2} \right|}{2 \sigma^ {2} + \| \theta^ {*} - \widehat {\theta} \| _ {2} ^ {2} + \| \theta^ {*} - \widehat {\theta} _ {\mathrm {s w a p}} \| _ {2} ^ {2}} \\ = \frac {1}{2} \frac {\left| - 2 \widehat {\theta} ^ {\top} \theta^ {*} + 2 \widehat {\theta} _ {\mathrm {s w a p}} ^ {\top} \theta^ {*} \right|}{2 \sigma^ {2} + 2 \| \theta^ {*} \| _ {2} ^ {2} + 2 \| \widehat {\theta} \| _ {2} ^ {2} - 2 \widehat {\theta} ^ {\top} \theta^ {*} - 2 \widehat {\theta} _ {\mathrm {s w a p}} ^ {\top} \theta^ {*}}, \\ \end{array} +$$ + +where we used $\| \widehat{\theta} \| _2 = \| \widehat{\theta}_{\mathrm{swap}}\| _2$ (swapping two coordinates preserves the Euclidean norm).
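As a numerical sanity check (not part of the original proof), the closed form $\psi(\gamma) = 1 + \frac{2}{\pi}\arctan(\gamma)$ derived above can be compared against the defining integral $\psi(\gamma) = 4\int_{1/2}^{1}\Phi(\gamma\Phi^{-1}(v))\,\mathrm{d}v$ by quadrature. A minimal sketch, assuming `numpy` and `scipy` are available:

```python
import numpy as np
from scipy.integrate import quad
from scipy.stats import norm

def psi_integral(gamma):
    """psi(gamma) = 4 * int_{1/2}^{1} Phi(gamma * Phi^{-1}(v)) dv."""
    value, _ = quad(lambda v: norm.cdf(gamma * norm.ppf(v)), 0.5, 1.0)
    return 4.0 * value

def psi_closed_form(gamma):
    """Closed form derived in the proof: 1 + (2/pi) * arctan(gamma)."""
    return 1.0 + (2.0 / np.pi) * np.arctan(gamma)

# The two expressions agree for several values of gamma >= 0,
# including gamma = 1 where psi(1) = 3/2.
for gamma in [0.0, 0.5, 1.0, 2.0, 10.0]:
    assert abs(psi_integral(gamma) - psi_closed_form(gamma)) < 1e-6
```

The tolerance is loose relative to `quad`'s default accuracy; the integrand is bounded even as $\Phi^{-1}(v)\to\infty$ near $v=1$, so the quadrature is well behaved.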
In the next step, since $\widehat{\theta}_{\mathrm{swap},\ell} = \widehat{\theta}_{\ell}$ for all $\ell \neq i,j$ we get + +$$ +\begin{array}{l} \Delta_ {T} \geq \frac {1}{2} \frac {\left| - \widehat {\theta} _ {i} \theta_ {i} ^ {*} - \widehat {\theta} _ {j} \theta_ {j} ^ {*} + \widehat {\theta} _ {i} \theta_ {j} ^ {*} + \widehat {\theta} _ {j} \theta_ {i} ^ {*} \right|}{\sigma^ {2} + \| \theta^ {*} \| _ {2} ^ {2} + \| \widehat {\theta} \| _ {2} ^ {2} - \widehat {\theta} ^ {\top} \theta^ {*} - \widehat {\theta} _ {\mathrm {s w a p}} ^ {\top} \theta^ {*}} \\ = \frac {1}{2} \frac {\left| - \widehat {\theta} _ {i} \theta_ {i} ^ {*} - \widehat {\theta} _ {j} \theta_ {j} ^ {*} + \widehat {\theta} _ {i} \theta_ {j} ^ {*} + \widehat {\theta} _ {j} \theta_ {i} ^ {*} \right|}{\sigma^ {2} + \| \theta^ {*} - \widehat {\theta} \| _ {2} ^ {2} + \widehat {\theta} ^ {\top} \theta^ {*} - \widehat {\theta} _ {\mathrm {s w a p}} ^ {\top} \theta^ {*}} \\ \end{array} +$$ + +In the next step, by using the observation that $\widehat{\theta}_{\mathrm{swap},\ell} = \widehat{\theta}_{\ell}$ for all $\ell \neq i,j$ once more we get + +$$ +\Delta_ {T} \geq \frac {1}{2} \frac {\left| \widehat {\theta} _ {i} - \widehat {\theta} _ {j} \right| \left| \theta_ {i} ^ {*} - \theta_ {j} ^ {*} \right|}{\sigma^ {2} + \left\| \theta^ {*} - \widehat {\theta} \right\| ^ {2} + \left(\theta_ {i} ^ {*} - \theta_ {j} ^ {*}\right) \left(\widehat {\theta} _ {i} - \widehat {\theta} _ {j}\right)}. +$$ + +Since replacing the product in the denominator by its absolute value can only increase the denominator, we get + +$$ +\Delta_ {T} \geq \frac {1}{2} \frac {\left| \widehat {\theta} _ {i} - \widehat {\theta} _ {j} \right| \left| \theta_ {i} ^ {*} - \theta_ {j} ^ {*} \right|}{\sigma^ {2} + \left\| \theta^ {*} - \widehat {\theta} \right\| ^ {2} + \left| \theta_ {i} ^ {*} - \theta_ {j} ^ {*} \right| \left| \widehat {\theta} _ {i} - \widehat {\theta} _ {j} \right|}. +$$ + +Using the above relation in (16) we get + +$$ +\left| \int_ {0} ^ {1} \left[ F _ {T} \left(G _ {T} ^ {- 1} (u)\right) - u \right] \mathrm {d} u
\right| \geq \frac {2}{\pi} \arctan \left(\frac {1}{2} \frac {\left| \widehat {\theta} _ {i} - \widehat {\theta} _ {j} \right| \left| \theta_ {i} ^ {*} - \theta_ {j} ^ {*} \right|}{\sigma^ {2} + \left\| \theta^ {*} - \widehat {\theta} \right\| ^ {2} + \left| \theta_ {i} ^ {*} - \theta_ {j} ^ {*} \right| \left| \widehat {\theta} _ {i} - \widehat {\theta} _ {j} \right|}\right). \tag {17} +$$ + +By recalling the condition given in Theorem 3.2 we have + +$$ +| \theta_ {i} ^ {*} - \theta_ {j} ^ {*} | \geq \frac {2 \tan (\frac {\pi}{2} (\rho_ {n} (\alpha , \beta , \tau)))}{1 - 2 \tan (\frac {\pi}{2} (\rho_ {n} (\alpha , \beta , \tau)))} \frac {\left(\sigma^ {2} + \| \widehat {\theta} - \theta^ {*} \| _ {2} ^ {2}\right)}{| \widehat {\theta} _ {i} - \widehat {\theta} _ {j} |}. +$$ + +By using $\tan \left(\frac{\pi}{2} (\rho_n(\alpha, \beta, \tau))\right) \leq \frac{1}{2}$ in the above relation we get + +$$ +\frac {2}{\pi} \arctan \left(\frac {1}{2} \frac {\left| \widehat {\theta} _ {i} - \widehat {\theta} _ {j} \right| \left| \theta_ {i} ^ {*} - \theta_ {j} ^ {*} \right|}{\sigma^ {2} + \left\| \theta^ {*} - \widehat {\theta} \right\| ^ {2} + \left| \theta_ {i} ^ {*} - \theta_ {j} ^ {*} \right| \left| \widehat {\theta} _ {i} - \widehat {\theta} _ {j} \right|}\right) \geq \rho_ {n} (\alpha , \beta , \tau). \tag {18} +$$ + +By combining (17) and (18) we get + +$$ +\left| \int_ {0} ^ {1} \left[ F _ {T} \left(G _ {T} ^ {- 1} (u)\right) - u \right] \mathrm {d} u \right| \geq \rho_ {n} (\alpha , \beta , \tau). +$$ + +Finally, using Theorem 3.1 completes the proof. + +# A.7. Proof of Theorem 3.3 + +We first show that in this case ($\tau = 0$), for the mixture of Gaussians under the null hypothesis, we have $\tau_{X} = 0$ . To this end, from Bayes' formula it is easy to get $\mathcal{L}(Y|X) = \mathsf{Bern}(g(x,\mu))$ with + +$$ +g (x, \mu) = \frac {1}{1 + \frac {1 - q}{q} e ^ {- x ^ {\intercal} \mu}}.
+$$ + +By a similar argument, it can be observed that + +$$ +\mathcal {L} (Y | X) = \operatorname {B e r n} \left(g \left(x, \mu_ {\operatorname {s w a p} (i, j)}\right)\right). +$$ + +Given that $d_{\mathrm{TV}}(\mathrm{Bern}(a), \mathrm{Bern}(b)) = |a - b|$ , under the null hypothesis (with $\tau = 0$ ) we must have $g(x, \mu) = g(x, \mu_{\mathrm{swap}(i,j)})$ for almost every $x$ . This implies that $x^{\top} \mu = x^{\top} \mu_{\mathrm{swap}(i,j)}$ almost surely, and therefore $\mu_i = \mu_j$ . In the next step, we show that if $\mu_i = \mu_j$ then $\tau_X = 0$ . Note that + +$$ +\mathcal {L} (X) = q \mathsf {N} (+ \mu , I _ {d}) + (1 - q) \mathsf {N} (- \mu , I _ {d}), +$$ + +$$ +\mathcal {L} \left(X _ {\operatorname {s w a p} (i, j)}\right) = q \mathsf {N} (+ \mu_ {\operatorname {s w a p} (i, j)}, I _ {d}) + (1 - q) \mathsf {N} (- \mu_ {\operatorname {s w a p} (i, j)}, I _ {d}). +$$ + +In the next step, using $\mu_{i} = \mu_{j}$ we realize that $\mu_{\mathrm{swap}(i,j)} = \mu$ , therefore $\mathcal{L}(X) = \mathcal{L}(X_{\mathrm{swap}(i,j)})$ . This implies that $\tau_X = 0$ . + +For the rest of the proof, we follow an argument similar to the proof of Theorem 3.2 and first characterize the cdfs $F_{T}$ and $G_{T}$ . In this case we have + +$$ +\begin{array}{l} F _ {T} (t) = \mathbb {P} (Y ^ {(1)} \widehat {\theta} ^ {\mathsf {T}} X ^ {(1)} \leq t) \\ = q \mathbb {P} \left(\widehat {\theta} ^ {\mathsf {T}} X ^ {(1)} \leq t | Y ^ {(1)} = + 1\right) + \\ + (1 - q) \mathbb {P} \left(- \widehat {\theta} ^ {\mathrm {T}} X ^ {(1)} \leq t \mid Y ^ {(1)} = - 1\right) \\ = q \mathbb {P} (Z _ {+} \leq t) + (1 - q) \mathbb {P} (- Z _ {-} \leq t), \\ \end{array} +$$ + +where $Z_{+}\sim \mathsf{N}(\mu^{\mathsf{T}}\widehat{\theta},\| \widehat{\theta}\|_{2}^{2})$ and $Z_{-}\sim \mathsf{N}(-\mu^{\mathsf{T}}\widehat{\theta},\| \widehat{\theta}\|_{2}^{2})$ .
This yields + +$$ +\begin{array}{l} F _ {T} (t) = q \Phi \left(\frac {t - \widehat {\theta} ^ {\mathsf {T}} \mu}{\| \widehat {\theta} \| _ {2}}\right) + (1 - q) \left(1 - \Phi \left(\frac {- t + \widehat {\theta} ^ {\mathsf {T}} \mu}{\| \widehat {\theta} \| _ {2}}\right)\right) \\ = \Phi \left(\frac {t - \widehat {\theta} ^ {\mathsf {T}} \mu}{\| \widehat {\theta} \| _ {2}}\right), \\ \end{array} +$$ + +where in the last line we used $\Phi(t) + \Phi(-t) = 1$ . We next introduce the shorthands $\widehat{\theta}_{\mathrm{swap}} = \widehat{\theta}_{\mathrm{swap}(i,j)}$ and $\mu_{\mathrm{swap}} = \mu_{\mathrm{swap}(i,j)}$ , then by a similar argument we arrive at + +$$ +G _ {T} (t) = \Phi \left(\frac {t - \widehat {\theta} _ {\mathrm {s w a p}} ^ {\mathsf {T}} \mu}{\| \widehat {\theta} _ {\mathrm {s w a p}} \| _ {2}}\right). +$$ + +Since $\widehat{\theta}_{\mathrm{swap}}^{\mathsf{T}}\mu = \mu_{\mathrm{swap}}^{\mathsf{T}}\widehat{\theta}$ and $\| \widehat{\theta}_{\mathrm{swap}}\| = \| \widehat{\theta}\|$ , the expression for $G_{T}(t)$ can be written as follows: + +$$ +G _ {T} (t) = \Phi \left(\frac {t - \widehat {\theta} ^ {\mathsf {T}} \mu_ {\mathrm {s w a p}}}{\| \widehat {\theta} \| _ {2}}\right). +$$ + +In the next step, it is easy to compute the quantile function $G_T^{-1}(u) = \|\widehat{\theta}\|_2\Phi^{-1}(u) + \widehat{\theta}^\top \mu_{\mathrm{swap}}$ . This gives + +$$ +F _ {T} \left(G _ {T} ^ {- 1} (u)\right) = \Phi \left(\Phi^ {- 1} (u) + \frac {\widehat {\theta} ^ {\mathsf {T}} \left(\mu_ {\mathrm {s w a p}} - \mu\right)}{\| \widehat {\theta} \| _ {2}}\right). +$$ + +By introducing $\lambda = \frac{\widehat{\theta}^{\mathsf{T}}(\mu_{\mathrm{swap}} - \mu)}{\|\widehat{\theta}\|_2}$ and the function $\rho (\lambda) = \int_0^1\Phi (\Phi^{-1}(u) + \lambda)\mathrm{d}u$ we obtain + +$$ +\int_ {0} ^ {1} F _ {T} \left(G _ {T} ^ {- 1} (u)\right) \mathrm {d} u = \rho (\lambda).
+$$ + +On the other hand, by differentiating $\rho (\lambda)$ with respect to $\lambda$ we get + +$$ +\begin{array}{l} \frac {\partial \rho}{\partial \lambda} = \frac {\partial}{\partial \lambda} \int_ {0} ^ {1} \Phi (\Phi^ {- 1} (u) + \lambda) \mathrm {d} u \\ = \int_ {0} ^ {1} \varphi (\Phi^ {- 1} (u) + \lambda) \mathrm {d} u. \\ \end{array} +$$ + +In the next step, by using the change of variable $s = \Phi^{-1}(u)$ we get that + +$$ +\begin{array}{l} \frac {\partial \rho}{\partial \lambda} = \int_ {- \infty} ^ {\infty} \varphi (s + \lambda) \varphi (s) \mathrm {d} s \\ = \frac {1}{2 \pi} \int_ {- \infty} ^ {\infty} \exp \left(- \frac {(s + \lambda) ^ {2}}{2} - \frac {s ^ {2}}{2}\right) \mathrm {d} s \\ = \frac {\exp (- \lambda^ {2} / 4)}{2 \pi} \int_ {- \infty} ^ {+ \infty} \exp \left(- \frac {(\sqrt {2} s + \lambda / \sqrt {2}) ^ {2}}{2}\right) \mathrm {d} s \\ = \frac {\exp (- \lambda^ {2} / 4)}{2 \sqrt {2} \pi} \int_ {- \infty} ^ {+ \infty} \exp \left(- \frac {(t + \lambda / \sqrt {2}) ^ {2}}{2}\right) \mathrm {d} t = \frac {\exp (- \lambda^ {2} / 4)}{2 \sqrt {\pi}}. \\ \end{array} +$$ + +Therefore we get $\rho (\lambda) = \rho (0) + \int_0^\lambda \frac{\exp(-s^2 / 4)}{2\sqrt{\pi}} \, \mathrm{d}s = \rho (0) + \Phi (\frac{\lambda}{\sqrt{2}}) - \frac{1}{2}$ . Since $\rho (0) = 1 / 2$ , we arrive at $\rho (\lambda) = \Phi \left(\frac{\lambda}{\sqrt{2}}\right)$ . Next from the definition of $\rho (\lambda)$ we have + +$$ +\int_ {0} ^ {1} \left[ F _ {T} \left(G _ {T} ^ {- 1} (u)\right) - u \right] \mathrm {d} u = \rho (\lambda) - \rho (0). +$$ + +In the next step, we substitute the value of $\lambda$ into $\rho (\lambda)$ to get + +$$ +\int_ {0} ^ {1} \left[ F _ {T} \left(G _ {T} ^ {- 1} (u)\right) - u \right] \mathrm {d} u = \Phi \left(\frac {\widehat {\theta} ^ {\mathsf {T}} \left(\mu_ {\mathrm {s w a p}} - \mu\right)}{\sqrt {2} \| \widehat {\theta} \| _ {2}}\right) - \Phi (0).
+$$ + +Therefore we get + +$$ +\left| \int_ {0} ^ {1} \left[ F _ {T} \left(G _ {T} ^ {- 1} (u)\right) - u \right] \mathrm {d} u \right| = \left| \Phi \left(\frac {\widehat {\theta} ^ {\mathsf {T}} \left(\mu_ {\text {s w a p}} - \mu\right)}{\sqrt {2} \| \widehat {\theta} \| _ {2}}\right) - \Phi (0) \right|. +$$ + +On the other hand, the normal cdf satisfies the following property: + +$$ +\left| \Phi (t) - \frac {1}{2} \right| = \Phi (| t |) - \frac {1}{2}, \quad \forall t \in \mathbb {R}. +$$ + +By using this we get + +$$ +\left| \int_ {0} ^ {1} \left[ F _ {T} \left(G _ {T} ^ {- 1} (u)\right) - u \right] \mathrm {d} u \right| = \Phi \left(\left| \frac {\widehat {\theta} ^ {\mathsf {T}} (\mu_ {\text {s w a p}} - \mu)}{\sqrt {2} \| \widehat {\theta} \| _ {2}} \right|\right) - \frac {1}{2}. \tag {19} +$$ + +In the next step, by using the fact that $\mu_{\mathrm{swap},\ell} = \mu_{\ell}$ for $\ell \neq i,j$ we get that + +$$ +\begin{array}{l} \widehat {\theta} ^ {\mathrm {T}} \left(\mu_ {\text {s w a p}} - \mu\right) = \widehat {\theta} _ {i} \left(\mu_ {\text {s w a p}, i} - \mu_ {i}\right) + \widehat {\theta} _ {j} \left(\mu_ {\text {s w a p}, j} - \mu_ {j}\right) \\ = \widehat {\theta} _ {i} \left(\mu_ {j} - \mu_ {i}\right) + \widehat {\theta} _ {j} \left(\mu_ {i} - \mu_ {j}\right) \\ = - \left(\widehat {\theta} _ {i} - \widehat {\theta} _ {j}\right) \left(\mu_ {i} - \mu_ {j}\right).
\\ \end{array} +$$ + +Using this in (19) yields + +$$ +\left| \int_ {0} ^ {1} \left[ F _ {T} \left(G _ {T} ^ {- 1} (u)\right) - u \right] \mathrm {d} u \right| = \Phi \left(\frac {\left| \left(\widehat {\theta} _ {i} - \widehat {\theta} _ {j}\right) \left(\mu_ {i} - \mu_ {j}\right) \right|}{\sqrt {2} \| \widehat {\theta} \| _ {2}}\right) - \frac {1}{2}. \tag {20} +$$ + +On the other hand, by recalling the condition on $|\mu_i - \mu_j|$ from Theorem 3.3 we have + +$$ +\left| \mu_ {i} - \mu_ {j} \right| \geq \Phi^ {- 1} \left(\rho_ {n} (\alpha , \beta , 0) + 2 \Phi \left(\frac {\left| \mu_ {i} - \mu_ {j} \right|}{\sqrt {2}}\right) - \frac {1}{2}\right) \frac {\sqrt {2} \| \widehat {\theta} \| _ {2}}{\left| \widehat {\theta} _ {i} - \widehat {\theta} _ {j} \right|}. \tag {21} +$$ + +Combining (20) and (21) yields + +$$ +\left| \int_ {0} ^ {1} \left[ F _ {T} \left(G _ {T} ^ {- 1} (u)\right) - u \right] \mathrm {d} u \right| \geq \rho_ {n} (\alpha , \beta , 0). +$$ + +Finally, using Theorem 3.1 completes the proof. + +# B. Additional Numerical Experiments + +# B.1. Size of the test (full experiments) + +We refer to Figure 3 for the experiment on the size of the test. + +# B.2. Power of the test (full experiments) + +We refer to Figure 4 for the experiment on the power of the test. + +# B.3. Binary classification under mixture of Gaussians + +In this section, we consider the problem of testing for symmetric influence for binary classification under a mixture of Gaussians model. We consider the data-generative law (5) with $q = 1/2$ and feature dimension $d = 10$ . We consider $\widetilde{\mu} = [1,2,3,\dots,10]$ and let $\mu = \frac{\widetilde{\mu}}{\|\widetilde{\mu}\|_2}$ . We follow the score function given in Theorem 3.3 and consider $T(x,y) = y\widehat{\theta}^{\mathrm{T}}x$ for some $\widehat{\theta} \sim \mathsf{N}(0,I_d)$ . We consider three different numbers of samples $n = 5000, 20000, 50000$ for this experiment. Figure 5 shows the results.
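For this setup, the closed form from the proof of Theorem 3.3, $\int_0^1 [F_T(G_T^{-1}(u)) - u]\,\mathrm{d}u = \Phi(\lambda/\sqrt{2}) - \frac{1}{2}$ with $\lambda = \widehat{\theta}^{\mathsf{T}}(\mu_{\mathrm{swap}} - \mu)/\|\widehat{\theta}\|_2$, predicts how strongly each pair $(i,j)$ deviates from the null. A minimal numerical sketch of this identity (not from the paper; it assumes `numpy`/`scipy`, and the seed and choice of pairs are purely illustrative):

```python
import numpy as np
from scipy.integrate import quad
from scipy.stats import norm

rng = np.random.default_rng(0)
mu_tilde = np.arange(1.0, 11.0)
mu = mu_tilde / np.linalg.norm(mu_tilde)  # mu = mu_tilde / ||mu_tilde||_2
theta = rng.standard_normal(10)           # theta_hat ~ N(0, I_10)

def power_gap(i, j):
    """Both sides of the closed-form identity for the feature pair (i, j)."""
    mu_swap = mu.copy()
    mu_swap[[i, j]] = mu_swap[[j, i]]
    lam = theta @ (mu_swap - mu) / np.linalg.norm(theta)
    # Left side: integrate F_T(G_T^{-1}(u)) - u = Phi(Phi^{-1}(u) + lam) - u.
    lhs, _ = quad(lambda u: norm.cdf(norm.ppf(u) + lam) - u, 0.0, 1.0)
    # Right side: the closed form Phi(lam / sqrt(2)) - 1/2.
    rhs = norm.cdf(lam / np.sqrt(2)) - 0.5
    return lhs, rhs

for i, j in [(0, 1), (0, 9), (4, 5)]:
    lhs, rhs = power_gap(i, j)
    assert abs(lhs - rhs) < 1e-6
```

The deviation grows with $|(\widehat{\theta}_i - \widehat{\theta}_j)(\mu_i - \mu_j)|$, consistent with the derivation above.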
Each number is averaged over 1000 independent experiments. It can be observed that pairs with higher contrast between their $\mu$ values are rejected more often. + +![](images/81ab87d5e9fb417e2c4439505d5e9af02f44bde29ea671b3c5ded15fbedc6661.jpg) +(a) $\alpha = 0.1$ + +![](images/61738379473fd5a220c2026606a62cbb9feb11866cb5972dd0daed3f8d99651f.jpg) +(b) $\alpha = 0.15$ + +![](images/403714e093f98d393cc812d5fe4dc76c522ad255c59c8737d20af31ee5e50a9f.jpg) +(c) $\alpha = 0.2$ + +![](images/3191bde8de37086c420b706541ef9b09865a3deef0aa2d715e06ae7a5bc33836.jpg) +Figure 3: Average rejection rate of the null hypothesis 1 for $\tau = 0$ and features coming from an isotropic Gaussian distribution $x\sim \mathsf{N}(0,I_{10})$ . In this experiment, we consider $y|x\sim \mathsf{N}(x^{\top}Sx,1)$ for a positive definite matrix $S_{i,j} = 1 + \mathbb{I}(i = j)$ (2 on diagonal and 1 on off-diagonal entries). The structure of $S$ implies that the symmetric influence holds for every pair of features. We consider three significance levels $\alpha = 0.1,0.15,0.2$ (from left to right). The small cell $(i,j)$ in each plot represents rejection rates for testing symmetric influence for features $i$ and $j$ . In this experiment, the number of data points is 1000 and the method is run with the score function $T(x,y) = |y - x^{\top}\widehat{\theta}|$ for $\widehat{\theta}\sim \mathsf{N}(0,I_{10})$ . The reported numbers are averaged over 1000 experiments. It can be seen that the size of the test is controlled at the pre-determined significance levels. +Table 2. Verifying the robustness of our findings for two pairs of training samples that are highly close to each other. + +# B.4. Robustness of data models experiment + +In the second experiment, we consider a pair of training samples with 5 target examples. The first four targets are statistically significant (at level $\alpha = 0.05$ ), while target 5 gives a p-value of $0.21$ .
We then replace the two training samples with other pictures that are visually close to them, and compute the p-values for the new pair of images. We can see that the obtained p-values are close to those of the previous examples, which indicates the robustness of the results. The images along with the p-values can be seen in Table 2. + +![](images/ce851fd5eccb1aaf540117de87a679b9df1c9bb55cd792a6922eb5f2fd1cfab3.jpg) +(a) $\sigma = 1$ + +![](images/ee6bc768248ca70e7bd77e369831d86168f63373e3c26a84799a77a733c93e25.jpg) +(b) $\sigma = 2$ +Figure 4: Average rejection rate of the null hypothesis 1 for $\tau = 0$ and features with isotropic Gaussian distribution $x\sim \mathsf{N}(0,I_{10})$ . In this experiment, we consider $y|x\sim \mathsf{N}(x^{\mathsf{T}}\theta^{*},1)$ for $\theta^{*} = [1,1,2,2,3,3,4,4,5,5]$ . In this experiment the symmetric influence holds for the pairs of features (1,2), (3,4), (5,6), (7,8), and (9,10). The small cell $(i,j)$ in each plot represents rejection rates for testing symmetric influence for features $i$ and $j$ at significance level $\alpha = 0.1$ . In this experiment, the number of data points is 1000 and the method is run with the score function $T(x,y) = |y - x^{\mathsf{T}}\widehat{\theta}|$ for $\widehat{\theta}\sim \mathsf{N}(\theta^{*},\sigma^{2}I_{10})$ for three different $\sigma$ values $\sigma = 1,2,3$ (from left to right). The reported numbers are averaged over 1000 experiments.
+ +![](images/09cc73a43560d471ebe28fc4437aade0b74267d22d53382c6d7c5f0df0b35ab0.jpg) +(c) $\sigma = 3$ + +![](images/d3d52c18a76eeaf7ccfd520d262217158aca0f991cb159c73dd420a37115bf24.jpg) +(a) $n = 5000$ + +![](images/a08e242ce5133bada3d7d9203b543891c48588b580ebdbf9c67d5b46fd9e4f60.jpg) +(b) $n = 20000$ +Figure 5: Average rejection rate of the null hypothesis 1 for $\tau = 0$ and features with isotropic Gaussian distribution $x\sim \mathsf{N}(0,I_{10})$ . In this experiment, we consider binary classification under the mixture of Gaussians model (5) for $q = 1 / 2$ and $\mu = \frac{\widetilde{\mu}}{\|\widetilde{\mu}\|_2}$ for $\widetilde{\mu} = [1,2,\dots ,10]$ . The small cell $(i,j)$ in each plot represents rejection rates for testing symmetric influence for features $i$ and $j$ at significance level $\alpha = 0.1$ . In this experiment, three different numbers of data points are considered, $n = 5000,20000,50000$ (from left to right). We run Algorithm 1 with the score function $T(x,y) = yx^{\mathrm{T}}\widehat{\theta}$ for $\widehat{\theta}\sim \mathsf{N}(0,I_{10})$ . The reported numbers are averaged over 1000 experiments.
+ +![](images/565131cd8ef2f8386b2d2924e8f12587f41192cacbfe3828604e08f443f52f42.jpg) +(c) $n = 50000$ \ No newline at end of file diff --git a/amodelfreeclosenessofinfluencetestforfeaturesinsupervisedlearning/images.zip b/amodelfreeclosenessofinfluencetestforfeaturesinsupervisedlearning/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..3b636ad3e3b943e827f78d19307da91d4af82ca0 --- /dev/null +++ b/amodelfreeclosenessofinfluencetestforfeaturesinsupervisedlearning/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:0813d0c9eea6c1da71a972d1d6cc3bb74b5d62bb24a1a06e3971425d9ca2c939 +size 1190941 diff --git a/amodelfreeclosenessofinfluencetestforfeaturesinsupervisedlearning/layout.json b/amodelfreeclosenessofinfluencetestforfeaturesinsupervisedlearning/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..a76de44e93c18f34989e59dbd4f4758db7b71811 --- /dev/null +++ b/amodelfreeclosenessofinfluencetestforfeaturesinsupervisedlearning/layout.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:1826d3820622614142dd3144ac9a314c0fdae088938989cb831de6db2ac6a3e5 +size 1085736 diff --git a/amodernlookattherelationshipbetweensharpnessandgeneralization/280cb18e-bb94-44d9-becb-a39723cab52c_content_list.json b/amodernlookattherelationshipbetweensharpnessandgeneralization/280cb18e-bb94-44d9-becb-a39723cab52c_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..1e39b12b1dbd1103f1af4219d7a7c61aba829fd4 --- /dev/null +++ b/amodernlookattherelationshipbetweensharpnessandgeneralization/280cb18e-bb94-44d9-becb-a39723cab52c_content_list.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:55beea86d210b369de98d6b0bda23dd89f0ab7cab40ecae1eb99b3196f618f66 +size 341402 diff --git a/amodernlookattherelationshipbetweensharpnessandgeneralization/280cb18e-bb94-44d9-becb-a39723cab52c_model.json 
b/amodernlookattherelationshipbetweensharpnessandgeneralization/280cb18e-bb94-44d9-becb-a39723cab52c_model.json new file mode 100644 index 0000000000000000000000000000000000000000..35a4b49bb6ae56c72a51fb30c939048fc4993989 --- /dev/null +++ b/amodernlookattherelationshipbetweensharpnessandgeneralization/280cb18e-bb94-44d9-becb-a39723cab52c_model.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:2f33b6eed8272c7f949fbf5cb409c3516de788fc5a5a745986ac6be235f95561 +size 363046 diff --git a/amodernlookattherelationshipbetweensharpnessandgeneralization/280cb18e-bb94-44d9-becb-a39723cab52c_origin.pdf b/amodernlookattherelationshipbetweensharpnessandgeneralization/280cb18e-bb94-44d9-becb-a39723cab52c_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..2ebfa2892c04d06339d7414500efdf89e51e26de --- /dev/null +++ b/amodernlookattherelationshipbetweensharpnessandgeneralization/280cb18e-bb94-44d9-becb-a39723cab52c_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:41ee87aa59e564bcb856bff1cdb1732351ef356fc9173da7c9a7994ec0c7867a +size 10961264 diff --git a/amodernlookattherelationshipbetweensharpnessandgeneralization/full.md b/amodernlookattherelationshipbetweensharpnessandgeneralization/full.md new file mode 100644 index 0000000000000000000000000000000000000000..7f6510d1d4685d224e7575f2a00fadf55e0fba25 --- /dev/null +++ b/amodernlookattherelationshipbetweensharpnessandgeneralization/full.md @@ -0,0 +1,1644 @@ +# A Modern Look at the Relationship between Sharpness and Generalization + +Maksym Andriushchenko1 Francesco Croce2,3 Maximilian Müller2,3 Matthias Hein2,3 Nicolas Flammarion1 + +# Abstract + +Sharpness of minima is a promising quantity that can positively correlate with test error for deep networks and, when optimized during training, can improve generalization. 
However, standard sharpness is not invariant under reparametrizations of neural networks, and, to fix this, reparametrization-invariant sharpness definitions have been proposed, most prominently adaptive sharpness (Kwon et al., 2021). But does it really capture generalization in modern practical settings? We comprehensively explore this question in a detailed study of various definitions of adaptive sharpness in settings ranging from training from scratch on ImageNet and CIFAR-10 to fine-tuning CLIP on ImageNet and BERT on MNLI. We focus mostly on transformers for which little is known in terms of sharpness despite their widespread usage. Overall, we observe that sharpness does not correlate well with generalization but rather with some training parameters like the learning rate that can be positively or negatively correlated with generalization depending on the setup. Interestingly, in multiple cases, we observe a consistent negative correlation of sharpness with out-of-distribution error implying that sharper minima can generalize better. Finally, we illustrate on a simple model that the right sharpness measure is highly data-dependent, and that we do not understand well this aspect for realistic data distributions. Our code is available at https://github.com/tml-epfl/ sharpness-vs-generalization. + +# 1. Introduction + +Considering the sharpness of the training objective at a minimum has intuitive appeal: if the loss surface is slightly + +$^{1}$ EPFL $^{2}$ Tübingen AI Center $^{3}$ University of Tübingen. Correspondence to: Maksym Andriushchenko . + +Proceedings of the $40^{th}$ International Conference on Machine Learning, Honolulu, Hawaii, USA. PMLR 202, 2023. Copyright 2023 by the author(s). + +perturbed due to a train vs. test or out-of-distribution (OOD) discrepancy, flat minima of deep networks should still have low loss (Hochreiter & Schmidhuber, 1995; Keskar et al., 2016). 
On the theoretical side, sharpness appears in generalization bounds (Neyshabur et al., 2017; Dziugaite & Roy, 2018; Foret et al., 2021) but this fact alone is not necessarily informative for practical settings. For example, quantities like the VC-dimension typically correlate negatively with generalization contrary to what the generalization bound might suggest (Jiang et al., 2020). Importantly, it has been shown empirically that sharpness can also correlate well with generalization in common deep learning setups (Keskar et al., 2016; Jiang et al., 2020) which makes it a promising generalization measure that can potentially distinguish well-generalizing solutions. Additionally, empirical success of training methods that minimize sharpness such as sharpness-aware minimization (SAM) (Zheng et al., 2021; Wu et al., 2020; Foret et al., 2021) further suggests that sharpness can be an important quantity for generalization. + +Motivation: why revisiting sharpness? Many works imply or conjecture that flatter minima should generalize better (Xing et al., 2018; Zhou et al., 2020; Cha et al., 2021; Park & Kim, 2022; Lyu et al., 2022) for standard or OOD data. However, standard sharpness definitions do not correlate well with generalization (Jiang et al., 2020; Kaur et al., 2022) which can be partially due to their lack of invariance under reparametrizations that leave the model unchanged (Dinh et al., 2017; Granziol, 2020; Zhang et al., 2021). Adaptive sharpness appears to be more promising since it fixes the reparametrization issue and is shown to empirically correlate better with generalization (Kwon et al., 2021). However, the empirical evidence in Kwon et al. (2021) and other works that discuss sharpness (Keskar et al., 2016; Jiang et al., 2020; Dziugaite et al., 2020; Bisla et al., 2022) is restricted to small datasets like CIFAR-10 or SVHN. 
In addition, SAM appears to be particularly useful for new architectures like vision transformers (Chen et al., 2022) for which there have been no systematic studies of sharpness vs. generalization. Moreover, transfer learning is becoming the default option for vision and language tasks but not much is known about sharpness there. Finally, the relationship between sharpness and OOD generalization is also underexplored. These new developments motivate us to revisit the role of sharpness in these new settings. + +Contributions. We aim to provide a comprehensive study focusing specifically on adaptive sharpness in order to answer the following fundamental question: + +Can reparametrization-invariant sharpness capture generalization in modern practical settings? + +Towards this goal, we make the following contributions: + +- We provide extensive evaluations of multiple reparametrization-invariant sharpness measures for (1) training from scratch on ImageNet and CIFAR-10 using transformers and ConvNets, and (2) fine-tuning CLIP and BERT transformers on ImageNet and MNLI. +- We observe that sharpness does not correlate well with generalization but rather with some training parameters like the learning rate which can be positively or negatively correlated with generalization depending on the setup. +- Interestingly, in multiple cases, we observe a consistent negative correlation of sharpness with OOD generalization implying that sharper minima can generalize better. +- Finally, we provide an analysis on a simple model where we know the measure responsible for generalization. Our analysis suggests that (1) different sharpness definitions can capture totally different trends, and (2) the right sharpness measure is highly data-dependent. + +# 2. Related work + +Here we discuss the papers most closely related to our work. + +Systematic studies on sharpness vs. generalization. The seminal work of Keskar et al.
(2016) shows that the performance degradation of large-batch SGD (LeCun et al., 2012) is correlated with sharpness of minima. Neyshabur et al. (2017) explore different generalization measures that may explain generalization for deep networks, suggesting that sharpness can be a promising measure. Jiang et al. (2020) perform a systematic study that shows a strong correlation between sharpness and generalization on a large set of CIFAR-10/SVHN models trained with many different hyperparameters. Their experimental protocol is, however, criticized in Dziugaite et al. (2020) since it can obscure failures of generalization measures, which should instead be evaluated within the framework of distributional robustness. Vedantam et al. (2021) discuss OOD generalization on small datasets and evaluate a definition of sharpness which, however, does not correlate well with OOD generalization. Stutz et al. (2021) study the relationship between sharpness and generalization under $\ell_p$ -bounded adversarial perturbations. Andriushchenko & Flammarion (2022) study reasons behind the success of SAM and highlight the importance of using sharpness computed on a small subset of training points. Kaur et al. (2022) discuss that the maximum eigenvalue of the Hessian is not always predictive of generalization even for models obtained via standard training methods. + +Reparametrization-invariant sharpness definitions. The magnitude-aware sharpness of Keskar et al. (2016) mitigates but does not completely resolve the reparametrization issue. Liang et al. (2019) consider the Fisher-Rao metric, which is related to sharpness and invariant to network reparametrizations. Petzka et al. (2021) propose a sharpness measure based on the trace of the Hessian and show correlation for a small ConvNet on CIFAR-10. Tsuzuki et al. (2020) suggest using a specifically rescaled sharpness inspired by the PAC-Bayes theory and report high correlation with generalization for ResNets on CIFAR-10.
Most importantly for our work, Kwon et al. (2021) introduce adaptive sharpness which is reparametrization invariant, correlates well with generalization, and generalizes multiple existing sharpness definitions. + +Explicit and implicit sharpness minimization. The idea that flat minima can be beneficial for generalization dates back to Hochreiter & Schmidhuber (1995) and inspires multiple methods that optimize for more robust minima. These methods optimize different criteria ranging from random perturbations such as dropout (Srivastava et al., 2014) and Entropy-SGD (Chaudhari et al., 2016) to worst-case perturbations such as SAM (Foret et al., 2021) and its variations (Kwon et al., 2021; Zhuang et al., 2022; Du et al., 2022). Notably, Chen et al. (2022) suggest that SAM is particularly helpful for vision transformers on ImageNet scale and that standard transformers by default converge to very sharp minima. Concurrently, works on the implicit bias of SGD suggest implicit minimization of some hidden complexity measures related to flatness of minima (Keskar et al., 2016; Smith & Le, 2018; Xing et al., 2018). Izmailov et al. (2018) propose to average weights during SGD to improve generalization and motivate it by sharpness reduction. Smith et al. (2021) derive an implicit regularization term of SGD based on the gradient norm. Sharpness-related quantities based on the Hessian have been a focus of many recent works. E.g., Cohen et al. (2021); Arora et al. (2022); Damian et al. (2023) empirically and theoretically characterize the regime of full-batch gradient descent where the maximum eigenvalue of the Hessian becomes inversely proportional to the learning rate used for training. Blanc et al. (2020); Li et al. (2021); Damian et al. (2021) discover implicit minimization of the trace of the Hessian for label-noise SGD used as a proxy of standard SGD. 
The common theme behind these works is a focus on sharpness-related metrics as a tool to better understand generalization for deep networks.

# 3. Adaptive Sharpness, its Invariances, and Computation

In this section, we first provide background on adaptive sharpness, then discuss its invariance properties for modern architectures, and propose a way to compute worst-case sharpness efficiently.

# 3.1. Background on Sharpness

Sharpness definitions. We denote the loss on a set of training points $S$ as $L_S(\boldsymbol{w}) = \frac{1}{|S|} \sum_{(\boldsymbol{x}, \boldsymbol{y}) \in S} \ell_{\boldsymbol{x}\boldsymbol{y}}(\boldsymbol{w})$, where $\ell_{\boldsymbol{x}\boldsymbol{y}}(\boldsymbol{w}) \in \mathbb{R}_+$ represents some loss function (e.g., cross-entropy) on the training pair $(\boldsymbol{x}, \boldsymbol{y}) \in S$ computed with the network weights $\boldsymbol{w}$. For an arbitrary $\boldsymbol{w} \in \mathbb{R}^p$ (i.e., not necessarily a minimum), we define the adaptive average-case and adaptive worst-case $m$-sharpness with radius $\rho$ and with respect to a vector $\boldsymbol{c} \in \mathbb{R}^p$ as:

$$
S_{avg}^{\rho}(\boldsymbol{w}, \boldsymbol{c}) \triangleq \mathbb{E}_{\substack{\mathcal{S} \sim P_m \\ \boldsymbol{\delta} \sim \mathcal{N}(0,\, \rho^2 \mathrm{diag}(\boldsymbol{c}^2))}} L_{\mathcal{S}}(\boldsymbol{w} + \boldsymbol{\delta}) - L_{\mathcal{S}}(\boldsymbol{w}), \tag{1}
$$

$$
S_{max}^{\rho}(\boldsymbol{w}, \boldsymbol{c}) \triangleq \mathbb{E}_{\mathcal{S} \sim P_m} \max_{\|\boldsymbol{\delta} \odot \boldsymbol{c}^{-1}\|_p \leq \rho} L_{\mathcal{S}}(\boldsymbol{w} + \boldsymbol{\delta}) - L_{\mathcal{S}}(\boldsymbol{w}),
$$

where $\odot$ and $^{-1}$ denote elementwise multiplication and inversion, and $P_m$ is the data distribution that returns $m$ training pairs $(\boldsymbol{x}, \boldsymbol{y})$.
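The average-case quantity in Eq. (1) can be estimated directly by Monte Carlo sampling. A minimal sketch (our own illustration with a hypothetical `avg_adaptive_sharpness` helper; the loss is treated as full-batch, i.e., the expectation over $\mathcal{S} \sim P_m$ is dropped):

```python
import numpy as np

def avg_adaptive_sharpness(loss, w, c, rho, n_samples=1000, seed=0):
    """Monte Carlo estimate of Eq. (1) for a full-batch loss:
    E_delta[L(w + delta)] - L(w) with delta ~ N(0, rho^2 diag(c^2))."""
    rng = np.random.default_rng(seed)
    # Elementwise standard deviations rho * |c_i| realize the diagonal Gaussian.
    deltas = rng.normal(scale=rho * np.abs(c), size=(n_samples, w.size))
    return np.mean([loss(w + d) for d in deltas]) - loss(w)
```

As a sanity check, for $L(\boldsymbol{w}) = \frac{1}{2}\|\boldsymbol{w}\|_2^2$ at $\boldsymbol{w} = 0$ the estimate approaches $\frac{\rho^2}{2} \sum_i c_i^2$.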
Both average-case and worst-case sharpness have often been considered in the literature, and worst-case sharpness has generally been found to correlate better with generalization (Jiang et al., 2020; Dziugaite et al., 2020; Kwon et al., 2021), especially with a small $m$ (i.e., $|\mathcal{S}|$) in worst-case sharpness (Foret et al., 2021). Using $\boldsymbol{c} = |\boldsymbol{w}|$ leads to elementwise adaptive sharpness (Kwon et al., 2021) and makes the sharpness invariant under multiplicative reparametrizations that preserve the network, i.e., for any $\boldsymbol{c} \in \mathbb{R}^p$ such that $f(\boldsymbol{w} \odot \boldsymbol{c}) = f(\boldsymbol{w})$ we have:

$$
\begin{aligned}
S_{max}^{\rho}(\boldsymbol{w} \odot \boldsymbol{c}, |\boldsymbol{w} \odot \boldsymbol{c}|)
&= \mathbb{E}_{\mathcal{S}} \max_{\|\boldsymbol{\delta} \odot (|\boldsymbol{w}| \odot \boldsymbol{c})^{-1}\|_p \leq \rho} L_{\mathcal{S}}(\boldsymbol{w} \odot \boldsymbol{c} + \boldsymbol{\delta}) - L_{\mathcal{S}}(\boldsymbol{w} \odot \boldsymbol{c}) \\
&= \mathbb{E}_{\mathcal{S}} \max_{\|\boldsymbol{\delta}' \odot |\boldsymbol{w}|^{-1}\|_p \leq \rho} L_{\mathcal{S}}\left((\boldsymbol{w} + \boldsymbol{\delta}') \odot \boldsymbol{c}\right) - L_{\mathcal{S}}(\boldsymbol{w} \odot \boldsymbol{c}) \\
&= \mathbb{E}_{\mathcal{S}} \max_{\|\boldsymbol{\delta}' \odot |\boldsymbol{w}|^{-1}\|_p \leq \rho} L_{\mathcal{S}}(\boldsymbol{w} + \boldsymbol{\delta}') - L_{\mathcal{S}}(\boldsymbol{w}) = S_{max}^{\rho}(\boldsymbol{w}, |\boldsymbol{w}|),
\end{aligned}
$$

where we used the substitution $\boldsymbol{\delta}' := \boldsymbol{\delta} \odot \boldsymbol{c}^{-1}$. Similarly, one can show that $S_{avg}^{\rho}(\boldsymbol{w} \odot \boldsymbol{c}, |\boldsymbol{w} \odot \boldsymbol{c}|) = S_{avg}^{\rho}(\boldsymbol{w}, |\boldsymbol{w}|)$. This illustrates that the criticism of sharpness stated in Dinh et al.
(2017) does not apply to adaptive sharpness, and there is no need to "balance" the network in a pre-processing step as done, e.g., in Bisla et al. (2022).

Connections between different sharpness definitions. Here we generalize the analytical expressions for standard sharpness in the limit $\rho \rightarrow 0$, which depend on the first- or second-order terms frequently used in the literature (Blanc et al., 2020; Tsuzuku et al., 2020; Li et al., 2021; Damian et al., 2021). For a thrice differentiable loss $L(\boldsymbol{w})$, the average-case elementwise adaptive sharpness can be computed as (see App. A.1 for proofs):

$$
S_{avg}^{\rho}(\boldsymbol{w}, |\boldsymbol{w}|) = \mathbb{E}_{\mathcal{S} \sim P_m} \frac{\rho^2}{2} \operatorname{tr}\left(\nabla^2 L_{\mathcal{S}}(\boldsymbol{w}) \odot |\boldsymbol{w}||\boldsymbol{w}|^{\top}\right) + O(\rho^3). \tag{2}
$$

We note that the first-order term cancels out completely and plays no role. This is not the case for worst-case adaptive sharpness, where for $p = 2$ we get the following expression for every critical point that is not a local maximum:

$$
S_{max}^{\rho}(\boldsymbol{w}, |\boldsymbol{w}|) = \mathbb{E}_{\mathcal{S} \sim P_m} \frac{\rho^2}{2} \lambda_{\max}\left(\nabla^2 L_{\mathcal{S}}(\boldsymbol{w}) \odot |\boldsymbol{w}||\boldsymbol{w}|^{\top}\right) + O(\rho^3), \tag{3}
$$

otherwise the first-order term dominates and we get $\rho\, \mathbb{E}_{\mathcal{S} \sim P_m} \|\nabla L_{\mathcal{S}}(\boldsymbol{w}) \odot |\boldsymbol{w}|\|_2$, which resembles the implicit gradient regularization of Smith et al. (2021). Thus, worst-case sharpness with a small radius captures different properties of the loss surface depending on whether $\boldsymbol{w}$ is close to a minimum or not.
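Since for a purely quadratic loss the expansion in Eq. (2) is exact (the $O(\rho^3)$ term vanishes), the trace formula can be checked numerically against a sampled estimate of Eq. (1). A small sketch of our own construction (not from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
d, rho, n = 5, 0.1, 100_000
M = rng.normal(size=(d, d))
A = M @ M.T                      # PSD Hessian of the quadratic L(w) = 0.5 w^T A w
w = rng.normal(size=d)

# Sampled average-case adaptive sharpness, Eq. (1) with c = |w|.
deltas = rng.normal(scale=rho * np.abs(w), size=(n, d))
perturbed = 0.5 * np.einsum('ni,ij,nj->n', w + deltas, A, w + deltas)
mc = perturbed.mean() - 0.5 * w @ A @ w

# Prediction of Eq. (2): (rho^2 / 2) * tr(Hessian ⊙ |w||w|^T).
pred = 0.5 * rho**2 * np.trace(A * np.outer(np.abs(w), np.abs(w)))
```

The two quantities agree up to Monte Carlo noise, since the first-order term has zero mean under the symmetric Gaussian perturbation.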
We make use of these quantities in the last section to discuss insights from simple models. For the experiments, however, we evaluate a range of $\rho$ where the smallest $\rho$ well-approximates the above quantities.

What do we expect sharpness to capture? We are looking for a sharpness measure that can be predictive of generalization, meaning that it satisfies either of these two hypotheses:

- Strong hypothesis: sharpness is highly correlated with generalization, suggesting a possibility of a causal relation.
- Weak hypothesis: models with the lowest sharpness generalize well, suggesting that sharpness might be sufficient but not necessary for generalization.

To detect correlation, we follow the previous works of Jiang et al. (2020); Dziugaite et al. (2020); Kwon et al. (2021) and use the Kendall rank correlation coefficient:

$$
\tau(\boldsymbol{t}, \boldsymbol{s}) = \frac{2}{M(M-1)} \sum_{i < j} \operatorname{sign}\left(t_i - t_j\right) \operatorname{sign}\left(s_i - s_j\right), \tag{4}
$$

where $\boldsymbol{t}, \boldsymbol{s} \in \mathbb{R}^M$ are vectors of test error and sharpness values for $M$ different models. We adopt a less demanding setting than the previous works of Neyshabur et al. (2017); Jiang et al. (2020); Dziugaite et al. (2020), and only compare models within the same loss surface, motivated by the geometric intuition behind sharpness. This restriction rules out comparing models with different architectures (including different width and depth) or measuring sharpness on a different set of points, since both changes would alter the loss surface. For the same reason, we also do not consider the ability of sharpness to capture robustness to different amounts of noisy labels (unlike, e.g., Neyshabur et al. (2017)). We always evaluate sharpness on the same training points taken without any data augmentations.
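The rank correlation of Eq. (4) is simple to compute directly; a minimal sketch (function name is ours):

```python
import numpy as np

def kendall_tau(t, s):
    """Kendall rank correlation, Eq. (4), between test errors t and
    sharpness values s of M models; tied pairs contribute zero."""
    t, s = np.asarray(t, float), np.asarray(s, float)
    M = len(t)
    total = sum(np.sign(t[i] - t[j]) * np.sign(s[i] - s[j])
                for i in range(M) for j in range(i + 1, M))
    return 2.0 * total / (M * (M - 1))
```

Perfectly concordant rankings give $\tau = 1$ and perfectly reversed ones give $\tau = -1$.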
Moreover, we always compare models trained with exactly the same training sets but, at the same time, we allow the usage of algorithmic techniques such as data augmentation or mixup for training. + +# 3.2. Which Invariances Do We Need Sharpness to Capture for Modern Architectures? + +Throughout the paper, we focus on elementwise adaptive sharpness which, as we show, satisfies the main reparametrization invariances for ResNets and ViTs. Let us denote $f_{\boldsymbol{w}}: \mathbb{R}^d \to \mathbb{R}^K$ a network with parameters $\boldsymbol{w}$ , which returns the logits $f_{\boldsymbol{w}}(\boldsymbol{x}) \in \mathbb{R}^K$ for an input $\boldsymbol{x} \in \mathbb{R}^d$ . By a reparametrization invariance we mean a function $T: \mathbb{R}^p \to \mathbb{R}^p$ such that for every $\boldsymbol{w} \in \mathbb{R}^p$ and $\boldsymbol{x} \in \mathbb{R}^d$ it holds $f_{\boldsymbol{w}}(\boldsymbol{x}) = f_{T(\boldsymbol{w})}(\boldsymbol{x})$ . We briefly discuss here that adaptive sharpness also stays invariant for modern architectures like ResNets and ViTs involving normalization layers and self-attention. Finally, we discuss how to treat the scale-sensitivity of classification losses. + +Adaptive sharpness for ResNets. A typical block of a pre-activation ResNet between skip connections includes the following sequence of operations: $\mathrm{BN} \rightarrow \mathrm{ReLU} \rightarrow \mathrm{conv} \rightarrow \mathrm{BN} \rightarrow \mathrm{ReLU} \rightarrow \mathrm{conv}$ where BN denotes BatchNorm. 
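A block like the one above is left unchanged when the affine BatchNorm parameters are multiplied by some $\alpha > 0$ and the following convolution is divided by the same $\alpha$, because ReLU is positive one-homogeneous. This can be checked numerically with a toy stand-in of our own (1-D, the convolution replaced by a matrix product, and BatchNorm reduced to its affine part with frozen statistics):

```python
import numpy as np

rng = np.random.default_rng(0)
relu = lambda z: np.maximum(z, 0.0)

x = rng.normal(size=8)
gamma, beta = rng.normal(size=8), rng.normal(size=8)  # affine BN parameters
W = rng.normal(size=(8, 8))                           # "conv" as a matrix

def block(gamma, beta, W, x):
    # BN (affine part, statistics frozen) -> ReLU -> conv
    return W @ relu(gamma * x + beta)

alpha = 2.5
out = block(gamma, beta, W, x)
out_rescaled = block(alpha * gamma, alpha * beta, W / alpha, x)
# The two outputs coincide: ReLU(alpha * z) / alpha = ReLU(z) for alpha > 0.
```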
So we need to make sure that the sharpness definition we use is invariant to transformations that leave the network unchanged: (1) multiplication of the affine BatchNorm parameters by $\alpha \in \mathbb{R}_+$ and division of the subsequent convolutional parameters by the same $\alpha$ (since ReLU is positive one-homogeneous, i.e., $\mathrm{ReLU}(\alpha z)/\alpha = \mathrm{ReLU}(z)$), and (2) multiplication of the convolutional layer by any $\alpha \in \mathbb{R}_+$ due to the scale-invariance of the subsequent BatchNorm layer. Both multiplicative invariances are satisfied by elementwise adaptive sharpness since $S_{max}^{\rho}(\boldsymbol{w} \odot \boldsymbol{c}, |\boldsymbol{w} \odot \boldsymbol{c}|) = S_{max}^{\rho}(\boldsymbol{w}, |\boldsymbol{w}|)$ as shown above.

Adaptive sharpness for ViTs. A typical MLP block of ViTs contains the following operations: $\mathrm{LN} \rightarrow \mathrm{Linear} \rightarrow \mathrm{GELU} \rightarrow \mathrm{Linear}$ where LN denotes LayerNorm, and pre-softmax self-attention weights are computed as $ZW_{Q}W_{K}^{\top}Z^{\top}$ where $Z \in \mathbb{R}^{P \times D}$ is the matrix of $P$ $D$-dimensional tokens. The network thus has the following invariances to multiplication/division by $\alpha$: (1) between LN and Linear in the MLP, (2) between $W_{Q}$ and $W_{K}$ in self-attention, (3) between two Linear layers that have a GELU in-between, for which $\mathrm{GELU}(\alpha z)/\alpha \approx \mathrm{GELU}(z)$. Moreover, the initial part of the network (Linear $\rightarrow$ LN) is invariant to the scale of the Linear layer. As for ResNets, all these invariances are multiplicative, so the argument for the invariance of elementwise adaptive sharpness is the same.

Scale-sensitivity for classification losses.
However, adaptive sharpness remains sensitive to the scale of the classifier, meaning that the sharpness, together with the cross-entropy loss, keeps decreasing toward zero after reaching zero training error. This can be seen even for linear models, for which scaling the weight vector by a constant changes the adaptive sharpness as shown in Fig. 1.

![](images/20f3f6a8a7d802282935f43c6f5f0b6f76ceda1c770a27515e373e61f0dea9e1.jpg)
Figure 1: Sensitivity of adaptive sharpness to weight scaling for a linear model that achieves zero training error.

To fix this issue, Tsuzuku et al. (2020) propose to normalize the logits $f_{\boldsymbol{w}}$, i.e.:

$$
\tilde{f}_{\boldsymbol{w}}(\boldsymbol{x}) \triangleq \frac{f_{\boldsymbol{w}}(\boldsymbol{x})}{\sqrt{\frac{1}{K} \sum_{i=1}^{K} \left(f_{\boldsymbol{w}}(\boldsymbol{x})_i - f_{avg}(\boldsymbol{x})\right)^2}}, \tag{5}
$$

where $f_{avg}(\boldsymbol{x}) = \frac{1}{K}\sum_{j=1}^{K} f_{\boldsymbol{w}}(\boldsymbol{x})_j$. This provably fixes the scaling issue, meaning that scaling the output layer by $\alpha \in \mathbb{R}_+$ does not affect the normalized logits. Moreover, this change can make models with different training losses more comparable to each other.

# 3.3. How to Compute Worst-Case Sharpness Efficiently?

Estimating worst-case sharpness involves solving a constrained maximization problem, typically via projected gradient ascent, which can be sensitive to its hyperparameters, primarily the step size. To avoid extensive grid searches over the hyperparameters of gradient ascent for each model, we choose to use Auto-PGD (Croce & Hein, 2020) (see Algorithm 1 in the Appendix for the precise formulation). Auto-PGD is a hyperparameter-free method designed to accurately estimate adversarial robustness by solving an optimization problem similar to worst-case sharpness, but over the input space instead of the parameter space.
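Going back to Eq. (5), the logit normalization amounts to dividing by the per-example standard deviation over the $K$ classes; a minimal sketch (function name is ours):

```python
import numpy as np

def normalize_logits(f):
    """Eq. (5): divide the logits of each example by the std over its
    K classes, which cancels any positive rescaling of the output layer."""
    f = np.asarray(f, dtype=float)
    f_avg = f.mean(axis=-1, keepdims=True)
    std = np.sqrt(np.mean((f - f_avg) ** 2, axis=-1, keepdims=True))
    return f / std
```

In particular, `normalize_logits(alpha * f)` equals `normalize_logits(f)` for any $\alpha > 0$, which is the claimed scale invariance.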
As in $\ell_{\infty}$ and $\ell_{2}$ versions of Auto-PGD, for each gradient step, we use gradient-sign and plain-gradient updates, respectively, but we make them proportional to $|\pmb{w}|$ , to better take into account the geometry induced by elementwise adaptive sharpness. We show in Sec. H.2 in Appendix that as few as 20 steps are typically sufficient to converge with Auto-PGD. + +# 4. Sharpness vs. Generalization: Modern Setup + +The current understanding of the relationship between sharpness and generalization is based on experiments on + +![](images/aac2ea56f30e8e4fdd15ca940c54beeb2f781a14d446f4120e8e673f4ad47eda.jpg) + +![](images/e42272f7d47041ba819712fc2314317a17576ab7b37921b8e1ad13474620dda8.jpg) +Figure 2: ViT-B/16 trained from scratch on ImageNet-1k. We show for 56 models from Steiner et al. (2021) the test error on ImageNet and its OOD variants vs. worst-case $\ell_{\infty}$ sharpness with (top) or without (bottom) normalization at $\rho = 0.002$ . The color indicates models trained with stochastic depth (sd) and dropout (do), markers and their size indicate the strength of weight decay (wd) and augmentations (aug), and $\tau$ indicates the rank correlation coefficient from Eq. (4). Overall, the correlation of sharpness with test error is either close to zero or even negative. + +non-residual convolution networks and small datasets like CIFAR-10 and SVHN (Jiang et al., 2020). We revisit here this relationship for state-of-the-art transformers trained from scratch on ImageNet-1k and CLIP / BERT fine-tuned on ImageNet-1k / MNLI. We explore both in-distribution (ID) and out-of-distribution (OOD) generalization due to the common intuition that flatter models are expected to be more robust (Cha et al., 2021). We focus on worst-case $\ell_{\infty}$ adaptive sharpness with low $m$ (256) since it appears to be one of the most promising sharpness definitions (Kwon et al., 2021). 
We compute sharpness with and without logit normalization, and provide average-case sharpness for different radii $\rho$ in the Appendix. We focus primarily on the relationship between sharpness and test error, but we also discuss sharpness vs. generalization gap in Sec. B in the Appendix.

Training on ImageNet-1k from scratch. To investigate the relationship between sharpness and generalization in large-scale settings, we evaluate ViT models from Steiner et al. (2021), using ViT-B/16-224 weights. These were trained from scratch on ImageNet-1k for 300 epochs with different hyperparameter settings, and subsequently fine-tuned on the same dataset for 20,000 steps with 2 different learning rates. The varied hyperparameters include augmentations, weight decay, and stochastic depth / dropout, leading to a rich pool of 56 models with test errors ranging from $21.8\%$ to $37.2\%$. As shown in Figure 2 (first column), the sharpness measure cannot effectively distinguish model performance, whether computed with or without logit normalization. Logit-normalized sharpness effectively separates models with stochastic depth / dropout (sd/do from now on) from those without by grouping them into two distinct clusters (blue and orange). However, these clusters do not correspond to a separation by test error. For the OOD tasks (ImageNet-R, ImageNet-Sketch, ImageNet-A), within each cluster, the models trained with higher weight decay yield lower test error fairly consistently. However, this ranking is not captured by sharpness, which only disentangles the sd/do clusters. For sharpness without logit normalization, the sd/do clusters are not well-separated. Surprisingly, there is a consistent negative correlation between sharpness and test error, both on ID and OOD data, i.e., the flattest models tend to have the largest test error. Evaluations for other radii, average-case sharpness measures (App. C) and for ViTs pretrained on IN-21k and fine-tuned on IN-1k (App.
D) similarly suggest that sharpness does not consistently capture generalization properties. When considering IN-1k and IN-21k pre-trained models together (App. E), we even find similar or higher sharpness for significantly better-generalizing models. Thus, for none of the settings studied can we confirm either the strong or the weak hypothesis.

Fine-tuning on ImageNet-1k from CLIP. We investigate fine-tuning from CLIP (Radford et al., 2021), an important approach due to the popularity of CLIP features (Ramesh et al., 2022), its fast training time, and its ability to achieve higher accuracy. We study the pool of classifiers obtained by Wortsman et al. (2022a), who fine-tuned a CLIP ViT-B/32 model on ImageNet multiple times, randomly selecting training hyperparameters such as the learning rate, number of epochs, weight decay, label smoothing, and augmentations. This set of 71 fine-tuned models, along with the base model, allows us to study how well generalization and training hyperparameters are captured by sharpness.

![](images/a886a4fdfe231352678195db1b45d41f030fa0b487e7c3be1b52ae113fb4f040.jpg)

![](images/8f7719956ad4b21bc8356be9d2b0c895cd59bf28e01a633ccfbcf69c00460d1a.jpg)
With logit normalization

![](images/c51aa271b649eaf454ff110d0af05a17ec725147f9dd4c5830affb78c9526fe8.jpg)

![](images/1eda9810898266b3c476423bb00411568a4a61f711321c29d5e25135e6265246.jpg)

![](images/155ddea0679b14fad28fdc39c7bfc0dbd2948a88682051ff3f3c39c4873c452b.jpg)

![](images/0e0ca7f83041be3e14376b669b87ce83d8d53c658ff0e274b94d126b637555a1.jpg)
Figure 3: Fine-tuning CLIP ViT-B/32 on ImageNet-1k. We show for 72 models from Wortsman et al. (2022a) the test error on ImageNet or its variants (distribution shifts) vs. worst-case $\ell_{\infty}$ sharpness with (top) or without (bottom) normalization at $\rho = 0.002$. Darker color indicates a larger learning rate used for fine-tuning.
+ +![](images/fd9dbb8f50bcb0ab0bf0df40f4c1c83d54b0e55af9159b9ee041d96da4bf6614.jpg) +Without logit normalization + +![](images/562def0dcb9778ccaf3c7f5bd8d50793007250c75b0266636cfaddbcde21b624.jpg) + +![](images/10fe89855c8595155a63b6d8610c27c4424af499bea335cc6965bf7de02c832b.jpg) + +![](images/e1690b5598b0099f722c5c6abc7c724b321a498d825bd902956576f42192b47c.jpg) + +The leftmost column of Fig. 3 illustrates that worst-case $\ell_{\infty}$ adaptive sharpness does not effectively predict which classifiers have the lowest test error on ImageNet. Furthermore, there is a consistent negative correlation between sharpness and test error when evaluating classifiers on the distribution shifts ImageNet-R (Hendrycks et al., 2021a), ImageNet-Sketch (Wang et al., 2019) and ImageNet-A (Hendrycks et al., 2021b) (second to fourth columns). We further notice that, in contrast with ImageNet, higher test errors on these datasets go in parallel with higher learning rates used for fine-tuning (darker color in the plots). Indeed, smaller learning rates lead to smaller changes in the features of the base CLIP model which are more robust to distribution shifts since they were obtained from a much larger dataset than ImageNet. Finally, similar observations hold for the other sharpness definition and radii (App. F). + +Fine-tuning on MNLI from BERT. We explore fine-tuning from BERT (Devlin et al., 2019), to expand our analysis beyond vision tasks. To study the linguistic generalization of multiple classifiers trained on the same dataset, McCoy et al. (2020) have fine-tuned BERT 100 times on the Multi-genre Natural Language Inference (MNLI) dataset (Williams et al., 2018) varying exclusively the random seed across runs. These random seeds affect the initialization of the classifier and the scanning order of the training data for SGD. All these classifiers achieve very similar in-distribution generalization, i.e. 
on MNLI test points, but behave differently on the out-of-distribution tasks represented by the HANS dataset (McCoy et al., 2019). For example, in one of HANS sub-domains the accuracy of the models ranges from $5\%$ to $55\%$ . We randomly choose 50 of the 100 available classifiers, and compute the different measures of sharpness for + +various radii. Fig. 4 shows how the worst-case $\ell_{\infty}$ adaptive sharpness, with and without logit normalization, correlates with test error on MNLI and three HANS tasks. We observe that the correlation is weak and does not exceed 0.04, even for datasets like HANS lexical (second column) where test errors vary significantly (between $45\%$ and $95\%$ ). Moreover, in some cases the correlation is weakly negative suggesting that on average sharper models tend to generalize slightly better. Results for other radii can be found in App. G. + +Summary of the findings. To conclude, none of the settings studied above support either the strong or weak hypotheses about the role of sharpness. Contrary to our expectations, CLIP models fine-tuned on ImageNet suggest that flatter solutions consistently generalize worse on OOD data. Finally, sharpness is not useful to distinguish different solutions found by fine-tuning BERT on MNLI. All this evidence suggests that the intuitive ideas about the generalization benefits of flat minima are not supported in the modern settings. + +# 5. Why Doesn't Sharpness Correlate Well with Generalization? + +The goal of this section is to clarify the disconnect between sharpness and generalization in the modern setup. We first revisit sharpness in a controlled environment on CIFAR-10, then explore the different sharpness definitions for a simple model where generalization is well understood. + +# 5.1. The Role of Sharpness in a Controlled Setup + +Motivation. 
We consider three potential explanations for why sharpness does not correlate well with generalization in + +![](images/58053775318ffe51e6379c76542fac15f8cee880e0a1af03133ab3a70c53b1f7.jpg) + +![](images/f8fdf5260fdee53ad4c3718d5a388ef935a0eeb8bf08165e488eca702e9b37d2.jpg) +With logit normalization + +![](images/7d5b97b5868467ef777e50548e66721667e5773faa34cf5968ca38f65b8c9d8f.jpg) + +![](images/e9a953ad80233700a8733c3fa2c9b613ad04bae02e87fea28ff53a17ac591219.jpg) + +![](images/e04687517587009667b6e458c708df48d0358833573c3ff53078002e0a895ecf.jpg) + +![](images/dd96f7a30770fe06839317a0ea45febd3c22f8c5e2b3e8684678455e9cfe6135.jpg) + +![](images/433b2da3257abd8df6ef6179b589a117fa99091f4b0622ba9ffa087bcd7f01cc.jpg) + +![](images/c2396aee6ef628b04c4aed2fb88850612c7c9b42c618a411ee07c92e8d1b21a5.jpg) +Figure 4: Fine-tuning BERT on MNLI. We show for 50 models the error on MNLI or out-of-distribution domains (HANS subsets) vs worst-case $\ell_{\infty}$ sharpness with (top) or without (bottom) normalization at $\rho = 0.0005$ . Darker color indicates higher test error on MNLI. + +![](images/99478482398a99781401e83e8a3487db14ea7eeb6b3415e1b7a174d0c5750ca6.jpg) + +![](images/f77f119df1e9600b0d4e0afe9e134de956002968386de8aeea2e95eec6a3daeb.jpg) + +the previous section: (1) the use of transformers instead of typical convolutional networks, (2) the use of much larger datasets (ImageNet vs. CIFAR-10), (3) the need to measure sharpness closer to a global minimum. We thus train 200 ResNets-18 and 200 ViTs on CIFAR-10 in a setting similar to Jiang et al. (2020) and Kwon et al. (2021), and evaluate sharpness only for models that reach at most $1\%$ training error. This is in contrast to the ImageNet models from the previous section that are not necessarily trained to $\approx 0\%$ training error as it is usually not necessary in practice. 
Being closer to a global minimum ensures that the worst-case sharpness captures more of the curvature, by preventing the first-order term from dominating in Eq. (3).

Setup. We train models for 200 epochs using SGD with momentum and linearly decreasing learning rates after a linear warm-up over the first $40\%$ of iterations. We use the SimpleViT architecture from the vit-pytorch library, which is a modification of the standard ViT (Dosovitskiy et al., 2021) with a fixed positional embedding and global average pooling instead of the CLS embedding. We vary the learning rate, $\rho \in \{0, 0.05, 0.1\}$ of SAM (Foret et al., 2021), mixup ($\alpha = 0.5$) (Zhang et al., 2018), and standard augmentations combined with RandAugment (Cubuk et al., 2020). We only show models that have $\leq 1\%$ training error.

Observations. We benchmark 12 different sharpness definitions: $\ell_2$ vs. $\ell_{\infty}$, average- vs. worst-case, standard vs. adaptive, with vs. without logit normalization, and consider different perturbation radii $\rho$. We report most of these results in App. H and here highlight only $\ell_{\infty}$ adaptive sharpness in Fig. 5. We observe that for ResNets there is a strong correlation between sharpness and test error, but only within each subgroup of training parameters such as augmentations and mixup. Importantly, sharpness does not correctly capture generalization between different subgroups, leading to low positive or negative correlation (0.30 and -0.36). For ViTs, we do not observe a strong positive correlation even within each subgroup (in fact, without logit normalization the correlation is noticeably negative, $-0.68$), and many models with an order of magnitude difference in sharpness can have the same test error. Moreover, we do not consistently observe that models with the lowest sharpness generalize best. For OOD generalization on common image corruptions (Hendrycks & Dietterich, 2019), the trend is even less clear and the subgroups are mixed.
We note that similar conclusions hold for other sharpness radii $\rho$ and definitions which we show in App. H.4. Moreover, in App. H we also analyze the role of data points used to evaluate sharpness (with and without augmentations), number of iterations of Auto-PGD for worst-case sharpness, and different $m$ in worst-case $m$ -sharpness (Foret et al., 2021). In conclusion, even in this controlled small-scale setup that includes more established architectures like ResNets, we find no empirical support to either the strong or weak hypothesis. + +Sharpness captures the learning rate even when it is not helpful to predict generalization. Prior works have shown a robust link between the learning rate of first-order methods and standard sharpness definitions such as $\lambda_{\max}(\nabla^2 L(\boldsymbol{w}))$ and $\operatorname{tr}(\nabla^2 L(\boldsymbol{w}))$ (Cohen et al., 2021; Wu et al., 2022). However, the connection between the learning rate and adaptive sharpness remains elusive, so we investigate it empirically in Fig. 6. For both ResNets and ViTs, we observe a significant negative correlation, especially within each subgroup defined by the same values of augment $\times$ mixup. This is + +![](images/cc0c359fb6403a2eb0b2565635f465d29e389b4d741774a7001c02c26b23b2f0.jpg) +ResNets-18 with logit normalization + +![](images/fc51f7cc283f08ffbc4d08dd1153be2498551cf4b71a1cb18241871a0e7820c5.jpg) + +![](images/b0a903c49da5d52f2edc1c233b3a42e1f7913ddcbc428e630c3a610a0be0391e.jpg) +ViTs with logit normalization + +![](images/7e92612a17ee4c2491a0c53abff95528b4cc9ad3297c4a13d7e3273fb18e0ffa.jpg) + +![](images/0af8264af2d6edb208ec55b50dfccd9d1f460db27e99bfbdf8af0cd54135f77a.jpg) +ResNets-18 without logit normalization + +![](images/9f5f4d1c1aab815e2edea38166aae8c346000a3f8f2b6b37d0c669d6dccf8d8d.jpg) +Figure 5: Training from scratch on CIFAR-10. Normalized and unnormalized $\ell_{\infty}$ adaptive sharpness vs. 
standard and OOD test error on common corruptions for ResNets-18 and ViTs. For other sharpness definitions ( $\ell_{2} / \ell_{\infty}$ , average-/worst-case, etc) and multiple sharpness radii $\rho$ , see App. H.4. + +![](images/64ac7cea1381ea596609e6f120c352b171a78816c4854bea4fd52a18138e997a.jpg) +ViTs without logit normalization + +![](images/8289255d7c447b4e6023b6738addba28c44e8093ae71aee444e38c4f91356095.jpg) + +![](images/44f952ce4dddc08398c42e8096c4661631089534726c333ad6b1569c37b1001b.jpg) +ResNets-18 + +![](images/3e500bfb4054e6d5d853fbed97cf51b60e76acadd2932c28f6b75823f9734b27.jpg) +Vision transformers + +![](images/5f45ff4058abc1c0d092f8c4ddb57d6bb969e937e26b3dd3214467e7f6406445.jpg) +Figure 6: Training from scratch on CIFAR-10. Sharpness negatively correlates with the learning rate, especially within each subgroup defined by the same values of augment $\times$ mixup. + +![](images/04e00382c780bc82f5428619fa9ebec3d8e45c98144c3f73e8ed23c6a7efaed7.jpg) + +however not always a desirable property for predicting generalization. On the one hand, monotonically capturing the learning rates can be useful in setting like training ResNets from scratch (Li et al., 2019). On the other hand, large learning rates do not preserve the original features and can significantly harm OOD generalization for fine-tuning (Wortsman et al., 2022b). We also see a negative correlation between sharpness and learning rate for CLIP models fine-tuned on ImageNet in Fig. 20, shown in App. F. However, for these models, we do not have subgroups as clearly defined as for the CIFAR-10 models so we cannot see a more fine-grained trend. Finally, we note that whenever learning rates have + +a beneficial regularization effect, it is closely tied to the amount of stochastic noise in SGD (Jastrzebski et al., 2017; Andriushchenko et al., 2023). This amount is equally determined by other hyperparameters like batch size, momentum coefficient, or weight decay for normalized networks (see Li et al. 
(2020) for a discussion on the intrinsic learning rate). These parameters are commonly varied in studies on sharpness vs. generalization (Jiang et al., 2020; Kwon et al., 2021; Bisla et al., 2022) but all reflect essentially the same underlying trend. + +# 5.2. Is Sharpness the Right Quantity in the First Place? Insights from Simple Models + +Here, we study the link between sharpness and generalization for sparse regression with diagonal linear networks for which the $\ell_1$ norm of the solution is predictive of generalization. This simple model suggests that sharpness measures which are universally correlated with better generalization across all possible data distributions simply do not exist. + +Diagonal linear networks are defined as predictors $\langle \pmb{x},\pmb{\beta}\rangle$ with parameterization $\beta = \pmb{u} \odot \pmb{v}$ for weights $\pmb{w} = \left[\begin{array}{c}\pmb{u}\\ \pmb{v}\end{array}\right] \in \mathbb{R}^{2d}$ . They have been widely studied as the simplest nontrivial neural network (Woodworth et al., 2020; Pesme et al., 2021). We consider an overparametrized sparse regression problem for a data matrix $\pmb{X} \in \mathbb{R}^{n \times d}$ and label vector $\pmb{y}$ : + +$$ +L (\boldsymbol {w}) := \left\| \boldsymbol {X} (\boldsymbol {u} \odot \boldsymbol {v}) - \boldsymbol {y} \right\| _ {2} ^ {2}, \tag {6} +$$ + +for which the ground truth $\beta^{*}$ is a sparse vector (i.e., most coordinates are zeros) and there exist many solutions $\pmb{w}$ such that $L(\pmb{w}) = 0$ . 
Assuming whitened data $\pmb{X}^{\top}\pmb{X} = \pmb{I}$ and that $\pmb{w}$ is a global minimum, the Hessian of the loss $L$ + +simplifies to + +$$ +\nabla^ {2} L (\boldsymbol {w}) = \left[ \begin{array}{c c} \operatorname {d i a g} (\boldsymbol {v} \odot \boldsymbol {v}) & \operatorname {d i a g} (\boldsymbol {u} \odot \boldsymbol {v}) \\ \operatorname {d i a g} (\boldsymbol {u} \odot \boldsymbol {v}) & \operatorname {d i a g} (\boldsymbol {u} \odot \boldsymbol {u}) \end{array} \right]. +$$ + +We first consider standard definitions of local (i.e., $\rho \to 0$ ) sharpness for which we have a closed-form expression. The average-case local sharpness is equal to $\mathrm{tr}(\nabla^2 L(\pmb{w})) = \sum_{i=1}^{d} u_i^2 + v_i^2$ while the worst-case local sharpness at a minimum is $\lambda_{\max}(\nabla^2 L(\pmb{w})) = \max_{1 \leq i \leq d} v_i^2 + u_i^2$ (see Sec. A.2 for details). Importantly, both average- and worst-case local sharpness are not invariant under $\alpha$ -reparametrization $(\alpha \pmb{u}, \pmb{v} / \alpha)$ while the predictor $\beta = \pmb{u} \odot \pmb{v}$ is. This fact emphasizes the need for a measure of the sharpness that adjusts to the changing scale of the parameters as the adaptive sharpness. Indeed, with the carefully selected elementwise scaling $c_i = \sqrt{|v_i| / |u_i|}$ for $1 \leq i \leq d$ and $c_i = \sqrt{|u_i| / |v_i|}$ for $d < i \leq 2d$ , we obtain for the average-case and worst-case adaptive local sharpness + +$$ +\begin{array}{l} S _ {a v g} ^ {\rho} (\boldsymbol {w}, \boldsymbol {c}) = \frac {1}{2} \sum_ {i = 1} ^ {d} u _ {i} ^ {2} | v _ {i} | / | u _ {i} | + \frac {1}{2} \sum_ {i = 1} ^ {d} v _ {i} ^ {2} | u _ {i} | / | v _ {i} | = \| \boldsymbol {\beta} \| _ {1}, \\ S _ {m a x} ^ {\rho} (\pmb {w}, \pmb {c}) = \max _ {1 \leq i \leq d} | u _ {i} | | v _ {i} | = \| \beta \| _ {\infty}. 
\\ \end{array}
$$

We first note that both definitions of adaptive sharpness are invariant under the $\alpha$-reparametrization, as they depend only on the predictor $\beta$. However, average- and worst-case sharpness do not capture the same properties of $\beta$. In particular, $\|\beta\|_1$ correctly captures the sparsity of the linear predictor, which is a good indicator of generalization for a sparse $\beta^*$. In contrast, $\|\beta\|_{\infty}$ is more suitable for capturing how uniform the weights of $\beta$ are, which is a good predictor of generalization for a dense $\beta^*$. Finally, we note that using $c = w$ in adaptive sharpness would instead lead to $\|\beta\|_2^2$ and $\|\beta\|_{\infty}^2$, which would have a different interpretation. This simple model highlights that the sharpness definition that correlates well with generalization is data-dependent, and in general $S_{avg}$ and $S_{max}$ capture very different trends.

To further illustrate this point, we train 200 diagonal linear networks to $10^{-5}$ training loss on a sparse regression task ($d = 200$ with $90\%$ sparsity) with different learning rates and random initializations. The results in Fig. 7 illustrate that (1) $\|\pmb{u} \odot \pmb{v}\|_1$ is approximated well by $\frac{1}{2}\mathrm{tr}(\tilde{\nabla}^2 L(\pmb{w}))$, (2) $\mathrm{tr}(\tilde{\nabla}^2 L(\pmb{w}))$ correlates better than $\mathrm{tr}(\nabla^2 L(\pmb{w}))$, so the adaptive part is important, and (3) the relationship between $\mathrm{tr}(\tilde{\nabla}^2 L(\pmb{w}))$ and $\lambda_{\max}(\tilde{\nabla}^2 L(\pmb{w}))$ can even be reversed, showing that different sharpness definitions capture totally different trends. We also note that even with the right definition of sharpness, the correlation is not perfect (around $\tau = 0.8$) and there is always some non-negligible gap in predicting the test loss.
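The two closed forms above are easy to verify numerically. The following sketch (illustrative code, not from the paper) checks that the adaptive quantities reduce to $\|\beta\|_1$ and $\|\beta\|_\infty$ and are invariant under the $\alpha$-reparametrization, unlike $\mathrm{tr}(\nabla^2 L(\pmb{w})) = \sum_i u_i^2 + v_i^2$:

```python
import numpy as np

rng = np.random.default_rng(1)
d = 8
u, v = rng.standard_normal(d), rng.standard_normal(d)
beta = u * v

def adaptive_sharpness(u, v):
    # c_i = sqrt(|v_i|/|u_i|) for the u-block, sqrt(|u_i|/|v_i|) for the v-block
    s_avg = 0.5 * np.sum(u**2 * np.abs(v) / np.abs(u)) \
          + 0.5 * np.sum(v**2 * np.abs(u) / np.abs(v))
    s_max = np.max(np.abs(u) * np.abs(v))
    return s_avg, s_max

s_avg, s_max = adaptive_sharpness(u, v)
assert np.isclose(s_avg, np.linalg.norm(beta, 1))   # equals ||beta||_1
assert np.isclose(s_max, np.max(np.abs(beta)))      # equals ||beta||_inf

# Invariant under (alpha*u, v/alpha), unlike the trace of the unscaled Hessian:
alpha = 3.0
assert np.allclose(adaptive_sharpness(alpha * u, v / alpha), (s_avg, s_max))
tr_unscaled = lambda u, v: np.sum(u**2 + v**2)
assert not np.isclose(tr_unscaled(alpha * u, v / alpha), tr_unscaled(u, v))
```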
Overall, we conclude that finding a sharpness definition that correlates well with generalization requires understanding both the role of the data distribution and its interaction with the architecture. This is possible in very simple cases but appears extremely challenging for complex architectures like vision transformers on complex real-world datasets like ImageNet.

![](images/c097be40bbc9d5c82f60b85c500ffa1ac8aad77d3eed57452b2458089be760fb.jpg)

![](images/1e2e0ba5dafd2bdb83d8935dad9f9967f1f29bcc62d5199b8b341fab5fbd3a5b.jpg)

![](images/7fb08c3b5064c8dfcbff3fe8c4655f0256aa75cc75b2c96a1b6e799028bb6526.jpg)

![](images/294ab5b4fa93297bdb7124d4850a1228e4748704f75500f47c05098f9977c3b3.jpg)

Figure 7: Different generalization measures for diagonal linear networks. $\tilde{\nabla}^2$ denotes the rescaled Hessian corresponding to adaptive sharpness.

# 6. Conclusions

Our results suggest that even reparametrization-invariant sharpness is not a good indicator of generalization in the modern setting. While there definitely exist restricted settings where the correlation between sharpness and generalization is significantly positive (e.g., for ResNets on CIFAR-10 with a specific combination of augmentations and mixup), this no longer holds when we compare all models jointly. Moreover, the correlation, even within subgroups of models defined by augmentations, is much lower for vision transformers. Thus, we believe it is important to rethink the intuitive understanding of sharpness based on the geometric intuition about the shift of the loss surface. Moreover, our findings suggest that one should avoid blanket statements like "flatter minima generalize better": even when such statements are only intended to imply correlation, their correctness still depends on a number of factors such as the data distribution, model family, or initialization scheme (i.e., random vs. from pretrained weights).

# Acknowledgements

M.A.
was supported by the Google Fellowship and the Open Phil AI Fellowship. M.M. and M.H. were supported by the Carl Zeiss Foundation in the project "Certification and Foundations of Safe Machine Learning Systems in Healthcare". We thank David Stutz for very fruitful discussions at the initial stage of the project, Jana Vuckovic for experiments on sharpness that helped us to shape the project, and Aditya Varre for discussions on sharpness for diagonal networks.

# References

Andriushchenko, M. and Flammarion, N. Towards understanding sharpness-aware minimization. In ICML, 2022.
Andriushchenko, M., Varre, A., Pillaud-Vivien, L., and Flammarion, N. SGD with large step sizes learns sparse features. In ICML, 2023.
Arora, S., Li, Z., and Panigrahi, A. Understanding gradient descent on edge of stability in deep learning. In ICML, 2022.
Barbu, A., Mayo, D., Alverio, J., Luo, W., Wang, C., Gutfreund, D., Tenenbaum, J., and Katz, B. ObjectNet: A large-scale bias-controlled dataset for pushing the limits of object recognition models. In NeurIPS, 2019.
Bisla, D., Wang, J., and Choromanska, A. Low-pass filtering SGD for recovering flat optima in the deep learning optimization landscape. AISTATS, 2022.
Blanc, G., Gupta, N., Valiant, G., and Valiant, P. Implicit regularization for deep neural networks driven by an Ornstein-Uhlenbeck like process. In COLT, 2020.
Cha, J., Chun, S., Lee, K., Cho, H.-C., Park, S., Lee, Y., and Park, S. SWAD: Domain generalization by seeking flat minima. NeurIPS, 34:22405-22418, 2021.
Chaudhari, P., Choromanska, A., Soatto, S., LeCun, Y., Baldassi, C., Borgs, C., Chayes, J., Sagun, L., and Zecchina, R. Entropy-SGD: Biasing gradient descent into wide valleys. Journal of Statistical Mechanics: Theory and Experiment, 2019(12):124018, 2019.
Chen, X., Hsieh, C.-J., and Gong, B. When vision transformers outperform ResNets without pre-training or strong data augmentations. ICLR, 2022.
Cohen, J. M., Kaur, S., Li, Y., Kolter, J. Z., and Talwalkar, A.
Gradient descent on neural networks typically occurs at the edge of stability. ICLR, 2021.
Croce, F. and Hein, M. Reliable evaluation of adversarial robustness with an ensemble of diverse parameter-free attacks. ICML, 2020.
Cubuk, E. D., Zoph, B., Shlens, J., and Le, Q. V. RandAugment: Practical automated data augmentation with a reduced search space. NeurIPS, 2020.
Damian, A., Ma, T., and Lee, J. D. Label noise SGD provably prefers flat global minimizers. NeurIPS, 34:27449-27461, 2021.
Damian, A., Nichani, E., and Lee, J. D. Self-stabilization: The implicit bias of gradient descent at the edge of stability. ICLR, 2023.

Devlin, J., Chang, M.-W., Lee, K., and Toutanova, K. BERT: Pre-training of deep bidirectional transformers for language understanding. In NAACL, 2019.
Dinh, L., Pascanu, R., Bengio, S., and Bengio, Y. Sharp minima can generalize for deep nets. In ICML, pp. 1019-1028. PMLR, 2017.
Dosovitskiy, A., Beyer, L., Kolesnikov, A., Weissenborn, D., Zhai, X., Unterthiner, T., Dehghani, M., Minderer, M., Heigold, G., Gelly, S., et al. An image is worth 16x16 words: Transformers for image recognition at scale. ICLR, 2021.
Du, J., Zhou, D., Feng, J., Tan, V., and Zhou, J. T. Sharpness-aware training for free. In Oh, A. H., Agarwal, A., Belgrave, D., and Cho, K. (eds.), Advances in Neural Information Processing Systems, 2022. URL https://openreview.net/forum?id=xK6wRfL2mv7.
Dziugaite, G. K. and Roy, D. Entropy-SGD optimizes the prior of a PAC-Bayes bound: Generalization properties of Entropy-SGD and data-dependent priors. In ICML, pp. 1377-1386. PMLR, 2018.
Dziugaite, G. K., Drouin, A., Neal, B., Rajkumar, N., Caballero, E., Wang, L., Mitliagkas, I., and Roy, D. M. In search of robust measures of generalization. NeurIPS, 33:11723-11733, 2020.
Foret, P., Kleiner, A., Mobahi, H., and Neyshabur, B. Sharpness-aware minimization for efficiently improving generalization. In ICLR, 2021.
Fort, S., Brock, A., Pascanu, R., De, S., and Smith, S. L.
Drawing multiple augmentation samples per image during training efficiently decreases test error. arXiv preprint arXiv:2105.13343, 2021.
Granziol, D. Flatness is a false friend. arXiv preprint arXiv:2006.09091, 2020.
Hendrycks, D. and Dietterich, T. Benchmarking neural network robustness to common corruptions and perturbations. In ICLR, 2019.
Hendrycks, D., Basart, S., Mu, N., Kadavath, S., Wang, F., Dorundo, E., Desai, R., Zhu, T., Parajuli, S., Guo, M., Song, D., Steinhardt, J., and Gilmer, J. The many faces of robustness: A critical analysis of out-of-distribution generalization. ICCV, 2021a.
Hendrycks, D., Zhao, K., Basart, S., Steinhardt, J., and Song, D. Natural adversarial examples. CVPR, 2021b.
Hochreiter, S. and Schmidhuber, J. Simplifying neural nets by discovering flat minima. In NeurIPS, pp. 529-536, 1995.

Izmailov, P., Podoprikhin, D., Garipov, T., Vetrov, D., and Wilson, A. G. Averaging weights leads to wider optima and better generalization. UAI, 2018.
Jastrzebski, S., Kenton, Z., Arpit, D., Ballas, N., Fischer, A., Bengio, Y., and Storkey, A. Three factors influencing minima in SGD. arXiv preprint arXiv:1711.04623, 2017.
Jiang, Y., Neyshabur, B., Mobahi, H., Krishnan, D., and Bengio, S. Fantastic generalization measures and where to find them. ICLR, 2020.
Kaur, S., Cohen, J., and Lipton, Z. C. On the maximum Hessian eigenvalue and generalization. arXiv preprint arXiv:2206.10654, 2022.
Keskar, N. S., Mudigere, D., Nocedal, J., Smelyanskiy, M., and Tang, P. T. P. On large-batch training for deep learning: Generalization gap and sharp minima. ICLR, 2017.
Kwon, J., Kim, J., Park, H., and Choi, I. K. ASAM: Adaptive sharpness-aware minimization for scale-invariant learning of deep neural networks. ICML, 2021.
LeCun, Y. A., Bottou, L., Orr, G. B., and Müller, K.-R. Efficient backprop. In Neural networks: Tricks of the trade, pp. 9-48. Springer, 2012.
Li, Y., Wei, C., and Ma, T.
Towards explaining the regularization effect of initial large learning rate in training neural networks. In NeurIPS, 2019.
Li, Z., Lyu, K., and Arora, S. Reconciling modern deep learning with traditional optimization analyses: The intrinsic learning rate. NeurIPS, 33:14544-14555, 2020.
Li, Z., Wang, T., and Arora, S. What happens after SGD reaches zero loss? A mathematical framework. arXiv preprint arXiv:2110.06914, 2021.
Liang, T., Poggio, T., Rakhlin, A., and Stokes, J. Fisher-Rao metric, geometry, and complexity of neural networks. In AISTATS. PMLR, 2019.
Lyu, K., Li, Z., and Arora, S. Understanding the generalization benefit of normalization layers: Sharpness reduction. NeurIPS, 2022.
McCoy, R. T., Min, J., and Linzen, T. BERTs of a feather do not generalize together: Large variability in generalization across models with similar test set performance. In Proceedings of the Third BlackboxNLP Workshop on Analyzing and Interpreting Neural Networks for NLP, November 2020.
McCoy, T., Pavlick, E., and Linzen, T. Right for the wrong reasons: Diagnosing syntactic heuristics in natural language inference. In ACL, 2019.

Neyshabur, B., Bhojanapalli, S., McAllester, D., and Srebro, N. Exploring generalization in deep learning. In NeurIPS, pp. 5947-5956, 2017.
Park, N. and Kim, S. How do vision transformers work? ICLR, 2022.
Pesme, S., Pillaud-Vivien, L., and Flammarion, N. Implicit bias of SGD for diagonal linear networks: A provable benefit of stochasticity. In NeurIPS, 2021.
Petzka, H., Kamp, M., Adilova, L., Sminchisescu, C., and Boley, M. Relative flatness and generalization. NeurIPS, 34:18420-18432, 2021.
Radford, A., Kim, J. W., Hallacy, C., Ramesh, A., Goh, G., Agarwal, S., Sastry, G., Askell, A., Mishkin, P., Clark, J., et al. Learning transferable visual models from natural language supervision. In ICML, pp. 8748-8763. PMLR, 2021.
Ramesh, A., Dhariwal, P., Nichol, A., Chu, C., and Chen, M.
Hierarchical text-conditional image generation with CLIP latents. arXiv preprint arXiv:2204.06125, 2022.
Recht, B., Roelofs, R., Schmidt, L., and Shankar, V. Do ImageNet classifiers generalize to ImageNet? In ICML, pp. 5389-5400. PMLR, 2019.
Smith, S. L. and Le, Q. V. A Bayesian perspective on generalization and stochastic gradient descent. In ICLR, 2018.
Smith, S. L., Dherin, B., Barrett, D. G., and De, S. On the origin of implicit regularization in stochastic gradient descent. ICLR, 2021.
Srivastava, N., Hinton, G., Krizhevsky, A., Sutskever, I., and Salakhutdinov, R. Dropout: A simple way to prevent neural networks from overfitting. JMLR, 15(1), 2014.
Steiner, A., Kolesnikov, A., Zhai, X., Wightman, R., Uszkoreit, J., and Beyer, L. How to train your ViT? Data, augmentation, and regularization in vision transformers. TMLR, 2021.
Stutz, D., Hein, M., and Schiele, B. Relating adversarially robust generalization to flat minima. ICCV, 2021.
Tsuzuku, Y., Sato, I., and Sugiyama, M. Normalized flat minima: Exploring scale invariant definition of flat minima for neural networks using PAC-Bayesian analysis. In ICML, pp. 9636-9647. PMLR, 2020.
Vedantam, S. R., Lopez-Paz, D., and Schwab, D. J. An empirical investigation of domain generalization with empirical risk minimizers. In Beygelzimer, A., Dauphin, Y., Liang, P., and Vaughan, J. W. (eds.), NeurIPS, 2021.

Wang, H., Ge, S., Lipton, Z., and Xing, E. P. Learning robust global representations by penalizing local predictive power. In NeurIPS, pp. 10506-10518, 2019.
Williams, A., Nangia, N., and Bowman, S. A broad-coverage challenge corpus for sentence understanding through inference. In NAACL, 2018.
Woodworth, B., Gunasekar, S., Lee, J. D., Moroshko, E., Savarese, P., Golan, I., Soudry, D., and Srebro, N. Kernel and rich regimes in overparametrized models. In COLT. PMLR, 2020.
Wortsman, M., Ilharco, G., Gadre, S. Y., Roelofs, R., Gontijo-Lopes, R., Morcos, A.
S., Namkoong, H., Farhadi, A., Carmon, Y., Kornblith, S., et al. Model soups: Averaging weights of multiple fine-tuned models improves accuracy without increasing inference time. In ICML, pp. 23965-23998. PMLR, 2022a.
Wortsman, M., Ilharco, G., Kim, J. W., Li, M., Kornblith, S., Roelofs, R., Lopes, R. G., Hajishirzi, H., Farhadi, A., Namkoong, H., et al. Robust fine-tuning of zero-shot models. In CVPR, pp. 7959-7971, 2022b.
Wu, D., Xia, S.-t., and Wang, Y. Adversarial weight perturbation helps robust generalization. NeurIPS, 2020.
Wu, L., Wang, M., and Su, W. When does SGD favor flat minima? A quantitative characterization via linear stability. NeurIPS, 2022.
Xing, C., Arpit, D., Tsirigotis, C., and Bengio, Y. A walk with SGD. arXiv preprint arXiv:1802.08770, 2018.
Zhang, H., Cisse, M., Dauphin, Y. N., and Lopez-Paz, D. mixup: Beyond empirical risk minimization. ICLR, 2018.
Zhang, S., Reid, I., Pérez, G. V., and Louis, A. Why flatness does and does not correlate with generalization for deep neural networks. arXiv preprint arXiv:2103.06219, 2021.
Zheng, Y., Zhang, R., and Mao, Y. Regularizing neural networks via adversarial model perturbation. CVPR, 2021.
Zhou, P., Feng, J., Ma, C., Xiong, C., Hoi, S. C. H., et al. Towards theoretically understanding why SGD generalizes better than Adam in deep learning. NeurIPS, 2020.
Zhuang, J., Gong, B., Yuan, L., Cui, Y., Adam, H., Dvornek, N. C., Tatikonda, S., Duncan, J., and Liu, T. Surrogate gap minimization improves sharpness-aware training. In International Conference on Learning Representations, 2022. URL https://openreview.net/forum?id=edONMAnhLu-.

# Appendix

The appendix is organized as follows:

- Sec. A: omitted derivations for sharpness when $\rho \to 0$, first for the general case and then specifically for diagonal linear networks.
- Sec. B: figures with correlation between sharpness and generalization gap.
We observe a similar trend between sharpness and generalization gap as between sharpness and test error, which is reported in the main part.
- Sec. C: additional figures about ViTs from Steiner et al. (2021) trained with different hyperparameter settings on ImageNet-1k. We observe that different sharpness variants are not predictive of the performance on ImageNet and the OOD datasets, typically only separating models by stochastic depth / dropout, but not ranking them according to generalization, and often even yielding a negative correlation with OOD test error.
- Sec. D: figures about ViTs from Steiner et al. (2021) pre-trained on ImageNet-21k and then fine-tuned on ImageNet-1k. The observations are very similar to those for training on ImageNet-1k from scratch: sharpness variants are not predictive of the performance on ImageNet, and they often lead to a negative correlation with OOD test error.
- Sec. E: figures for a combined analysis of ViTs from Steiner et al. (2021) both with and without ImageNet-21k pretraining. We find that the better-generalizing models pretrained on ImageNet-21k have significantly higher worst-case sharpness and roughly equal or higher logit-normalized average-case adaptive sharpness, underlining that sharpness does not capture the generalization properties resulting from different pretraining datasets.
- Sec. F: additional details and figures for CLIP models fine-tuned on ImageNet. We observe that sharpness variants are not predictive of the performance on ImageNet and ImageNet-V2. Moreover, in most cases there is a negative correlation with test error in the presence of distribution shifts, which is likely related to the influence of the learning rate on sharpness.
- Sec. G: additional details and figures for BERT models fine-tuned on MNLI.
We find that none of the sharpness variants we consider is predictive of the generalization performance of the model, and in some cases there is a weak negative correlation between sharpness and test error on out-of-distribution tasks from HANS.
- Sec. H: additional details and ablation studies for CIFAR-10 models. We analyze the role of the data used to evaluate sharpness, the role of the number of iterations in Auto-PGD, the role of $m$ in $m$-sharpness, and the influence of different sharpness definitions and radii on the correlation with generalization. Overall, we conclude that none of the considered sharpness definitions or radii correlates positively with generalization, nor does low sharpness imply good performance of the model.

Also, for the sake of convenience, we provide in Table 1, Table 2, Table 3, and Table 4 a summary of the correlation coefficients $\tau$ between sharpness and generalization for all our experiments (except ablation studies).
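The rank correlation coefficient $\tau$ reported in these tables is Kendall's $\tau$ between a sharpness measure and test error over a pool of models. A minimal self-contained sketch of the computation (with made-up illustrative numbers, not values from the tables):

```python
def kendall_tau(x, y):
    """Kendall's rank correlation (tau-a, i.e., assuming no ties)."""
    n = len(x)
    concordant = discordant = 0
    for i in range(n):
        for j in range(i + 1, n):
            s = (x[i] - x[j]) * (y[i] - y[j])
            if s > 0:
                concordant += 1
            elif s < 0:
                discordant += 1
    return (concordant - discordant) / (n * (n - 1) / 2)

# Hypothetical per-model sharpness values and test errors:
sharpness  = [0.12, 0.45, 0.33, 0.80, 0.27]
test_error = [0.21, 0.25, 0.22, 0.30, 0.24]
tau = kendall_tau(sharpness, test_error)
print(f"tau = {tau:.2f}")          # tau = 0.80
```

In practice `scipy.stats.kendalltau` additionally handles ties and reports a p-value; the loop above only illustrates the definition.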
ImageNet-1k models trained from scratch (rank correlation coefficient τ)

| Sharpness | LogitNorm | ρ | IN | IN-v2 | IN-R | IN-Sketch | IN-A | ObjectNet |
|---|---|---|---|---|---|---|---|---|
| Worst-case ℓ∞ | Yes | 0.001 | 0.09 | 0.08 | 0.10 | 0.10 | -0.06 | 0.04 |
| Worst-case ℓ∞ | Yes | 0.002 | 0.08 | 0.08 | 0.09 | 0.09 | -0.07 | 0.03 |
| Worst-case ℓ∞ | Yes | 0.004 | -0.11 | -0.11 | -0.06 | -0.06 | -0.23 | -0.16 |
| Worst-case ℓ∞ | No | 0.001 | -0.42 | -0.43 | -0.27 | -0.28 | -0.45 | -0.45 |
| Worst-case ℓ∞ | No | 0.002 | -0.42 | -0.42 | -0.27 | -0.27 | -0.41 | -0.45 |
| Worst-case ℓ∞ | No | 0.004 | -0.34 | -0.34 | -0.20 | -0.20 | -0.36 | -0.36 |
| Avg-case ℓ∞ | Yes | 0.05 | 0.46 | 0.44 | 0.38 | 0.42 | 0.31 | 0.39 |
| Avg-case ℓ∞ | Yes | 0.1 | 0.44 | 0.43 | 0.39 | 0.43 | 0.29 | 0.39 |
| Avg-case ℓ∞ | Yes | 0.2 | 0.42 | 0.42 | 0.39 | 0.42 | 0.29 | 0.38 |
| Avg-case ℓ∞ | No | 0.05 | **-0.55** | **-0.56** | -0.40 | -0.42 | **-0.57** | **-0.60** |
| Avg-case ℓ∞ | No | 0.1 | -0.44 | -0.43 | -0.28 | -0.32 | -0.47 | -0.47 |
| Avg-case ℓ∞ | No | 0.2 | 0.13 | 0.15 | 0.26 | 0.23 | 0.05 | 0.11 |
+ +
ImageNet-1k models fine-tuned from IN-21k (rank correlation coefficient τ)

| Sharpness | LogitNorm | ρ | IN | IN-v2 | IN-R | IN-Sketch | IN-A | ObjectNet |
|---|---|---|---|---|---|---|---|---|
| Worst-case ℓ∞ | Yes | 0.001 | -0.49 | -0.49 | -0.44 | -0.33 | **-0.53** | -0.46 |
| Worst-case ℓ∞ | Yes | 0.002 | -0.48 | -0.48 | -0.46 | -0.33 | **-0.51** | -0.44 |
| Worst-case ℓ∞ | Yes | 0.004 | -0.45 | -0.43 | -0.41 | -0.33 | -0.45 | -0.42 |
| Worst-case ℓ∞ | No | 0.001 | -0.13 | -0.09 | -0.05 | 0.05 | -0.13 | -0.09 |
| Worst-case ℓ∞ | No | 0.002 | -0.10 | -0.03 | -0.01 | 0.11 | -0.07 | -0.02 |
| Worst-case ℓ∞ | No | 0.004 | -0.10 | -0.01 | -0.01 | 0.11 | -0.06 | 0.00 |
| Avg-case ℓ∞ | Yes | 0.05 | -0.11 | -0.08 | -0.11 | -0.07 | -0.06 | -0.06 |
| Avg-case ℓ∞ | Yes | 0.1 | -0.12 | -0.11 | -0.14 | -0.10 | -0.09 | -0.08 |
| Avg-case ℓ∞ | Yes | 0.2 | -0.25 | -0.24 | -0.25 | -0.23 | -0.25 | -0.24 |
| Avg-case ℓ∞ | No | 0.05 | -0.02 | -0.04 | -0.03 | -0.02 | -0.05 | -0.06 |
| Avg-case ℓ∞ | No | 0.1 | -0.07 | -0.10 | -0.08 | -0.08 | -0.11 | -0.10 |
| Avg-case ℓ∞ | No | 0.2 | -0.11 | -0.11 | -0.10 | -0.11 | -0.12 | -0.13 |
+ +
ImageNet-1k models fine-tuned from CLIP (rank correlation coefficient τ)

| Sharpness | LogitNorm | ρ | IN | IN-v2 | IN-R | IN-Sketch | IN-A | ObjectNet |
|---|---|---|---|---|---|---|---|---|
| Worst-case ℓ∞ | Yes | 0.001 | -0.04 | -0.16 | -0.23 | -0.26 | -0.25 | -0.36 |
| Worst-case ℓ∞ | Yes | 0.002 | 0.04 | -0.10 | -0.39 | -0.28 | -0.41 | -0.47 |
| Worst-case ℓ∞ | Yes | 0.004 | -0.08 | -0.19 | -0.12 | -0.16 | -0.17 | -0.27 |
| Worst-case ℓ∞ | No | 0.001 | 0.19 | 0.09 | -0.37 | -0.06 | **-0.57** | -0.48 |
| Worst-case ℓ∞ | No | 0.002 | 0.20 | 0.08 | **-0.51** | -0.18 | **-0.58** | **-0.51** |
| Worst-case ℓ∞ | No | 0.004 | 0.02 | -0.05 | **-0.51** | -0.27 | -0.45 | -0.33 |
| Avg-case ℓ∞ | Yes | 0.001 | -0.03 | -0.18 | -0.36 | -0.34 | -0.33 | -0.46 |
| Avg-case ℓ∞ | Yes | 0.002 | -0.21 | -0.32 | -0.02 | -0.27 | -0.06 | -0.21 |
| Avg-case ℓ∞ | Yes | 0.004 | -0.19 | -0.21 | 0.26 | -0.03 | 0.23 | 0.06 |
| Avg-case ℓ∞ | No | 0.001 | 0.13 | -0.01 | **-0.62** | -0.26 | **-0.67** | **-0.60** |
| Avg-case ℓ∞ | No | 0.002 | 0.06 | 0.03 | -0.34 | -0.12 | -0.50 | -0.37 |
| Avg-case ℓ∞ | No | 0.004 | 0.19 | 0.21 | -0.12 | 0.09 | -0.21 | -0.08 |
+ +Table 1: A summary of correlation between sharpness and generalization for all experiments on ImageNet. We boldface entries with $|\tau| > 0.5$ suggesting a reasonably strong correlation. LogitNorm stands for logit normalization and IN stands for ImageNet. + +
MNLI models fine-tuned from BERT (rank correlation coefficient τ)

| Sharpness | LogitNorm | ρ | MNLI | HANS-L | HANS-S | HANS-C |
|---|---|---|---|---|---|---|
| Worst-case ℓ∞ | Yes | 0.0005 | 0.04 | -0.09 | -0.14 | -0.21 |
| Worst-case ℓ∞ | Yes | 0.001 | -0.09 | -0.09 | -0.13 | -0.18 |
| Worst-case ℓ∞ | Yes | 0.002 | 0.05 | -0.09 | -0.14 | -0.17 |
| Worst-case ℓ∞ | No | 0.0005 | 0.04 | -0.24 | -0.22 | -0.07 |
| Worst-case ℓ∞ | No | 0.001 | 0.04 | -0.13 | -0.15 | -0.15 |
| Worst-case ℓ∞ | No | 0.002 | -0.11 | -0.15 | -0.12 | -0.13 |
| Avg-case ℓ∞ | Yes | 0.1 | -0.35 | -0.46 | -0.28 | 0.17 |
| Avg-case ℓ∞ | Yes | 0.2 | -0.37 | -0.48 | -0.28 | 0.24 |
| Avg-case ℓ∞ | Yes | 0.4 | 0.01 | -0.29 | -0.27 | 0.05 |
| Avg-case ℓ∞ | No | 0.1 | -0.34 | -0.31 | -0.23 | 0.13 |
| Avg-case ℓ∞ | No | 0.2 | -0.34 | **-0.58** | -0.39 | 0.16 |
| Avg-case ℓ∞ | No | 0.4 | 0.04 | -0.16 | -0.09 | 0.05 |
+ +Table 2: A summary of correlation between sharpness and generalization for all experiments on MNLI for models fine-tuned from BERT. We boldface entries with $|\tau| > 0.5$ suggesting a reasonably strong correlation. LogitNorm stands for logit normalization. + +ResNets-18 trained from scratch on CIFAR-10 + +
| Sharpness | LogitNorm | ρ | τ (CIFAR-10) | τ (CIFAR-10-C) |
|---|---|---|---|---|
| Standard avg-case ℓ2 | No | 0.05 | 0.14 | 0.04 |
| Standard avg-case ℓ2 | No | 0.1 | 0.26 | 0.19 |
| Standard avg-case ℓ2 | No | 0.2 | 0.28 | 0.21 |
| Standard avg-case ℓ2 | No | 0.4 | 0.28 | 0.20 |
| Standard worst-case ℓ2 | No | 0.25 | 0.17 | 0.10 |
| Standard worst-case ℓ2 | No | 0.5 | 0.24 | 0.16 |
| Standard worst-case ℓ2 | No | 1.0 | 0.25 | 0.18 |
| Standard worst-case ℓ2 | No | 2.0 | 0.22 | 0.14 |
| Adaptive avg-case ℓ2 | No | 0.05 | -0.37 | -0.46 |
| Adaptive avg-case ℓ2 | No | 0.1 | -0.50 | **-0.53** |
| Adaptive avg-case ℓ2 | No | 0.2 | -0.42 | -0.41 |
| Adaptive avg-case ℓ2 | No | 0.4 | -0.31 | -0.31 |
| Adaptive worst-case ℓ2 | No | 0.25 | -0.36 | -0.39 |
| Adaptive worst-case ℓ2 | No | 0.5 | -0.42 | -0.36 |
| Adaptive worst-case ℓ2 | No | 1.0 | -0.27 | -0.17 |
| Adaptive worst-case ℓ2 | No | 2.0 | -0.17 | -0.07 |
| Adaptive avg-case ℓ2 | Yes | 0.05 | 0.18 | 0.07 |
| Adaptive avg-case ℓ2 | Yes | 0.1 | 0.07 | -0.04 |
| Adaptive avg-case ℓ2 | Yes | 0.2 | -0.14 | -0.26 |
| Adaptive avg-case ℓ2 | Yes | 0.4 | -0.43 | **-0.58** |
| Adaptive worst-case ℓ2 | Yes | 0.25 | 0.19 | 0.14 |
| Adaptive worst-case ℓ2 | Yes | 0.5 | 0.07 | 0.00 |
| Adaptive worst-case ℓ2 | Yes | 1.0 | -0.13 | -0.22 |
| Adaptive worst-case ℓ2 | Yes | 2.0 | **-0.52** | **-0.58** |
| Standard avg-case ℓ∞ | No | 0.1 | 0.16 | 0.08 |
| Standard avg-case ℓ∞ | No | 0.2 | 0.28 | 0.21 |
| Standard avg-case ℓ∞ | No | 0.4 | 0.28 | 0.20 |
| Standard avg-case ℓ∞ | No | 0.8 | 0.28 | 0.20 |
| Standard worst-case ℓ∞ | No | 0.0005 | 0.29 | 0.23 |
| Standard worst-case ℓ∞ | No | 0.001 | 0.30 | 0.24 |
| Standard worst-case ℓ∞ | No | 0.002 | 0.30 | 0.24 |
| Standard worst-case ℓ∞ | No | 0.004 | 0.29 | 0.23 |
| Adaptive avg-case ℓ∞ | No | 0.1 | -0.36 | -0.47 |
| Adaptive avg-case ℓ∞ | No | 0.2 | **-0.53** | **-0.56** |
| Adaptive avg-case ℓ∞ | No | 0.4 | -0.41 | -0.41 |
| Adaptive avg-case ℓ∞ | No | 0.8 | -0.20 | -0.18 |
| Adaptive worst-case ℓ∞ | No | 0.001 | -0.36 | -0.42 |
| Adaptive worst-case ℓ∞ | No | 0.002 | -0.05 | -0.10 |
| Adaptive worst-case ℓ∞ | No | 0.004 | 0.25 | 0.20 |
| Adaptive worst-case ℓ∞ | No | 0.008 | 0.26 | 0.24 |
| Adaptive avg-case ℓ∞ | Yes | 0.1 | 0.18 | 0.07 |
| Adaptive avg-case ℓ∞ | Yes | 0.2 | 0.05 | -0.06 |
| Adaptive avg-case ℓ∞ | Yes | 0.4 | -0.23 | -0.37 |
| Adaptive avg-case ℓ∞ | Yes | 0.8 | -0.46 | **-0.62** |
| Adaptive worst-case ℓ∞ | Yes | 0.001 | 0.30 | 0.18 |
| Adaptive worst-case ℓ∞ | Yes | 0.002 | 0.29 | 0.16 |
| Adaptive worst-case ℓ∞ | Yes | 0.004 | 0.21 | 0.07 |
| Adaptive worst-case ℓ∞ | Yes | 0.008 | -0.04 | -0.19 |
+ +Table 3: A summary of correlation between sharpness and generalization for all experiments on CIFAR-10 for ResNets-18 trained from scratch. We boldface entries with $|\tau| > 0.5$ suggesting a reasonably strong correlation. LogitNorm stands for logit normalization. + +Vision transformers trained from scratch on CIFAR-10 + +
| Sharpness | LogitNorm | ρ | τ (CIFAR-10) | τ (CIFAR-10-C) |
|---|---|---|---|---|
| Standard avg-case ℓ2 | No | 0.005 | -0.45 | **-0.54** |
| Standard avg-case ℓ2 | No | 0.01 | -0.39 | -0.49 |
| Standard avg-case ℓ2 | No | 0.02 | -0.20 | -0.31 |
| Standard avg-case ℓ2 | No | 0.04 | -0.08 | -0.20 |
| Standard worst-case ℓ2 | No | 0.025 | **-0.59** | **-0.62** |
| Standard worst-case ℓ2 | No | 0.05 | -0.37 | -0.43 |
| Standard worst-case ℓ2 | No | 0.1 | -0.16 | -0.24 |
| Standard worst-case ℓ2 | No | 0.2 | -0.12 | -0.20 |
| Adaptive avg-case ℓ2 | No | 0.1 | -0.45 | -0.50 |
| Adaptive avg-case ℓ2 | No | 0.2 | -0.45 | -0.45 |
| Adaptive avg-case ℓ2 | No | 0.4 | -0.42 | -0.47 |
| Adaptive avg-case ℓ2 | No | 0.8 | -0.10 | 0.08 |
| Adaptive worst-case ℓ2 | No | 0.5 | **-0.64** | **-0.53** |
| Adaptive worst-case ℓ2 | No | 1.0 | -0.32 | -0.19 |
| Adaptive worst-case ℓ2 | No | 2.0 | -0.11 | -0.01 |
| Adaptive worst-case ℓ2 | No | 4.0 | -0.07 | -0.03 |
| Adaptive avg-case ℓ2 | Yes | 0.1 | -0.18 | -0.31 |
| Adaptive avg-case ℓ2 | Yes | 0.2 | -0.28 | -0.40 |
| Adaptive avg-case ℓ2 | Yes | 0.4 | -0.39 | -0.46 |
| Adaptive avg-case ℓ2 | Yes | 0.8 | -0.44 | **-0.52** |
| Adaptive worst-case ℓ2 | Yes | 0.25 | -0.21 | -0.12 |
| Adaptive worst-case ℓ2 | Yes | 0.5 | -0.24 | -0.17 |
| Adaptive worst-case ℓ2 | Yes | 1.0 | -0.22 | -0.19 |
| Adaptive worst-case ℓ2 | Yes | 2.0 | -0.14 | -0.11 |
| Standard avg-case ℓ∞ | No | 0.01 | -0.44 | **-0.54** |
| Standard avg-case ℓ∞ | No | 0.02 | -0.35 | -0.45 |
| Standard avg-case ℓ∞ | No | 0.04 | -0.17 | -0.28 |
| Standard avg-case ℓ∞ | No | 0.08 | -0.04 | -0.14 |
| Standard worst-case ℓ∞ | No | 0.00001 | **-0.61** | **-0.63** |
| Standard worst-case ℓ∞ | No | 0.00002 | -0.46 | **-0.51** |
| Standard worst-case ℓ∞ | No | 0.00004 | -0.25 | -0.31 |
| Standard worst-case ℓ∞ | No | 0.00008 | -0.16 | -0.22 |
| Adaptive avg-case ℓ∞ | No | 0.1 | -0.45 | **-0.53** |
| Adaptive avg-case ℓ∞ | No | 0.2 | -0.46 | -0.50 |
| Adaptive avg-case ℓ∞ | No | 0.4 | -0.45 | -0.44 |
| Adaptive avg-case ℓ∞ | No | 0.8 | -0.41 | -0.47 |
| Adaptive worst-case ℓ∞ | No | 0.0005 | **-0.68** | **-0.63** |
| Adaptive worst-case ℓ∞ | No | 0.001 | -0.43 | -0.40 |
| Adaptive worst-case ℓ∞ | No | 0.002 | -0.26 | -0.23 |
| Adaptive worst-case ℓ∞ | No | 0.004 | -0.18 | -0.18 |
| Adaptive avg-case ℓ∞ | Yes | 0.1 | -0.11 | -0.23 |
| Adaptive avg-case ℓ∞ | Yes | 0.2 | -0.16 | -0.29 |
| Adaptive avg-case ℓ∞ | Yes | 0.4 | -0.31 | -0.42 |
| Adaptive avg-case ℓ∞ | Yes | 0.8 | -0.40 | -0.47 |
| Adaptive worst-case ℓ∞ | Yes | 0.0005 | -0.20 | -0.23 |
| Adaptive worst-case ℓ∞ | Yes | 0.001 | -0.22 | -0.26 |
| Adaptive worst-case ℓ∞ | Yes | 0.002 | -0.29 | -0.34 |
| Adaptive worst-case ℓ∞ | Yes | 0.004 | -0.39 | -0.44 |
Table 4: A summary of correlation between sharpness and generalization for all experiments on CIFAR-10 for ViTs trained from scratch. We boldface entries with $|\tau| > 0.5$ suggesting a reasonably strong correlation. LogitNorm stands for logit normalization.

# A. Omitted Proofs

# A.1. Asymptotic Analysis of Adaptive Sharpness Measures

For the convenience of the reader, we briefly repeat the definitions of the adaptive sharpness measures. Let $L_{S}(\boldsymbol{w}) = \frac{1}{|S|}\sum_{(\boldsymbol{x},\boldsymbol{y})\in S}\ell_{\boldsymbol{x}\boldsymbol{y}}(\boldsymbol{w})$ be the loss on a set of training points $S$. For arbitrary weights $\boldsymbol{w}$ (i.e., not necessarily a minimum), the average-case and worst-case $m$-sharpness are defined as

$$
S_{avg,p}^{\rho}(\boldsymbol{w},\boldsymbol{c}) \triangleq \mathbb{E}_{\substack{\mathcal{S}\sim P_{m}\\ \boldsymbol{\delta}\sim \mathcal{N}(0,\rho^{2}\operatorname{diag}(\boldsymbol{c}^{2}))}} L_{\mathcal{S}}(\boldsymbol{w} + \boldsymbol{\delta}) - L_{\mathcal{S}}(\boldsymbol{w}), \qquad S_{max,p}^{\rho}(\boldsymbol{w},\boldsymbol{c}) \triangleq \mathbb{E}_{\mathcal{S}\sim P_{m}} \max_{\|\boldsymbol{\delta}\odot \boldsymbol{c}^{-1}\|_{p}\leq \rho} L_{\mathcal{S}}(\boldsymbol{w} + \boldsymbol{\delta}) - L_{\mathcal{S}}(\boldsymbol{w}),
$$

where $\odot$ and $^{-1}$ denote elementwise multiplication and inversion, and $P_{m}$ is the data distribution that returns $m$ training pairs $(\pmb{x},\pmb{y})$.

If $c = |\pmb{w}|$, then the perturbation set is $\left\| \delta \odot |\pmb{w}|^{-1} \right\|_p \leq \rho$.
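The average-case definition above can be estimated directly by Monte Carlo. A sketch on a toy quadratic loss $L(\pmb{w}) = \frac{1}{2}\pmb{w}^{\top} A \pmb{w}$ (a hypothetical stand-in for $L_{\mathcal{S}}$, with the expectation over $\mathcal{S}$ dropped), for which the Gaussian expectation has the closed form $\frac{\rho^2}{2}\mathrm{tr}(A \odot \pmb{c}\pmb{c}^{\top})$:

```python
import numpy as np

rng = np.random.default_rng(0)
d, rho = 10, 0.05
M = rng.standard_normal((d, d))
A = M @ M.T                               # PSD Hessian of the toy loss
w = rng.standard_normal(d)
c = np.abs(w)                             # the c = |w| choice from the text

def L(x):
    return 0.5 * x @ A @ x

# delta ~ N(0, rho^2 diag(c^2)): elementwise scaling of standard normals
deltas = rng.standard_normal((400_000, d)) * (rho * c)
W = w + deltas
s_avg = np.mean(0.5 * np.sum((W @ A) * W, axis=1)) - L(w)

closed_form = 0.5 * rho**2 * np.trace(A * np.outer(c, c))
assert np.isclose(s_avg, closed_form, rtol=0.1)   # MC estimate matches
```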
We first introduce a new variable $\gamma = \delta \odot |\pmb{w}|^{-1}$ and do a Taylor expansion around $\pmb{w}$:

$$
L_{\mathcal{S}}(\boldsymbol{w} + \boldsymbol{\delta}) = L_{\mathcal{S}}(\boldsymbol{w} + \boldsymbol{\gamma} \odot |\boldsymbol{w}|) = L_{\mathcal{S}}(\boldsymbol{w}) + \left\langle \nabla L_{\mathcal{S}}(\boldsymbol{w}), |\boldsymbol{w}| \odot \boldsymbol{\gamma} \right\rangle + \frac{1}{2} \left\langle \boldsymbol{\gamma} \odot |\boldsymbol{w}|, \nabla^{2} L_{\mathcal{S}}(\boldsymbol{w}) \, \boldsymbol{\gamma} \odot |\boldsymbol{w}| \right\rangle + O(\|\boldsymbol{\gamma}\|_{p}^{3}),
$$

where $\nabla^2 L_{\mathcal{S}}(\pmb{w})$ denotes the Hessian of $L_{\mathcal{S}}$ at $\pmb{w}$.

Proposition 1. Let $L_{S} \in C^{3}(\mathbb{R}^{s})$, let $S$ be a finite sample of training points $(x_{i},y_{i})_{i=1}^{n}$, and let $P_{m}$ denote the uniform distribution over subsamples of size $m \leq n$ from $S$. Then, for $p \geq 1$ and $q \in \mathbb{R}$ such that $\frac{1}{p} + \frac{1}{q} = 1$, it holds, as $\rho \to 0$,

$$
S_{max,p}^{\rho}(\boldsymbol{w}, |\boldsymbol{w}|) = \mathbb{E}_{\mathcal{S} \sim P_{m}} \begin{cases} \|\nabla L_{\mathcal{S}}(\boldsymbol{w}) \odot |\boldsymbol{w}|\|_{q} \, \rho + O(\rho^{2}) & \text{if } \nabla L_{\mathcal{S}}(\boldsymbol{w}) \odot |\boldsymbol{w}| \neq 0, \\ \frac{\rho^{2}}{2} \max\limits_{\gamma \neq 0} \frac{\left\langle \gamma, \left(\nabla^{2} L_{\mathcal{S}}(\boldsymbol{w}) \odot (|\boldsymbol{w}||\boldsymbol{w}|^{T})\right) \gamma \right\rangle}{\|\gamma\|_{p}^{2}} + O(\rho^{3}) & \text{if } \nabla L_{\mathcal{S}}(\boldsymbol{w}) \odot |\boldsymbol{w}| = 0 \text{ and } \nabla^{2} L_{\mathcal{S}}(\boldsymbol{w}) \odot (|\boldsymbol{w}||\boldsymbol{w}|^{T}) \text{ not negative definite}, \\ O(\rho^{3}) & \text{if } \nabla L_{\mathcal{S}}(\boldsymbol{w}) \odot |\boldsymbol{w}| = 0 \text{ and } \nabla^{2} L_{\mathcal{S}}(\boldsymbol{w}) \odot (|\boldsymbol{w}||\boldsymbol{w}|^{T}) \text{ negative definite}. \end{cases}
$$

Proof. We get

$$
\begin{array}{l} \max\limits_{\|\boldsymbol{\gamma}\|_{p} \leq \rho} L_{\mathcal{S}}(\boldsymbol{w} + \boldsymbol{\gamma} \odot |\boldsymbol{w}|) - L_{\mathcal{S}}(\boldsymbol{w}) = \max\limits_{\|\boldsymbol{\gamma}\|_{p} \leq \rho} \left\langle \nabla L_{\mathcal{S}}(\boldsymbol{w}), |\boldsymbol{w}| \odot \boldsymbol{\gamma} \right\rangle + \frac{1}{2} \left\langle \boldsymbol{\gamma} \odot |\boldsymbol{w}|, \nabla^{2} L_{\mathcal{S}}(\boldsymbol{w}) \, \boldsymbol{\gamma} \odot |\boldsymbol{w}| \right\rangle + O(\|\boldsymbol{\gamma}\|_{p}^{3}) \\ = \max\limits_{\|\boldsymbol{\gamma}\|_{p} \leq \rho} \left\langle \nabla L_{\mathcal{S}}(\boldsymbol{w}) \odot |\boldsymbol{w}|, \boldsymbol{\gamma} \right\rangle + \frac{1}{2} \left\langle \boldsymbol{\gamma}, \left(\nabla^{2} L_{\mathcal{S}}(\boldsymbol{w}) \odot (|\boldsymbol{w}||\boldsymbol{w}|^{T})\right) \boldsymbol{\gamma} \right\rangle + O(\|\boldsymbol{\gamma}\|_{p}^{3}). \end{array}
$$

If $\nabla L_{\mathcal{S}}(\boldsymbol{w}) \odot |\boldsymbol{w}| \neq 0$, then the first-order term dominates for $\rho$ sufficiently small and we get

$$
\max_{\|\boldsymbol{\gamma}\|_{p} \leq \rho} \langle \nabla L_{\mathcal{S}}(\boldsymbol{w}) \odot |\boldsymbol{w}|, \boldsymbol{\gamma} \rangle = \|\nabla L_{\mathcal{S}}(\boldsymbol{w}) \odot |\boldsymbol{w}|\|_{q} \max_{\|\boldsymbol{\gamma}\|_{p} \leq \rho} \|\boldsymbol{\gamma}\|_{p} = \rho \|\nabla L_{\mathcal{S}}(\boldsymbol{w}) \odot |\boldsymbol{w}|\|_{q}.
$$

Otherwise we have to consider

$$
\max_{\|\boldsymbol{\gamma}\|_{p} \leq \rho} \frac{1}{2} \left\langle \boldsymbol{\gamma}, \left(\nabla^{2} L_{\mathcal{S}}(\boldsymbol{w}) \odot (|\boldsymbol{w}||\boldsymbol{w}|^{T})\right) \boldsymbol{\gamma} \right\rangle.
$$

If $\nabla^2 L_{\mathcal{S}}(\pmb{w}) \odot (|\pmb{w}||\pmb{w}|^T)$ is negative definite, then the maximum is zero, attained at $\gamma = 0$. In the other case, we get

$$
\max_{\|\boldsymbol{\gamma}\|_{p} \leq \rho} \frac{1}{2} \left\langle \boldsymbol{\gamma}, \left(\nabla^{2} L_{\mathcal{S}}(\boldsymbol{w}) \odot (|\boldsymbol{w}||\boldsymbol{w}|^{T})\right) \boldsymbol{\gamma} \right\rangle = \frac{\rho^{2}}{2} \max_{\boldsymbol{\gamma} \neq 0} \frac{\left\langle \boldsymbol{\gamma}, \left(\nabla^{2} L_{\mathcal{S}}(\boldsymbol{w}) \odot (|\boldsymbol{w}||\boldsymbol{w}|^{T})\right) \boldsymbol{\gamma} \right\rangle}{\|\boldsymbol{\gamma}\|_{p}^{2}}.
$$

This almost finishes the proof. Finally, it holds

$$
\begin{array}{l} \lim\limits_{\rho \to 0} S_{max,p}^{\rho}(\boldsymbol{w}, |\boldsymbol{w}|) = \lim\limits_{\rho \to 0} \mathbb{E}_{\mathcal{S} \sim P_{m}}\left[ \max\limits_{\|\boldsymbol{\gamma}\|_{p} \leq \rho} L_{\mathcal{S}}(\boldsymbol{w} + \boldsymbol{\gamma} \odot |\boldsymbol{w}|) - L_{\mathcal{S}}(\boldsymbol{w}) \right] \\ = \mathbb{E}_{\mathcal{S} \sim P_{m}}\left[ \lim\limits_{\rho \to 0} \max\limits_{\|\boldsymbol{\gamma}\|_{p} \leq \rho} L_{\mathcal{S}}(\boldsymbol{w} + \boldsymbol{\gamma} \odot |\boldsymbol{w}|) - L_{\mathcal{S}}(\boldsymbol{w}) \right], \end{array}
$$

where for the last step we have used that $\mathbb{E}_{\mathcal{S}\sim P_m}$ is the expectation over all possible subsamples of size $m$ and thus reduces to a finite sum, so we can exchange the limit with the sum.
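The first-order step in the proof above rests on the dual-norm identity $\max_{\|\gamma\|_p \leq \rho}\langle g, \gamma\rangle = \rho\|g\|_q$, which can be sanity-checked numerically; a sketch for the self-dual case $p = q = 2$ with a made-up vector $g$:

```python
import numpy as np

rng = np.random.default_rng(0)
g = rng.standard_normal(12)          # stands in for grad L_S(w) ⊙ |w|
rho = 0.1

# The maximizer is gamma* = rho * g / ||g||_2 (Cauchy-Schwarz):
best = g @ (rho * g / np.linalg.norm(g))
assert np.isclose(best, rho * np.linalg.norm(g))

# No random feasible direction on the boundary of the ball does better:
for _ in range(1000):
    gamma = rng.standard_normal(12)
    gamma *= rho / np.linalg.norm(gamma)
    assert g @ gamma <= best + 1e-12
```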

We note that for $p = 2$ we have $q = 2$ and

$$
\max _ {\boldsymbol {\gamma} \neq 0} \frac {\left\langle \boldsymbol {\gamma} , \left(\nabla^ {2} L _ {\mathcal {S}} (\boldsymbol {w}) \odot (| \boldsymbol {w} | | \boldsymbol {w} | ^ {T})\right) \boldsymbol {\gamma} \right\rangle}{\| \boldsymbol {\gamma} \| _ {2} ^ {2}} = \lambda_ {\max } \left(\nabla^ {2} L _ {\mathcal {S}} (\boldsymbol {w}) \odot (| \boldsymbol {w} | | \boldsymbol {w} | ^ {T})\right),
$$

which is the result used in the main paper.

Proposition 2. Let $L_{\mathcal{S}} \in C^{3}(\mathbb{R}^{s})$, let $S$ be a finite sample of training points $(x_{i},y_{i})_{i = 1}^{n}$ and let $P_{m}$ denote the uniform distribution over subsamples of size $m \leq n$ from $S$. Then

$$
\lim _ {\rho \rightarrow 0} \frac {2}{\rho^ {2}} S _ {a v g} ^ {\rho} (\boldsymbol {w}, | \boldsymbol {w} |) = \mathbb {E} _ {\mathcal {S} \sim P _ {m}} \left[ \operatorname {t r} \left(\nabla^ {2} L _ {\mathcal {S}} (\boldsymbol {w}) \odot | \boldsymbol {w} | | \boldsymbol {w} | ^ {\top}\right)\right].
$$

Proof. Let us first consider a fixed subsample $\mathcal{S}$.
Writing $\boldsymbol{c} = |\boldsymbol{w}|$ and $\boldsymbol{\delta} = \boldsymbol{\gamma} \odot |\boldsymbol{w}|$ with $\boldsymbol{\gamma} \sim \mathcal{N}(0, \rho^{2} \boldsymbol{I})$, we consider

$$
\mathbb {E} _ {\boldsymbol {\delta} \sim \mathcal {N} (0, \rho^ {2} \operatorname {d i a g} (\boldsymbol {c} ^ {2}))} L _ {\mathcal {S}} (\boldsymbol {w} + \boldsymbol {\delta}) - L _ {\mathcal {S}} (\boldsymbol {w}).
$$

When plugging in the Taylor expansion of the loss, we see that

$$
\begin{array}{l} \mathbb {E} _ {\boldsymbol {\delta} \sim \mathcal {N} (0, \rho^ {2} \operatorname {d i a g} (\boldsymbol {c} ^ {2}))} L _ {\mathcal {S}} (\boldsymbol {w} + \boldsymbol {\delta}) - L _ {\mathcal {S}} (\boldsymbol {w}) \\ = \mathbb {E} _ {\boldsymbol {\gamma} \sim \mathcal {N} (0, \rho^ {2} \boldsymbol {I})} \left[ \langle \nabla L _ {\mathcal {S}} (\boldsymbol {w}), | \boldsymbol {w} | \odot \boldsymbol {\gamma} \rangle + \frac {1}{2} \left\langle \boldsymbol {\gamma} \odot | \boldsymbol {w} |, \nabla^ {2} L _ {\mathcal {S}} (\boldsymbol {w}) \, \boldsymbol {\gamma} \odot | \boldsymbol {w} | \right\rangle + O \left(\| \boldsymbol {\gamma} \| _ {2} ^ {3}\right) \right] \\ = \frac {1}{2} \mathbb {E} _ {\boldsymbol {\gamma} \sim \mathcal {N} (0, \rho^ {2} \boldsymbol {I})} \left[ \langle \boldsymbol {\gamma} \odot | \boldsymbol {w} |, \nabla^ {2} L _ {\mathcal {S}} (\boldsymbol {w}) \, \boldsymbol {\gamma} \odot | \boldsymbol {w} | \rangle \right] + O (\rho^ {3}) \\ = \frac {1}{2} \mathbb {E} _ {\boldsymbol {\gamma} \sim \mathcal {N} (0, \rho^ {2} \boldsymbol {I})} \left[ \left\langle \boldsymbol {\gamma}, \left(\nabla^ {2} L _ {\mathcal {S}} (\boldsymbol {w}) \odot | \boldsymbol {w} | | \boldsymbol {w} | ^ {T}\right) \boldsymbol {\gamma} \right\rangle \right] + O (\rho^ {3}) \\ = \frac {\rho^ {2}}{2} \operatorname {t r} \left(\nabla^ {2} L _ {\mathcal {S}} (\boldsymbol {w}) \odot | \boldsymbol {w} | | \boldsymbol {w} | ^ {\top}\right) + O \left(\rho^ {3}\right), \\ \end{array}
$$

where we use that the components of $\boldsymbol{\gamma}$ are independent with zero mean, so the first-order term vanishes, and in the second-order term only the diagonal entries remain, which are equal to the variance
$\rho^{2}$. Finally, we take the expectation with respect to $P_{m}$. As in the proof of Proposition 1, we can move the limit inside, as the expectation with respect to $P_{m}$ corresponds to a finite sum.

# A.2. Derivations for Diagonal Linear Networks

Hessian for diagonal linear networks. Denote $\boldsymbol{r} = \boldsymbol{X}(\boldsymbol{u} \odot \boldsymbol{v}) - \boldsymbol{y}, \boldsymbol{V} = \mathrm{diag}(\boldsymbol{v}), \boldsymbol{U} = \mathrm{diag}(\boldsymbol{u})$; then the Hessian of the loss $\nabla^2 L(\boldsymbol{w})$ for diagonal linear networks is given by:

$$
\nabla^{2} L (\boldsymbol {w}) = \left[ \begin{array}{c c} \boldsymbol {V} \boldsymbol {X} ^ {\top} \boldsymbol {X} \boldsymbol {V} & \boldsymbol {V} \boldsymbol {X} ^ {\top} \boldsymbol {X} \boldsymbol {U} + \operatorname {d i a g} \left(\boldsymbol {X} ^ {\top} \boldsymbol {r}\right) \\ \boldsymbol {V} \boldsymbol {X} ^ {\top} \boldsymbol {X} \boldsymbol {U} + \operatorname {d i a g} \left(\boldsymbol {X} ^ {\top} \boldsymbol {r}\right) & \boldsymbol {U} \boldsymbol {X} ^ {\top} \boldsymbol {X} \boldsymbol {U} \end{array} \right]. \tag {7}
$$

It is easy to verify that the data-dependent terms disappear due to the assumption of whitened data $\boldsymbol{X}^{\top}\boldsymbol{X} = \boldsymbol{I}$ and zero residuals $\boldsymbol{r}$ at a minimum. Thus, we arrive at a much simpler expression for the Hessian:

$$
\nabla^{2} L (\boldsymbol {w}) = \left[ \begin{array}{l l} \operatorname {d i a g} (\boldsymbol {v} \odot \boldsymbol {v}) & \operatorname {d i a g} (\boldsymbol {v} \odot \boldsymbol {u}) \\ \operatorname {d i a g} (\boldsymbol {v} \odot \boldsymbol {u}) & \operatorname {d i a g} (\boldsymbol {u} \odot \boldsymbol {u}) \end{array} \right]. \tag {8}
$$

Maximum eigenvalue for diagonal linear networks.
Since the Hessian has a simple block structure, we can symmetrically permute its rows and columns to obtain a block-diagonal matrix:

$$
\left[ \begin{array}{c c c c c c} v _ {1} ^ {2} & v _ {1} u _ {1} & 0 & \dots & 0 & 0 \\ v _ {1} u _ {1} & u _ {1} ^ {2} & 0 & \dots & 0 & 0 \\ 0 & 0 & \ddots & & & \vdots \\ \vdots & & & \ddots & 0 & 0 \\ 0 & 0 & \dots & 0 & v _ {d} ^ {2} & v _ {d} u _ {d} \\ 0 & 0 & \dots & 0 & v _ {d} u _ {d} & u _ {d} ^ {2} \end{array} \right] \tag {9}
$$

where the eigenvalues of each $2 \times 2$ block are $u_i^2 + v_i^2$ and $0$. Since the spectrum of a block-diagonal matrix is the union of the spectra of its blocks, $\lambda_{\max} = \max_{1 \leq i \leq d} \left(v_i^2 + u_i^2\right)$.

# B. Correlation Between Sharpness and Generalization Gap

Throughout the paper we focused on the correlation between sharpness and test error, but it is natural to ask whether the picture changes if we instead consider the correlation between sharpness and the generalization gap, i.e., the difference between test error and training error. We note that in the experiments on CIFAR-10 in Section 5.1, since we consider only models with $\leq 1\%$ training error and the test error is significantly larger than $1\%$, the behavior of generalization gap vs. sharpness has to be almost identical to that of test error vs. sharpness. For the other datasets, however, the training error is not necessarily close to 0; thus, in Figure 8 and Figure 9 we additionally plot the generalization gap vs. sharpness (side-by-side with the test error vs. sharpness for convenience) for the ImageNet experiments. We observe only small differences in the correlation values, which do not alter the conclusions about the relationship between sharpness and generalization.
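Since the rank correlation $\tau$ reported in the figures depends only on orderings, near-zero training errors leave the ranking of models by generalization gap essentially equal to their ranking by test error, and hence both correlations with sharpness coincide. A small self-contained illustration on made-up numbers (a pure-Python Kendall $\tau$, assuming no ties; the data below is invented for illustration, not taken from the paper):

```python
from itertools import combinations

def kendall_tau(x, y):
    """Kendall rank correlation (tau-a): concordant minus discordant pairs."""
    pairs = list(combinations(range(len(x)), 2))
    s = sum((x[i] - x[j]) * (y[i] - y[j]) > 0 for i, j in pairs) \
        - sum((x[i] - x[j]) * (y[i] - y[j]) < 0 for i, j in pairs)
    return s / len(pairs)

# Hypothetical models: sharpness, test error, and training errors capped at 1%.
sharpness = [0.2, 0.5, 0.9, 1.4, 2.0]
test_err  = [0.28, 0.24, 0.30, 0.22, 0.26]
train_err = [0.004, 0.009, 0.001, 0.007, 0.002]
gen_gap   = [t - tr for t, tr in zip(test_err, train_err)]

# Test errors are far above 1%, so ranking by gap equals ranking by test error,
# and the two rank correlations with sharpness are identical.
assert kendall_tau(sharpness, test_err) == kendall_tau(sharpness, gen_gap)
```

With larger training errors, as in some of the ImageNet settings above, the two rankings can diverge, which is exactly what Figures 8 and 9 probe.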
+ +![](images/63ba5a6a6041aa49b0a487583412108bba9553147a229cfc48ce47b5e344e540.jpg) + +![](images/26a190ffc65827b471a482bb624b91b059c1ee7da69bdb764c174c6112f2ccee.jpg) +With logit normalization + +![](images/b0ceb8925ea75aa5ef81335a57190dd80dc6467e7cce3d3f051faf0cab12e86b.jpg) + +![](images/c776a42c45da1ba3ff6c4cbad8c02d9f3c33774d9985139f24c01bf2a17ec262.jpg) + +![](images/a3973c1b635e55db4381aae5743709a417d24a6db5d98f13073bb09a0c86057f.jpg) + +![](images/de34f3c0ec10aaa3aaa538a4a5207da30ff1328d1c2caf88ace149c276b18f9c.jpg) + +![](images/b21486c6fb44cbe513a81c51269f9cdbcfa7d56349af48eecafbdc20d6c933ee.jpg) + +![](images/00c14744d842915ef54c54740891a2543d5b4fd97482382f6018c5e08dd7afb4.jpg) + +![](images/7b48e76d2f2396d4b05470e7b964fed164981e0a3079edee8657b704dbd3e49e.jpg) + +![](images/4b574cd703ceb6379e0d8ec78d6993f31ffb1340d081e2dd9e8dc7d938320bea.jpg) +Without logit normalization + +![](images/46b50a6001f612a613461d7f870a8d79eebb7b887fa52844479506fd65d04a68.jpg) + +![](images/658d45985a02e732b2e2caad2390db91952830e1878ddc9e39596f4ad9ec55df.jpg) + +![](images/2d453fbb0e5b8c4024e0890f150f98c8af204cf344b28f73c2b6a67628116b91.jpg) +Figure 8: ViT-B/16 trained from scratch on ImageNet-1k. We show side-by-side the test error and generalization gap (Gen. Gap) for 56 models from Steiner et al. (2021) on ImageNet and its OOD variants vs. worst-case $\ell_{\infty}$ sharpness with (top) or without (bottom) normalization at $\rho = 0.002$ . The color indicates models trained with stochastic depth (sd) and dropout (do), markers and their size indicate the strength of weight decay (wd) and augmentations (aug), and $\tau$ indicates the rank correlation coefficient. 
+ +![](images/4d308d512b90df2a0512b7e07554a6374a12bc5c0fb4f4d3a369592661b9e3a2.jpg) + +![](images/5ea87de6bbbbc964556661f0bea831c2f4eb7e8caad28199160b84f21a3544b8.jpg) + +![](images/fa2d5f00fee371941d0830e4bf2e7f40913273f6ff4f66766d067646fecc853f.jpg) + +![](images/0d924c33c62da4baaa8cbd322fc5743031cfdd0d8a0d19c1cc2d712961e9e9be.jpg) + +![](images/4ae9a40e77214f939a70cfd5b020fda79fd63538c3f5cd2a6b20a3a583a2809e.jpg) + +![](images/91f11a61cf272149ab26b0ce61c7d1dc7af57f8de9bf020fec50c2dd293b70cf.jpg) +With logit normalization + +![](images/40d459dc443f7608d6a0ddef65c2a3e5fb54d3a0284618c473b503e722150dbf.jpg) + +![](images/1d8c6af0988b0549439dfb15bc5a2f64a3f9b71a94740a83d03ce69450335dad.jpg) + +![](images/dff970e78999bb5ac696e8c900846227998ef1d172e58c2ab5abcff3297bb784.jpg) + +![](images/99d3dc78efd75a4150ad85f1e113fce3eec508dad0eb2abef1c9ff398686585b.jpg) + +![](images/1e2e7fbe4fa156704da0f4919a2c42a50424d24bd5448243bfd41ebb60d04e51.jpg) + +![](images/fc0466d375d416179fc37669e192e415da5ecb44bc5f22156196334027ac8e3c.jpg) + +![](images/3d98dfb4d95103311032d16b77878a682fd71fa75262d7d4e09a273820df8ed3.jpg) + +![](images/30e71a831ca8d1ab165bcbfd3ee521184a00d396cb8e54a47f275a80b9639e3f.jpg) + +![](images/878f60d63f2f13679a90e9002fe2e9b42b97cd75645f616220033f221f608f0a.jpg) +Figure 9: Fine-tuning CLIP ViT-B/32 on ImageNet-1k. We show side-by-side the test error and generalization gap (Gen. gap) for 72 models from Wortsman et al. (2022) on ImageNet and its OOD variants vs. worst-case $\ell_{\infty}$ sharpness with (top) or without (bottom) normalization at $\rho = 0.002$ . Darker color indicates larger learning rate used for fine-tuning. 

![](images/30c8c05600b5fa674ae3444eb47af5d587fc612b06f0420e245605a5debebb73.jpg)
Without logit normalization

![](images/8502e2d5008cd287e967b286ec67fa17fb327b29607344529df8ad3c86f7d856.jpg)

![](images/7cef7f4b78eb272c31c7f5a7e40e2e34f5ffa0c1e3b6761acd3a6131f14c9609.jpg)

![](images/e66750f91c2601c2459da943c9b27444f950bf61e90ad025b9e97e4566bf25e9.jpg)

![](images/cfa1875f81e50dc1831be6f292cd21972a830c94e2667a5e08ba74e7daf97826.jpg)

![](images/2b101925a42361439c12904b6fdb6d24328bc18259cb1c5eb4e377595e05d424.jpg)

![](images/2327e2aab1e6f8c23f3138873644c78c4387e2a41f6ca6066fcfb1e4a57fe1d9.jpg)

![](images/6b38643dc65b3bc8cb42f14e3ab9da65eb9729ef81c620c3bf9abb7d80792175.jpg)

# C. ImageNet-1k Models Trained from Scratch from Steiner et al. (2021): Extra Details and Figures

Experimental details. As explained in the main paper, the ViT-B/16-224 weights were trained on ImageNet-1k for 300 epochs with different hyperparameter settings, and subsequently fine-tuned on the same dataset for 20,000 steps with two different learning rates (0.01 and 0.03). The pretraining hyperparameters include 7 augmentation types (none, light0, light1, medium0, medium1, strong0, strong1), which we group into (none, light, medium, strong) in the plots. Weight decay was either 0.1 or 0.03, and dropout and stochastic depth were either both set to 0 or both set to 0.1. We evaluated the resulting 56 configurations. The model weights can be obtained from https://github.com/google-research/vision_transformer.

Sharpness evaluation. For sharpness evaluation we use 2048 data points from the training set, split into 8 batches: we compute sharpness on each of them and report the average. For worst-case sharpness we run Auto-PGD for 20 steps (for each batch) with random uniform initialization in the feasible set, while for average-case sharpness we sample 100 different weight perturbations for every batch. We use the same sharpness evaluation for all ImageNet-1k and MNLI models.
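To make the worst-case protocol concrete, here is a minimal sketch of projected gradient ascent over the adaptive perturbation $\boldsymbol{\gamma}$ on a toy quadratic loss. This is plain sign-gradient PGD rather than the Auto-PGD scheme actually used for the experiments, and the toy loss and all names are our own illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
d, rho, n_steps = 8, 0.002, 20

B = rng.normal(size=(d, d))
A = B @ B.T                                   # PSD Hessian of the toy quadratic loss
w = rng.normal(size=d)
loss = lambda u: 0.5 * u @ A @ u

# Worst-case adaptive sharpness: maximize L(w + gamma ⊙ |w|) - L(w)
# over the box ||gamma||_inf <= rho, starting from a random feasible point.
gamma = rng.uniform(-rho, rho, size=d)
step = rho / 4
for _ in range(n_steps):
    grad = np.abs(w) * (A @ (w + gamma * np.abs(w)))          # gradient w.r.t. gamma
    gamma = np.clip(gamma + step * np.sign(grad), -rho, rho)  # ascent step + projection

sharpness = loss(w + gamma * np.abs(w)) - loss(w)
assert np.all(np.abs(gamma) <= rho) and sharpness > 0
```

For this $\ell_\infty$ ball the iterates saturate at $\boldsymbol{\gamma} \approx \rho \, \mathrm{sign}(\nabla L(\boldsymbol{w}) \odot |\boldsymbol{w}|)$, matching the first-order analysis of Proposition 1; Auto-PGD adds momentum and automatic step-size halving on top of this basic loop.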
For convenience we restate Auto-PGD in Algorithm 1: it follows the original version presented in Croce & Hein (2020) while using the network weights $\boldsymbol{w}$ as optimization variables instead of the input image components. In Alg. 1 we denote by $f$ the target objective function (the cross-entropy loss on the batch of images in our experiments), by $S$ the feasible set of perturbations and by $P_{S}$ the projection onto it. Also, the step size $\eta$, the momentum parameter $\alpha$ and the set of checkpoints $W$ are fixed hyperparameters (we keep the original values), and the two conditions in Line 13 can be found in Croce & Hein (2020).

Algorithm 1 Auto-PGD
1: Input: objective function $f$, perturbation set $S$, $\boldsymbol{w}^{(0)}$, $\eta$, $N_{\mathrm{iter}}$, $W = \{w_0, \dots, w_n\}$
2: Output: $\boldsymbol{w}_{\mathrm{max}}$, $f_{\mathrm{max}}$
3: $\boldsymbol{w}^{(1)} \gets P_S(\boldsymbol{w}^{(0)} + \eta \nabla f(\boldsymbol{w}^{(0)}))$
4: $f_{\mathrm{max}} \gets \max \{f(\boldsymbol{w}^{(0)}), f(\boldsymbol{w}^{(1)})\}$
5: $\boldsymbol{w}_{\mathrm{max}} \gets \boldsymbol{w}^{(0)}$ if $f_{\mathrm{max}} = f(\boldsymbol{w}^{(0)})$ else $\boldsymbol{w}_{\mathrm{max}} \gets \boldsymbol{w}^{(1)}$
6: for $k = 1$ to $N_{\mathrm{iter}} - 1$ do
7: $\boldsymbol{z}^{(k+1)} \gets P_S(\boldsymbol{w}^{(k)} + \eta \nabla f(\boldsymbol{w}^{(k)}))$
8: $\boldsymbol{w}^{(k+1)} \gets P_S(\boldsymbol{w}^{(k)} + \alpha (\boldsymbol{z}^{(k+1)} - \boldsymbol{w}^{(k)}) + (1 - \alpha)(\boldsymbol{w}^{(k)} - \boldsymbol{w}^{(k-1)}))$
9: if $f(\boldsymbol{w}^{(k+1)}) > f_{\mathrm{max}}$ then
10: $\boldsymbol{w}_{\mathrm{max}} \gets \boldsymbol{w}^{(k+1)}$ and $f_{\mathrm{max}} \gets f(\boldsymbol{w}^{(k+1)})$
11: end if
12: if $k \in W$ then
13: if Condition 1 or Condition 2 then
14: $\eta \gets \eta / 2$ and $\boldsymbol{w}^{(k+1)} \gets \boldsymbol{w}_{\mathrm{max}}$
15: end if
16: end if
17: end for

Extra figures.
For each sharpness definition we show, for three values of $\rho$, its correlation with the test error on ImageNet (in-distribution) and on the various distribution shifts. In particular, we use worst-case $\ell_{\infty}$ adaptive sharpness with (Fig. 10) and without (Fig. 11) logit normalization, and average-case adaptive sharpness with (Fig. 12) and without (Fig. 13) logit normalization. For all figures the color shows stochastic depth / dropout, the marker size corresponds to the augmentation strength, and the marker type to weight decay. In addition to the OOD datasets from the main paper, we here report results for ImageNet-V2 (Recht et al., 2019) and ObjectNet (Barbu et al., 2019). ImageNet-V2 is a new test set for ImageNet models sampled from the same image distribution as the existing validation set: hence, the performance of classifiers on it is highly correlated with that on the ImageNet validation set, and ImageNet-V2 cannot be considered a distribution shift in the same sense as the other datasets. In general, we observe that the sharpness variants are not predictive of the performance on ImageNet and the OOD datasets, typically only separating models by stochastic depth / dropout but not ranking them according to their generalization properties, and often even yielding a negative correlation with the OOD test error. The only case where low sharpness indicates low test error is logit-normalized average-case adaptive sharpness on ImageNet and ImageNet-V2. For the remaining OOD datasets, however, there are always models with low sharpness and larger test error.
+ +![](images/cc4987b411fb3e3d5022e37dde4b44ffbd3b27e8ab8be6422f5f72b808049a14.jpg) +Worst-case $\ell_{\infty}$ adaptive sharpness with logit normalization + +![](images/9a83c9ab31c45ba29a4177e56ee0480a0b1d147f68cb696130db307867941c81.jpg) + +![](images/82f13cf3e6d454bab632338123035f4093925b00ce2e950638a9d3dd5c11d9d6.jpg) + +![](images/9695a0ef8131d76b9744dd69683a4919aba78c627f0abadc7e7e8515fb4dcd9c.jpg) + +![](images/6891bd547ffa94486bc1187b4d8fc9c76fc857200de9bc5c1a4e729ca95b9a60.jpg) + +![](images/4c840e3bf0f32ecaba24f2e663b9a73533361f3c56b9b3416d111854a5e2d3c0.jpg) + +![](images/53d6571a6d14c41bf70c08da57bae98851c74ff234064980fff5a050f6869bc8.jpg) + +![](images/e18525acb1f2faef0c732d4d537889b03ecd41703d944342d5e98f17aeead451.jpg) + +![](images/b1349def49393dea2fe5d3d86663dc4cf19457b51e230a8b9d6a86d449a04d58.jpg) + +![](images/d88a17dd5c3dba984caf1050cbd892a34a4466a7cd9159cf866350b150741bd5.jpg) + +![](images/e9fc9b654df18664406c70d79d79d7c5dd427127f8dc3f12690974114c38e19c.jpg) + +![](images/7cc6a8a06fd56dd5081085dfe9f1165864a04532ce23a992e372798d544fc3a6.jpg) + +![](images/fb33d4b1aa1bfe2ddf138698a0e3b8b02c0a938eee428a3074fbb8dea603ae32.jpg) + +![](images/9b14f0f34ad59606eb6ab1ff7d6bc4d16dab1408ea379813d0ae0b75ab74ee7f.jpg) + +![](images/451435561b716132ec7a7089e1c0f4067315b16b7d76777b441df9868c84f582.jpg) + +![](images/fc0ccff47486e56b29e4d65a06744173090d7f332507141474b3a0dd5a56f8c6.jpg) +Figure 10: Correlation of sharpness with generalization on ImageNet for different $\rho$ and for different distribution shifts. 
+ +![](images/edc01f0367c0a6747fff0f6a1af1bc0c358acdfa230326ca54f895626fc739be.jpg) + +![](images/f2a2a47c94cda6a6301d5f5719eed9af8589aee1118aed9a2e2d75ad44cb227f.jpg) + +![](images/47cb569d8d206886bc6b057557fda42dd714c14945dd5d8d017536741dc72a47.jpg) +Worst-case $\ell_{\infty}$ adaptive sharpness without logit normalization + +![](images/840228a2c9081b538c79bf5f8c678eec9e6898e53ff166c28731fa4d9b643ab4.jpg) + +![](images/f49f624c862b12b84642b9e380b5251aa1997a571fb032a942f7382894d03f0d.jpg) + +![](images/221d2376b49b2f49cb1c5f74a0a618791214c565f16ec70feb3a352a63b29f48.jpg) + +![](images/3505d7d6e5a1f2f0d80cbe131f1ae9aaf635056b5005c7b2143afbce6529dc93.jpg) + +![](images/37de43c00e033aae82eafabef99f77bbdb25e365c275f7d97d84053b2b8f8224.jpg) + +![](images/56099a3b1d9c7eae53c2b5c020ebbecd01e4d548c64fe97d7db1327cd25901e2.jpg) + +![](images/b111b1809ac2675fb4e3f94a4546109b8886f99e3ff0c6e986109704288e3cdc.jpg) + +![](images/cba1b8556c1bec384c22a0f401246e6e6eea50d1de4b08180ce1f0153a764467.jpg) + +![](images/fa1f8aa1f4d5c27a1e22f6b4c561164f0816a45aa98b884bb7f6ccd33bd93aab.jpg) + +![](images/2ed490f428cd7faeb0bc001b786dd515feb4944934ade45d72912cdc4c132337.jpg) + +![](images/1ebcb3448b1f493d53827edc13901ef7f6378cc43d5eeb7bee200dc9559b1882.jpg) + +![](images/f4476423792005973c33725ed0af28a1a4e2b9e7fa416ac46464a82dde4e6709.jpg) + +![](images/94da4880a896bdee51a446e7cce2e3280a0f523ab2e4f1d4ea3b8a247a46bfe8.jpg) + +![](images/1f3a74829f3df6a27d67171290ccadd88af19d959f37e1392bdafd2001750984.jpg) + +![](images/5eca53b769d6fe5effa33ad25a26803e04ff15ecaa84afc8631510bf5cfc904c.jpg) +Figure 11: Correlation of sharpness with generalization on ImageNet for different $\rho$ and for different distribution shifts. 
+ +![](images/828fa94867930083b85d496a9ac5be04b229b1f558f0eb0b9bef5e51dccfe25d.jpg) + +![](images/4d232290dcb8fbaa69c04d97b64d037fc63c665257206b97e065ed4b51175cc2.jpg) + +![](images/8aedab9cc069234125433e184d2bfa96d7b25376ea34a44b64c9566db390d8ce.jpg) +Average-case adaptive sharpness with logit normalization + +![](images/c7b94bfc906cad0266e9c240d1a45e1a96c04faae1894ae4791cb11d42957e7f.jpg) + +![](images/63bdf0b850ce39a62b4d02379e739d2ced8dcdef65751d7ed5e04574310304c9.jpg) + +![](images/95f145018ebe78eab9a8f8e9321a68d1544a74701851055ae8c98b40bf913bd1.jpg) + +![](images/b41a83f033df6c9beb99824c6c60dca33c34d71d7153c8bd8768f1ab8fa737ef.jpg) + +![](images/9e6dc69150d8f043729921debbf124f0a967c37ea5fd40433b5271b97536bc1f.jpg) + +![](images/f57908eed127d7373574ff05085843659b7935f890fc695fc22b802a4f3442d4.jpg) + +![](images/2c96f9a8fdf517b9e9b5ea23ee7eea26a953d8a0b70e661e9daac8d02c20aad1.jpg) + +![](images/1eaada519243dcbb8f2646f5d6f7e40dcfd6cfdbaeec14749ab3267f34d88827.jpg) + +![](images/a84990cf58c8be8522d6592735041ae588375a086952566819a101b588f322a4.jpg) + +![](images/7bcfc6f1a777cd67f0ad3d9f42c44b1955a6e92c9f4daad64a0709b0b61a4533.jpg) + +![](images/5e1cc0545b5ed26a0d4335a946ac4e3e15a3d108b3767c6f8a4c0a33bd67ee6b.jpg) + +![](images/3cccd90e39f3d76c7401faa5abe2d633bae48f918b7e57dd2b7ca5ca52ddf799.jpg) + +![](images/872360478759357cc5f2df4220058cc453ed6ca54e0828604d8c0f720e5c69c7.jpg) + +![](images/f6707f4582fe261c89131aba18d82e255a95c3225da5c83f82b5d24a174ce708.jpg) + +![](images/75c98493a8d117e6eabc1393ccd21094dd384585b6f93f026d93e54b052cde49.jpg) +Figure 12: Correlation of sharpness with generalization on ImageNet for different $\rho$ and for different distribution shifts. 
+ +![](images/a3cccbc6f3115aeae58219e2ae40460f2fd4c98c54ececd6cda42ee6552a7562.jpg) + +![](images/5d64221d347211636e0faeddcfda2e5ca15283a6421db27edd4a05132c108413.jpg) + +![](images/0c737623e9d51a7c9cab2f1850bd7d34e0691f3e432aa34201f9cdff4ddf915e.jpg) +Average-case adaptive sharpness without logit normalization + +![](images/fbace621c48dbd48ed3909e85cb679b2bd8faadc410d0e60def8039b1e3cb1da.jpg) + +![](images/a0c950269c21d55ee515b49f2dcc2c3a59a77114a2d86ff87d2ce320f4e17b74.jpg) + +![](images/8046bf687110cb3d1665f6b512fd364611b32bbe183ffb89739fd52eb2cfd402.jpg) + +![](images/6f888612c4e212eed24ce9d99607bb9e2d9cd35059daadd0b4be4a5cf3ba20e0.jpg) + +![](images/7a0c87c03973a0c146a580841e89cfd1e613062aae5dead9aad4a1b2ba7663b7.jpg) + +![](images/6bed6ab598622c93f35905b529dab9b119114a30377f5b38ef02f012f50e16e6.jpg) + +![](images/e3303ce8b5b96b098686aadf09fe4492efd48a99b436d3a02be2cd97bb90f4be.jpg) + +![](images/4a2783345614559ffaa0dbe9e3c672bebe6c2a579ef4f1e655e3c7f1ac256ca7.jpg) + +![](images/3da97d0bb98724796e32f2ee2757fb3d731b00c7b8008c64c2c8510cf4a01ea4.jpg) + +![](images/9ebcc924b330eedc90327d83e7adae667c367506030b5aa7e67dac0e7f6bcb2c.jpg) + +![](images/204295befb01d5df5523bd0afc4a94620e67966d50386353eaf87a233e74d9d0.jpg) + +![](images/130cf15f30cef175a060101c02781eba73d2b9c880df29a6a74fe9d2be83fcf7.jpg) + +![](images/35ba0c8d00c5b3237bad5d685755507d9e73af269d1a12a0afff2a7294e4ea98.jpg) + +![](images/d040bcbfbf5fac917a5d9bf4849bcdcea3cde606bfe0df28859f13d34f6cfed6.jpg) + +![](images/acf384a55985c8045ea9ca70ff00f8f3411d731a4ea2899944f49333b990d6d5.jpg) +Figure 13: Correlation of sharpness with generalization on ImageNet for different $\rho$ and for different distribution shifts. + +![](images/23fc1ec2d5c4de5f0a319c82c3fd7886248ea109897d1602c934b784df7dba3f.jpg) + +![](images/870000bccc67cd987c5cf4b6640534ef41d8a5e7d4deea05bdd93957d55ffeab.jpg) + +# D. Fine-tuning of ImageNet-1k Models Pretrained on ImageNet-21k from Steiner et al. 
(2021): Extra Figures and Details

Experimental details. All hyperparameter settings are identical to those explained in Appendix C; only the pretraining dataset is ImageNet-21k instead of ImageNet-1k. Since two of the models showed close to $100\%$ test error, we did not evaluate them, resulting in 54 instead of 56 models.

Extra figures. As in Appendix C, we show each sharpness definition for three values of $\rho$ and its correlation with the test error on ImageNet (in-distribution) and on the various distribution shifts. The observations are very similar to those for ImageNet-1k pretraining: sharpness variants are not predictive of the performance on ImageNet and the distribution-shift datasets, typically only separating models by stochastic depth / dropout, and often even yielding a negative correlation with the OOD test error.

![](images/33e0dae2d85608f05cdbe033f284b6447fd781ca36070a3aa6632b8a06701b82.jpg)
Worst-case $\ell_{\infty}$ adaptive sharpness with logit normalization

![](images/abde8be8ca0956f475e3657ce7a4aeb573df21ef63701e9fd36c36755de5e26c.jpg)

![](images/42943e03d627bc92d86a734a2dfad90185ba836ea8d5c8abd44aa3667321d916.jpg)

![](images/fa704cec9fd65ce3ff763d3f060d508dd657a14c237bbafde3367b53c79fcf1d.jpg)

![](images/92a7680fb06bc6adfbaf84e95a1f0b7054d3586d00908adfeaac36ccd81ed88e.jpg)

![](images/0d3d78c16f53ecd4b80226262cd2b9e71c374725f84d6d50f83c9b23bc90954a.jpg)

![](images/bb11b73a48c9eb23ad08b2c04c37b9ce6bc6db0c45144e1434eef2b76faecea9.jpg)

![](images/020f1f2e8a21ab14ed9f0eceb877fe1dd3e6e77759902d5def5b55306720b96f.jpg)

![](images/35f4a94b21ca4ece899fea51d2e0f1ac4b7c1bb95d2919fc26715f26fc18e42e.jpg)

![](images/f33fe53c750aa098b3fe10606d625a3b3fa6ed76f34c31535c5f71749d2162e0.jpg)

![](images/67fc621c93c6dbe51b2a0e28bf8cd14615da61850803a9802fe32849b961b8eb.jpg)

![](images/bcbe8b71f8de61f2eb4f277318f1ded3d0d1efb1fe59ac9833de17a7323e4b34.jpg)
+![](images/2542c87857c1d7299651ed61566c50435b22b477653d2c94c686b95d3f367282.jpg) + +![](images/e12ef75800da5a274018e549c0ea10a5028ebb454734af633ca25c1c38777409.jpg) + +![](images/9b7712ba44d57a1f2070d1cb1660e23771f70b8d173d17c9c94dd14c2c3e53eb.jpg) + +![](images/bd95c461efbd0d66266e70245b70f9207c7cb4b64ccb20e5289b62bb8f9911fe.jpg) +Figure 14: Correlation of sharpness with generalization on ImageNet for different $\rho$ and for different distribution shifts. + +![](images/c8fc934374dcae63c1a9645fdd82427865b37aed8ed22bcda69e2661b3b3ff72.jpg) + +![](images/60b4739a18e6fa273784e9c741c2ce39abf64839b4900e76d95afe1659571afe.jpg) + +![](images/5192975ed43ee9c83da5633b22ee2902f652326ac63c80379597f58db88c6080.jpg) +Worst-case $\ell_{\infty}$ adaptive sharpness without logit normalization + +![](images/ca3e0761e22a48927eabfca45fb73c8e285e9f355183f5e972c390f589d70fa5.jpg) + +![](images/6d7967c33d04ee4a77e64a149c1782090e2e12c973221722bddef66b90c26eb6.jpg) + +![](images/637af14878852eaa5e863d4bdef4b2ee22871403f1033889770f10585df8ae0e.jpg) + +![](images/425a4d0b9f4a82d1c56afd70e7dacb525b9533e813c02cefe27af20409bd41bb.jpg) + +![](images/7511159893e4516863f2ab4037ce493cb411a4d6dd23819c76bbf484eba5a9fd.jpg) + +![](images/4da0ab83e0a84d6b22005333fc9de4b5762ec9d3ce3bb00b64dcd11a937f71a4.jpg) + +![](images/1b8a74457cb29834b83addcd1355a982d86606a390e2b060e72dc93c714d3600.jpg) + +![](images/701ce52d57c7c574db9c3bf5ff9b3676a356a433dd86f5345234a40d81c7b08d.jpg) + +![](images/6dc0aa12e5989c8b0b4f1deebeda6472597d150d0bf5e7fd9bedc808494afd2f.jpg) + +![](images/d649bd5a559b0f61d2c03a1291d140a58e147a73fbd84fdb675d3467a5d1f47b.jpg) + +![](images/968ce83fa7ceff328ee4f37b3e89b552c5946126ba42f008dd99d56df0b8010b.jpg) + +![](images/881ea02841b050c2ddddda4273c7fbf7fcea9dad8781fa88bbd71b2cadff155c.jpg) + +![](images/09d70edfda57c6cdbbbfa1b5a24d02c01447af7d26a5045a9e408eadd32cbdab.jpg) + +![](images/74ae0a64118a9833855e98d5266ec9afe0768af27ed4e1863ed74c33899cf634.jpg) + 
+![](images/ea62758022662f732b29463df0738526d98a956e3d69be667b463d5466060702.jpg) +Figure 15: Correlation of sharpness with generalization on ImageNet for different $\rho$ and for different distribution shifts. + +![](images/6668cdec41d48c02ccda75d3606f611810af613c05c8f81016b97f00465537ca.jpg) + +![](images/ad691300c63969f0b7971f6f3b4c32c3923201e635ffbdacff8cf4337954fa83.jpg) + +![](images/76fda63350612929b12e9fc51bd1915d670577aa2c302c7c771d6310a9794f90.jpg) +Average-case adaptive sharpness with logit normalization + +![](images/de5a07d41bd53d97864975ea3dbedf9901db0e789f6e92cd110e930cd2877e33.jpg) + +![](images/a39008c276c7218603a7e3c41d7151ec19ea0cecd72ec90b7d712b0022ed04ab.jpg) + +![](images/8ba5e53a5ae4802f0adc0e9f1a3f29be41d5577f7d1a723ebac1047b587aa449.jpg) + +![](images/88b94c838a62209fa8c86f74e47631300dd9126072068309bf4cc9627b3bc69d.jpg) + +![](images/e21427c83b4eef6adc7be7da69cc3c216ef639e00b084430a4eebffb0bfab2eb.jpg) + +![](images/13320afff4a1176ec2e62335e9a4dc448039ea1064945cc6646fe5e2dee9dd4f.jpg) + +![](images/a935f84379eb4bbb5c334b1425ff2a000543d1a6ee56a375fe775e287d316a5f.jpg) + +![](images/c6f1893da435c47228c8b7e4042fa9d3e660358ca45598a99e0a0de9b8fdf3ec.jpg) + +![](images/70dfc77fb3a482a3d0b2779009e9ac2b493d54b6c11a00a7229a27c309e14bd1.jpg) + +![](images/01fadb505b1b0fc0c334ba37fa7ebeab5c35d1a3f43e43e20d55d4eca7214094.jpg) + +![](images/8cd5ad7981cbed89fabb5f6555ff304099b5db57eba6332e7a1de2da66865fae.jpg) + +![](images/7136608e1ffeefbfa1fedf90e9fa6f7625d7de09a4be4201f4862bf888e0ef80.jpg) + +![](images/0b9f1a9d7723f0741a8bebe1963da24a8110bc498ecbe532da33dcce612e43a2.jpg) + +![](images/2bf696ebc41f021443fe2f64ab4fe91374c4c577e7112ec91f2590903ecad01b.jpg) + +![](images/857a8895b77af6a0b805999665f9b895cf1143b5aa787307d94807218db0f34b.jpg) +Figure 16: Correlation of sharpness with generalization on ImageNet for different $\rho$ and for different distribution shifts. 
+ +![](images/bba16d7fac11a98cb29caedbccc53d3044f3860c5e486ce8e094e09763474dd6.jpg) + +![](images/6e6a09cd6adb03e95a9c1676e8b254a8cf06ce11ab8d8749b4a4460523bd10cc.jpg) + +![](images/677c62e80a61efbf2d18e03ea5834cf8a25a562cb08762f8a4f75213fe26208f.jpg) +Average-case adaptive sharpness without logit normalization + +![](images/c9fe20bb6364d12c5a5c6e18b70224e81ec7cfecb6c0862c9145ca3fdb95a31a.jpg) + +![](images/5c1d4ce2c58ecc2b21dbf950de5a598552a1fa6046b9ab018412025fc5fa2591.jpg) + +![](images/350d0f6356912c9a1dee11b8685ac7e753104f42dbbde2d28b86e6dddeb9e032.jpg) + +![](images/5dc58503f2f16864420d1a7fa24f6ce83928113745629127daf8642cfa860ea3.jpg) + +![](images/2b488526281cb289eca38a82be227d1ecce5bf93cfc6813639204967c2bd4b86.jpg) + +![](images/1fc6d5b61c98741735b6d92b76b7f28c686693b451c291efda70d2af8c339097.jpg) + +![](images/22dd2dac65d024b1cbc8ab1fbd8b3248dd41dbf890856d75b9016a361906c295.jpg) + +![](images/0dd1ed23e24c53ed77e9d9c6c37e7d8894496e7b975ac5240ba25d58b3adce26.jpg) + +![](images/d595b68e1a22709fde3223f723b0ff66aa53245e4de7cee1cc81b44f106d0b5b.jpg) + +![](images/37c02b17b81a894a7da8a894a41f06c6da88cc8f9221cb8a9c2471ecc85bac0c.jpg) + +![](images/35bdaab862398b280b93d12e204d9af2976ece50c392d52787901298a1fa291b.jpg) + +![](images/91d7380965a5f8a6ac307a59b90db61bdd70ef076c07c20482e55d4aacfa72d2.jpg) + +![](images/bf864ab572dda71f684d22633b8ee1f3cb5bd9da78058b049050cf8445cbf46a.jpg) + +![](images/195f79f81ea51fc6b64b07178e4c8f53e1fbaa18a0e2028266bfd16381535d7e.jpg) + +![](images/b5fd7981f00ef2602e04defc4e553b720204e93e9d7702c398f1c206064fe55e.jpg) +Figure 17: Correlation of sharpness with generalization on ImageNet for different $\rho$ and for different distribution shifts. + +![](images/1773ef4f662e5a4ab195fb34aa0013e5d6cd8e7e4023931c3ee0c33437393f42.jpg) + +![](images/e156cc976639e182fbe452b2164e36149d881e68fa72c4013285bd6b0c68d24c.jpg) + +# E. ImageNet Models both Pretrained on ImageNet-1k and ImageNet-21k from Steiner et al. 
(2021)

For completeness, we here show for two sharpness definitions the models pretrained on ImageNet-21k and ImageNet-1k together. We find that the better-generalizing models pretrained on ImageNet-21k have significantly higher worst-case sharpness, and roughly equal or higher logit-normalized average-case adaptive sharpness, underlining that the models' generalization properties resulting from the different pretraining datasets are not captured by sharpness.

![](images/e402a9d79c3561857f0baedebbfdf91a0b09c4a86592fb9170858f9ad2b59dd8.jpg)
Worst-case $\ell_{\infty}$ adaptive sharpness without logit normalization

![](images/65caa8d507e559d79f4799c01263cb7700382746b3cfa1d1831e714132c0d41f.jpg)

![](images/f3d95dc345f0de3025ed788ee6f62857fc11a9c06ce940a6e6de050b3012b2d8.jpg)

![](images/5a3942caa79c3ed38b67265c8bfc8d22c0383175589cfe5d60ac851b17fda1a9.jpg)

![](images/84ae25a6a7e67ddc0d3404a7c37f7d08b02601c3b241c9437172d6b49678e7ae.jpg)

![](images/b1a42c7577958c930d24aac777d230bcb2e5b5b2ceb29775def1c87baf3ed0d4.jpg)

![](images/cce394ebe1bbeb9f597f247250b415d711cd6ceb091afa14c3f28a8ebf11c5dc.jpg)

![](images/501da4c93e80782e6d872c229538bc3c594fb776958db46b5d30877406865f7b.jpg)

![](images/5b6b0c5fc9f967a56efedbadeeb7e35512db99421d354b7e2655695c9b19b9c8.jpg)

![](images/a8d0359d62e06f878fe7103c997604d1468a875395f1ee50834c967d617d8b2a.jpg)

![](images/3540bc525bbef0009c56c51d9428d341cd19e82168113eb8092ef1431c1f6760.jpg)

![](images/c2c2e489833f24c3aa88aca6ab752a69973396a92038d97c49d44bb8eee09d23.jpg)

![](images/e640253a7f16b5d46fe11eb416d4e32c724b33b0d3efcae9bf22a272256ea771.jpg)

![](images/9b7e72aa3ff1345aa5a36f3bd5b1a378cf1b47d3bf270bb0cabf94db103ef2be.jpg)

![](images/1aabc3274b819c01ce19faa69ceb091c9a53eb98398a8d53a18078426acf5cfb.jpg)

![](images/6f91cca93a655cb4c8321c62ec948738975f378f100d0a5ae5c940ba57004f11.jpg)
Figure 18: Correlation of sharpness with generalization on ImageNet-1k for different $\rho$ and for different distribution shifts.
+ +![](images/1f1e44a0adafc28c8540f943104ce921a4350d0bc833b66db46dcbd08ddfd9a4.jpg) + +![](images/948ad757b06d5d91f9486f5e581c20ad1aa12978c5571c9909b847bb3f1d10f0.jpg) + +![](images/3ed42811af8162d33c2a0347b8c4f59e6162dde23d4d12bbdb31d64a8d646eab.jpg) +Average-case adaptive sharpness with logit normalization + +![](images/4a7c9876e54a63d56f0d43cef535af12db41ff1cfbce03f59028699c18726c8e.jpg) + +![](images/40865fa5c316210d7f2bceeec0a9be7835470be1203a67b4e58d80ba0e21dc74.jpg) + +![](images/a96e2b4d910044abe407c3cf26eca4b2c054341e9ccacf356c70489691f691b8.jpg) + +![](images/4d5098670e24cd63158f3a66c9bd3e03843e8b80ce200dddaba85677caeaf711.jpg) + +![](images/06f30193466a381d3c6688fabd8f527b4a4e056a1c2f4fb15324911afe23e90c.jpg) + +![](images/b0adf62e0d03be0e66dacec02377fae353bc918430443cac87327020056ec041.jpg) + +![](images/b0515c47b4ceaa13a7bb9e51882e35c9418779adbabf4270cc3ce14a9e9a03a5.jpg) + +![](images/fc2b5cf874dc014aadc9215a34beb8bfa0c860a300a077d0378384001a40613f.jpg) + +![](images/cb0f7010742a09903f46fe4ca8f1a55372fa9d9b33ab035ac6326f70f18e23c0.jpg) + +![](images/42e35e67baae2c7f2a0d52696923e072c28cb8801dcbe96bf3fce8135ea8171f.jpg) + +![](images/92cc9c4276ae73d8236ad1e09614c686918ca7b535409712c74a10e650192d5f.jpg) + +![](images/d3d5d20f0d547949db23b90c37a809b3c86ed754c3fb7575d8a969deff9e24b6.jpg) + +![](images/477b3231a891c1676c74c7a237efa349ebe5029bc9a54cff498a496bcf7bfbcb.jpg) + +![](images/5d2312e9cfac061028201ed0ccb414c034ded59740f3a77b1a491305dd51b49d.jpg) + +![](images/565312fcbbad37cf4b93937218a88097f3a95e50aaf8db06f218d26db31223bd.jpg) +Figure 19: Correlation of sharpness with generalization on ImageNet-1k for different $\rho$ and for different distribution shifts. + +![](images/f0968c7f9841c57f29710e26256abacefde90b45f5dadda45ff5705dcc96c085.jpg) + +![](images/94f7180c22440bdc5208cb9ba61944bca8ffe6df429e2c3cb20ead6df2bdb543.jpg) + +# F. 
Fine-tuning CLIP Models on ImageNet: Extra Details and Figures + +![](images/cf1bde24ca70b155932b88eab9dbbabbec622376f3b76eaf23970f4da499a7cf.jpg) +With logit normalization + +![](images/d356bec846b391bab41c5a80873d8f1c61d18a26f2ab8cfc562e04f1c3bf413b.jpg) +Figure 20: Fine-tuning CLIP ViT-B/32 on ImageNet-1k. Sharpness negatively correlates with the size of the learning rate used for fine-tuning, both with (left) and without (right) logit normalization. For worst-case sharpness $\rho = 0.002$ is used; for average-case sharpness, $\rho = 0.1$ . + +![](images/cd700ebf9b4721827406d9235e96c1b28095dce9d16e783803257a4a780c6164.jpg) +Without logit normalization + +![](images/077b35595034048332ad06a57f6e9b3a627f7a2a393eb8a22578e64abe25bcb9.jpg) + +Experimental details. We take advantage of the models fine-tuned by Wortsman et al. (2022a) from a pre-trained CLIP ViT-B/32 with randomly sampled training hyperparameters (see the random search setup in Wortsman et al. (2022a)), for which evaluations on the ImageNet validation set and on the distribution shifts are provided. + +Extra figures. For each sharpness definition, we show for three values of $\rho$ the correlation between test error on ImageNet (in-distribution) and on the various distribution shifts. In particular, we use worst-case $\ell_{\infty}$ adaptive sharpness with (Fig. 21) and without (Fig. 22) logit normalization, and average-case adaptive sharpness with (Fig. 23) and without (Fig. 24) logit normalization. For all figures, colors represent the size of the learning rate used for fine-tuning (a darker color means a larger learning rate). In addition to the datasets shown in Sec. 4, we here report the results for ImageNet-V2 (Recht et al., 2019) and ObjectNet (Barbu et al., 2019).
ImageNet-V2 is a new test set for ImageNet models sampled from the same image distribution as the existing validation set: hence, the performance of classifiers on it is highly correlated with that on the ImageNet validation set, and ImageNet-V2 cannot be considered a distribution shift in the same sense as the other datasets. In general, we observe that the sharpness variants are not predictive of the performance on ImageNet and ImageNet-V2. Moreover, in most cases there is a negative correlation with test error in the presence of distribution shifts. We hypothesize that this is related to the influence of the learning rate on sharpness (see Fig. 20), i.e., lower values lead to sharper models. + +![](images/d74c1cbb3513a499359295c9e47494a64a392f688896c41c7a56e45e98328789.jpg) +Worst-case $\ell_{\infty}$ adaptive sharpness with logit normalization + +![](images/e7ed895e0e37d79294bab68e6787e538631807efdbaebf385d0c3f7119a18b05.jpg) + +![](images/ebcc4c53b4185d82335ec760375a6d82adfcfb87c84fee4b9508509ead19d8f9.jpg) + +![](images/53d9119e0a90fb166de87d1b39a69099566e540dcf6325659c26fbe8248ddb8d.jpg) + +![](images/5fd4741f2e92f96d871dd48ace1ebd4ccea850fed8999e376835437cdf36e928.jpg) + +![](images/852e11c02f82efbc3c4b39f08e6e37b18c9cbc06d8c7015e2e1aa80b9f1905a3.jpg) + +![](images/37bc537c43f34c9bb6f724663a1d8229134f5eee85b76854fce859476cff576d.jpg) + +![](images/510efa1a7913efacd217805ea904ad9c18d29f93a2ab91f18d621676f979d3d9.jpg) + +![](images/22e74c11a5b8fd8d99b307fa0a04a03c45476c668ed9f9b6795721238a81c1d3.jpg) + +![](images/55bd268bb7db68cba61b6190db4c92dd781bb399ceb9824e101771e54d0c8e5f.jpg) + +![](images/e55f2e5da4aae237b5088a4467090dacbe3dba18683cc420ed8b9ae7b30b96c4.jpg) + +![](images/e86af94da11902db22717a19c5d5d80a347153d8b98cd6744aa2d5b6c44cbd4c.jpg) + +![](images/3baa011fa9ddad37e2a99c643d430549eaec21a5e7a23923403a15bcfdfe0290.jpg) + +![](images/e50a38a776f9540bd0ec56ab0109f5a535cf6c7730b36b14966af44157f4c967.jpg) + 
+![](images/81364b991da819da8dd9939a4bb5d3200af73916a621ea101afd96624a89f474.jpg) + +![](images/c2f8c0b1684ee30a494e1b6d37c650b19971b7ec0ea75055ab13fe34bb20794a.jpg) +Figure 21: Correlation of sharpness with varying $\rho$ with generalization on ImageNet for different distribution shifts. + +![](images/cb0d4d980048991ff06517bf8bc17db6f9b4ba06b6ce095791d5be1c1378a48a.jpg) + +![](images/c93df36e84008ee68e8d54443d879c87a999f9acfec0a0b73eab4e1837bf4323.jpg) + +![](images/0f1f00010c56ce554070804c825c3825f022dd3fffc149792914437060a3222e.jpg) +Worst-case $\ell_{\infty}$ adaptive sharpness without logit normalization + +![](images/4e2f51dafd10c262faec31f32d9099e69e0e649a0b2acfb48f727cce73758b2e.jpg) + +![](images/42090569b8b62be97f2886e1b3d951ea78faa885c2b29f62b2d6bf067a9165ef.jpg) + +![](images/aa054d9fde24278d281be8f11be962fce86ef8eb088f23f958e2e63678d33814.jpg) + +![](images/702a942fe97f5e92cb8351d3db0b88915cf16b11cb395d8a3d667756b5dcfb40.jpg) + +![](images/7ed89bb2438b2ab14bbbf5b1b68c2d374f5249165888f1171fff55a33940b7bc.jpg) + +![](images/a9d235c91d4628977c2e70999d7b769b2c944d79c67cded6df9285431da68c7c.jpg) + +![](images/a5b2e6d5896fe5cd7e6a7a03c30ecf182a54bb17cb08df94e67506ce758912c5.jpg) + +![](images/c2bf85d6c6eb4a2b6a4e0fd79dd1aa43e96f93882aedcaeda0200caeeb6da790.jpg) + +![](images/e33ad66f940d901b845cdf152f7091a9a8638352f721dc64eb618ae070b44ad2.jpg) + +![](images/ba54a036a6a23963b35c32848a89ca68f5bafa4a30aed1b0cdf43a46e3af72d2.jpg) + +![](images/d3a0ccbfab49c62c47e43eefdc7db595f8212c1e4e4663e7a22d213767551be2.jpg) + +![](images/479661789ec9d45e0358c29c58761eab636950de929f1643f15e23de6f7e7b4b.jpg) + +![](images/6a93c9778422f306774064217d052a554a1448977dc1302f2a76cccc116936df.jpg) + +![](images/074133d44b81e11e13996db76e98b959fec0a991e79fe4181c0ba4a55b8635ed.jpg) + +![](images/8a69550d328655bd86f3e5f53dbf52bf4bdb3238be144d75146647e40b7be8a0.jpg) +Figure 22: Correlation of sharpness with varying $\rho$ with generalization on ImageNet for different distribution 
shifts. + +![](images/c0f13bf9beef080de95f49d0d1884910f05ef4d7bb6f2fb36f808a4d1abb0d12.jpg) + +![](images/ca3a675899f8c3c6897c17f46a2eab539fd06cd52369594856386adc56b20b5d.jpg) + +![](images/23e0c3b5d9b21b7458bbc72544862931da77ebc46c9bdd5d0e5a3f8e6f1e003f.jpg) +Average-case adaptive sharpness with logit normalization + +![](images/7fa840fa2eb2039f86887c419839f2c0d59697439c66425035a434d15e71a1f2.jpg) + +![](images/e33fc7f3d1940a78bbcc709173c29c6a65d5514de060a4d79483aa87ab8b15c7.jpg) + +![](images/03680b97f7ea96d175fb1d8badbaddd500b965a0f20d7663ea8e601fc136ac45.jpg) + +![](images/9a04e9137e5129e625f06211f8874bdfd55028369c4d23ab5cbe197414119129.jpg) + +![](images/fca59e43dcc1414c36b299940e5c01ea3ef39f11b07ebd292168aa8c36376c4f.jpg) + +![](images/99a9005ab4e4c10eada808474f4914c02773c4c0f09af8411b393b172be0e740.jpg) + +![](images/5c7ffe511b9e3adbfc375149ad4d9121a084a48c385c6a05f678bfbc318eb504.jpg) + +![](images/81bc34ae8ff732f221161d2bef11fba8ee54e4c30cd1b2fba0f066f2d4df54c0.jpg) + +![](images/bbb5a7b46c8f79fd906e75f497c8b0a5e9eb5ad9ab10d574f87e6be69496dcaf.jpg) + +![](images/f7684fea5ddc10305ca14693634c5b9a84bed0d7dbfdee9721a7ae6e17a44e80.jpg) + +![](images/01b6c8798f307d71932975e8d7ab5def6bdfc9afe60e9f7c060c22c61c9f2bf9.jpg) + +![](images/1d0f8c70df13f164e9d9ebb94e172de18f5ab6207fd5276e32ee9d3a23b46ae0.jpg) + +![](images/c34755c19982bddada13d5e685669658b55537c36b2ca980fb7cc2e7f6e65d1a.jpg) + +![](images/8673ab6e89eedae0a35861b6bfae02af1b7ecac4f41f443aa922b5e8fef3618d.jpg) + +![](images/ef73d6d84960bab136c586ebe1fc77308501f33f6c8c46ffe9680c88756db8f8.jpg) +Figure 23: Correlation of sharpness with varying $\rho$ with generalization on ImageNet for different distribution shifts. 
+ +![](images/ba05e40e06d7cca50d8aa589d6423d7079484d8b1058d40019cdd91f0125bc39.jpg) + +![](images/501c87ae20f36756789b68d519323d349110fcb82d5be2acb4d0f169714f87eb.jpg) + +![](images/b6eed0f39ce243735c4b013acb35a32b02b3950aa6c1823c9281afb67a038002.jpg) +Average-case adaptive sharpness without logit normalization + +![](images/89df5f27e45d1a2d4b5d9045f46ec703e46e034191ff3edc595841aca58745b5.jpg) + +![](images/c16cb607204e88964b7af25f28f19e4d257b73b919130a231e0118dc5d24d958.jpg) + +![](images/7e6fe67f30dc9e725d0b720eaf2f63a6299bb1179cbf482c9f5849aa34ac72e8.jpg) + +![](images/4adc481b2646cd7f3fd07b813b8ee866e4d97dfda7bcc1b5eedfca5994f10dcd.jpg) + +![](images/16ffb8d44a5cfbb8defe6c1ad0813f9f091448179bda2ab97071d737ff74e964.jpg) + +![](images/948c3f76f0fa7a7fc079f2ac06bdea4c24e64a23d2a821ef164f163185108a00.jpg) + +![](images/8dfac6d9fd211c6076646d862a5f5db746b3c57faf7d0fc203487b54ace2c3aa.jpg) + +![](images/5e165e41d42f9cb4996a0414561fe0983e50218d436a45c5674acbd12cda6494.jpg) + +![](images/891f81aa4d9fe810035e586bcafae64beedfab8246f9f7fee5d24e2ca70e58a6.jpg) + +![](images/5b17219ad28bb531be31da9cf174cc7029dea973ecb54b8772f40db4c0de3693.jpg) + +![](images/a5f0f1f61e125b4f2fcf03041892fb35fc15e4d0a6e9938d9fa91aa8d9f0a17a.jpg) + +![](images/a7900a793e6133edc575c9ada499f13b6071048c9282a37903c960b448774989.jpg) + +![](images/0d07110ad4efd859847317e2d1d56b9d8b8b607c8e64b9c2664f96d3ce016f7e.jpg) + +![](images/05c7720420634ba2328d4a1edec97057a647ab191708feb8bb428144b1ca1287.jpg) + +![](images/09ff7cadfd0a7622c384a38d815ebbd685477c0595463d12e5711c5b84bee61f.jpg) +Figure 24: Correlation of sharpness with varying $\rho$ with generalization on ImageNet for different distribution shifts. + +![](images/b500af027cdba59f623a4a6799ecb94f2695968e8dbbdf3312351eb84c38ae32.jpg) + +![](images/aa0be9b6704f5354d83897459a1f24af4ab23acd2b27f5041e21f3f7c54cc586.jpg) + +# G. Fine-tuning on MNLI: Extra Details and Figures + +Experimental details. The models from McCoy et al. 
(2020) we use are BERT models fine-tuned from the bert-base-uncased initialization weights. The in-distribution test error is computed on the MNLI matched development set, which is a classification task with three classes. As out-of-distribution datasets we use the three categories of HANS considered "Inconsistent with heuristic" (see McCoy et al. (2020)): Lexical overlap, on which the classifiers show the largest variance in test error, Subsequence, and Constituent. For these, there are only two possible classes. + +Extra figures. For each sharpness definition, we show for three values of $\rho$ the correlation between test error on MNLI (in-distribution) and on various HANS subsets (out-of-distribution). In particular, we use worst-case $\ell_{\infty}$ adaptive sharpness with (Fig. 25) and without (Fig. 26) logit normalization, and average-case adaptive sharpness with (Fig. 27) and without (Fig. 28) logit normalization. For all figures, darker colors indicate models with higher test error on MNLI. In general, none of the sharpness variants we consider is predictive of the generalization performance of the models, and in some cases (e.g. Fig. 28) there is even a weak negative correlation between sharpness and test error on out-of-distribution tasks. + +![](images/11371eb08858b14c5c3bd8336c092a071d875f1e8e1b28dffa5ed0541c0a4a81.jpg) +Worst-case $\ell_{\infty}$ adaptive sharpness with logit normalization +Figure 25: Correlation of sharpness with varying $\rho$ with generalization on MNLI for different distribution shifts. + +![](images/1623a93c746179f2145375216dea55c290c6522c92afedf247e584356bd0d581.jpg) +Worst-case $\ell_{\infty}$ adaptive sharpness without logit normalization +Figure 26: Correlation of sharpness with varying $\rho$ with generalization on MNLI for different distribution shifts.
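The "correlation" reported in these figures is a scalar summary of the sharpness-vs.-test-error scatter. Whether a rank or a linear correlation is used is not stated in this excerpt, so purely as an illustration we sketch Kendall's $\tau$ over hypothetical per-model values (all numbers and the helper name below are ours):

```python
# Illustrative Kendall's tau between per-model sharpness and test error.
# The statistic choice and the values are assumptions, not taken from the paper.
from itertools import combinations

def kendall_tau(xs, ys):
    """Kendall rank correlation (no tie handling) between two sequences."""
    concordant = discordant = 0
    for (x1, y1), (x2, y2) in combinations(zip(xs, ys), 2):
        s = (x1 - x2) * (y1 - y2)
        if s > 0:
            concordant += 1
        elif s < 0:
            discordant += 1
    return (concordant - discordant) / (concordant + discordant)

# Hypothetical per-model sharpness and test-error values:
sharpness = [0.12, 0.35, 0.29, 0.51, 0.44]
test_error = [0.21, 0.25, 0.26, 0.31, 0.28]
print(kendall_tau(sharpness, test_error))  # -> 0.8
```

A value near $+1$ would mean sharper models consistently generalize worse; the negative correlations discussed above correspond to values below 0.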
+ +![](images/dc8264934593e99ebe21183f5f4403572a562acf44b4a4b69cdd35ac844cd4ee.jpg) +Average-case adaptive sharpness with logit normalization +Figure 27: Correlation of sharpness with varying $\rho$ with generalization on MNLI for different distribution shifts. + +![](images/a10fa03f4cb2d461b97c85dd6d25ead222f5a5639eb50867f44fe6f928cd547e.jpg) +Average-case adaptive sharpness without logit normalization +Figure 28: Correlation of sharpness with varying $\rho$ with generalization on MNLI for different distribution shifts. + +# H. Training from Scratch on CIFAR-10: Extra Details and Figures + +Extra details. We train 200 ResNet-18 and 200 ViT models for 200 epochs using SGD with momentum and a linearly decreasing learning rate after a linear warm-up over the first $40\%$ of iterations. We found that adding such a warm-up to SGD allows us to bridge the gap between SGD and Adam training for ViTs. We use the SimpleViT architecture from the vit-pytorch library, which is a modification of the standard ViT (Dosovitskiy et al., 2021) with a fixed positional embedding and global average pooling instead of the CLS embedding. We use a ViT model with $4 \times 4$ patches, a depth of 6 blocks, 16 heads, embedding size 512, and MLP dimension 1024. We sample the learning rate from the log-uniform distribution over the range $[0.005, 0.5]$ for ViTs and $[0.05, 5.0]$ for ResNets. We sample the SAM radius (Foret et al., 2021) uniformly from $\rho \in \{0, 0.05, 0.1\}$ , apply mixup ( $\alpha = 0.5$ ) (Zhang et al., 2018) with probability $50\%$ , and apply standard augmentations combined with RandAugment (with parameters $N = 2$ , $M = 14$ ) (Cubuk et al., 2020) with probability $50\%$ . We use $2 \times$ repeated augmentations to reduce the augmentation variance from RandAugment (Fort et al., 2021). For CIFAR-10 models, we only show sharpness for well-trained models that have $\leq 1\%$ training error. We note that this selection criterion leaves more ResNets than ViTs on the figures below.
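The hyperparameter sampling and learning-rate schedule described above can be sketched as follows; the exact warm-up/decay endpoints, the RNG, and the function names are our assumptions, not the authors' code:

```python
# Sketch of the randomized training setup (assumed details are ours):
# log-uniform LR, uniform SAM rho, 50% mixup, 50% RandAugment, and a
# linear warm-up over the first 40% of iterations followed by linear decay.
import math
import random

def sample_config(arch, rng):
    """One random hyperparameter draw for a ViT or ResNet run."""
    lo, hi = (0.005, 0.5) if arch == "vit" else (0.05, 5.0)
    return {
        "lr": math.exp(rng.uniform(math.log(lo), math.log(hi))),
        "sam_rho": rng.choice([0.0, 0.05, 0.1]),
        "mixup": rng.random() < 0.5,        # mixup with alpha = 0.5
        "randaugment": rng.random() < 0.5,  # RandAugment with N = 2, M = 14
    }

def lr_at(step, total_steps, base_lr, warmup_frac=0.4):
    """Linear warm-up for the first 40% of iterations, then linear decay."""
    warmup = int(warmup_frac * total_steps)
    if step < warmup:
        return base_lr * (step + 1) / warmup
    return base_lr * (total_steps - step) / (total_steps - warmup)

cfg = sample_config("vit", random.Random(0))
assert 0.005 <= cfg["lr"] <= 0.5
assert lr_at(0, 1000, cfg["lr"]) <= cfg["lr"]  # warm-up starts below the peak
```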
+ +Sharpness evaluation. For sharpness evaluation we use 1024 data points from the training set, split into 8 batches: we compute sharpness on each of them and report the average. For worst-case sharpness we run Auto-PGD for 20 steps on each batch, with random initialization in the feasible set (uniform for $\ell_{\infty}$ sharpness, Gaussian for $\ell_{2}$ ). For average-case sharpness, we sample 100 different weight perturbations for every batch. + +Extra figures. We present additional figures in Sec. H.1 on the role of the data used to evaluate sharpness, in Sec. H.2 on the role of the number of iterations in Auto-PGD to estimate sharpness, in Sec. H.3 on the role of $m$ in $m$ -sharpness, and in Sec. H.4 on the influence of different sharpness definitions and radii on correlation with generalization. + +# H.1. The Role of Data Used for Sharpness Evaluation + +We emphasize that for all experiments, we evaluate sharpness on the original training set (CIFAR-10, ImageNet or MNLI) without augmentations. However, one may wonder how sensitive this choice is compared to evaluation on the augmented training set, particularly in the presence of strong data augmentations such as RandAugment (Cubuk et al., 2020) used to train some models. To test this, in Fig. 29, we compare adaptive average-case sharpness computed on the original and on the augmented training set of CIFAR-10 for ResNets-18. We find that the overall trend is nearly the same for small $\rho$ and differs more strongly for larger $\rho$ , where the overall correlation with generalization becomes significantly negative $(-0.74$ for the largest $\rho)$ on augmented data. In addition, a side-by-side comparison of sharpness on original vs. augmented training data shows that the relationship between them does not deviate too much from a linear trend, especially when considering separately models trained with and without augmentations.
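The average-case protocol above (batch-wise average of the expected loss increase under random weight perturbations, 100 samples per batch) can be sketched as follows, with a toy loss standing in for the actual network; the names and the toy setup are ours:

```python
# Minimal sketch of the average-case sharpness estimate: for each batch,
# the mean loss increase under random weight perturbations, then averaged
# over batches. The model/loss here are toy stand-ins, not the paper's nets.
import numpy as np

def avg_case_sharpness(loss_fn, w, batches, rho, n_samples=100, seed=0):
    rng = np.random.default_rng(seed)
    gaps = []
    for batch in batches:
        clean = loss_fn(w, batch)
        perturbed = 0.0
        for _ in range(n_samples):
            # uniform perturbation in the ell_inf ball of radius rho,
            # scaled elementwise by |w| ("adaptive" sharpness)
            delta = rng.uniform(-rho, rho, size=w.shape) * np.abs(w)
            perturbed += loss_fn(w + delta, batch)
        gaps.append(perturbed / n_samples - clean)
    return float(np.mean(gaps))  # average over the batches

# Toy quadratic loss with its minimum exactly at w0, so any perturbation
# increases the loss and the estimate is positive:
w0 = np.ones(4)
toy_loss = lambda w, _batch: float(np.sum((w - 1.0) ** 2))
s = avg_case_sharpness(toy_loss, w0, batches=[None] * 8, rho=0.1)
assert s > 0.0
```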
+ +Adaptive average-case $\ell_{\infty}$ (uniform perturbations) sharpness (normalized) for ResNets-18 on original training data + +![](images/057ec28182b5abaddb35271ec1aa276f5116fd295fe95acd65b25182a424d9d0.jpg) + +![](images/8be96895e10ecb1912f92fda39ab753b8f864d807a4b6dd46b82ad7452a5f42b.jpg) + +![](images/a71e3aa8907ff71471bc895443c4ac6232582694ffda95f267810a59cc3d0a9d.jpg) + +![](images/9729becc055ef72fc76e9982d32c4a8837b1ae21bd4b755d79a42c5422c9eb29.jpg) + +Adaptive average-case $\ell_{\infty}$ (uniform perturbations) sharpness (normalized) for ResNets-18 on augmented training + +![](images/a5ffa5f1c744f0239e10137680e45397cc0f347548db51786af90a2b76d0cc20.jpg) + +![](images/de60dc9b564068a9b7fbf6b78c4ccdb55441205f820e8f776324ad0c6c27f818.jpg) + +![](images/c6fe607c7dbd31b1fcca553a4735e354b94da4b0767a565f1c18250c86b9cdf1.jpg) + +![](images/50513ab28a1cde77d24a3eeb2e68ead74dd3e7a14da2d96a57bf2e3319f8fdaa.jpg) +Figure 29: Adaptive average-case $\ell_{\infty}$ (uniform perturbations) sharpness (normalized) for ResNets-18 on original vs. augmented CIFAR-10 training data for ResNets-18 for different radii $\rho$ . + +![](images/28c2e825247bc345eb72bc312b74766687847028f3613de66da075deb656ad81.jpg) +A side-by-side comparison of sharpness on original vs. augmented training data + +![](images/34fc8a129e5c26b3fac2002a72b941c233fa0ac7d52ba65c2d02ae101def4870.jpg) + +![](images/fecc28bac1834d19c6d4a757e5b8a2a11df12e9e62e65dbd7d8aa3ba64ed711f.jpg) + +# H.2. The Role of the Number of Iterations in Auto-PGD + +Here we aim to justify the choice of 20 iterations of Auto-PGD in our experiments. In Fig. 30, we present results for adaptive worst-case $\ell_{\infty}$ sharpness (normalized) for ResNets-18 on CIFAR-10 for 20, 50, 100, and 200 iterations. We can see that the sharpness values are not visibly affected by increasing the number of iterations and the overall trend stays exactly the same. 
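As a simplified illustration of the worst-case maximization (the experiments use Auto-PGD; here plain projected sign-gradient ascent on a toy quadratic, with our own step size and analytic gradient):

```python
# Simplified stand-in for the Auto-PGD maximization used above: projected
# sign-gradient ascent in an ell_inf ball of radius rho. Step size, toy
# loss, and analytic gradient are our assumptions.
import numpy as np

def worst_case_sharpness(loss_fn, grad_fn, w, rho, steps=20):
    step_size = rho / 4
    delta = np.zeros_like(w)
    for _ in range(steps):
        g = grad_fn(w + delta)
        # ascend the loss, then project back into the ell_inf ball
        delta = np.clip(delta + step_size * np.sign(g), -rho, rho)
    return loss_fn(w + delta) - loss_fn(w)

# Toy quadratic: the maximizer pushes delta to a corner of the ball.
toy_loss = lambda w: float(np.sum(w ** 2))
toy_grad = lambda w: 2.0 * w
s = worst_case_sharpness(toy_loss, toy_grad, np.ones(3), rho=0.1)
# delta converges to +0.1 elementwise: 3 * 1.1**2 - 3 = 0.63
assert abs(s - 0.63) < 1e-9
```

Note that on this toy problem the iterate reaches the boundary after a few steps, consistent with the observation above that 20 iterations already suffice.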
+ +![](images/150c1746f37c72878552c4de34c41458eb9a6d761601a1ed9e2a69a8fead2d8b.jpg) +Adaptive worst-case $\ell_{\infty}$ sharpness (normalized) for ResNets-18 (20 iterations) + +![](images/7703dc8bb9153335a12b872b438ac574295efc2391c9134feba201e5d688716a.jpg) +Adaptive worst-case $\ell_{\infty}$ sharpness (normalized) for ResNets-18 (50 iterations) + +![](images/60f17679facbeb76c644e65b9d033a0b20edcd4f3ba14a9ffc9b76c178af735a.jpg) +Adaptive worst-case $\ell_{\infty}$ sharpness (normalized) for ResNets-18 (100 iterations) + +![](images/a5118b3a55909d97c1b1484b73d50c08ef1a95e06befbcbc98936a0c5a6d5e59.jpg) +Adaptive worst-case $\ell_{\infty}$ sharpness (normalized) for ResNets-18 (200 iterations) +Figure 30: Adaptive worst-case $\ell_{\infty}$ sharpness (normalized) for different numbers of iterations in Auto-PGD vs. test error on CIFAR-10 for ResNets-18 for different radii $\rho$ . + +# H.3. The Role of m in m-Sharpness + +Foret et al. (2021) suggested that a lower $m$ in $m$ -sharpness, i.e., the batch size used for maximizing sharpness, can lead to a higher correlation with generalization in some settings. We note that we already use a small $m$ for all our experiments ( $m = 128$ on CIFAR-10 and $m = 256$ on ImageNet and MNLI), but here we additionally check whether even smaller $m$ changes the trend. Figs. 31 and 32 show adaptive worst-case $\ell_{\infty}$ sharpness (normalized) for ResNets-18 and ViTs on CIFAR-10 for $m \in \{16, 32, 64, 128\}$ . We can see that different values of $m$ only slightly affect the sharpness values, and the overall trend stays unaffected.
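The effect of $m$ can be made concrete with a toy model (our construction, not the paper's): each point $x$ has loss increase $x \cdot d$ under a scalar perturbation $d \in [-\rho, \rho]$ , the worst case is computed per batch of size $m$ , and the results are averaged. Smaller $m$ lets each batch pick its own perturbation, so the value can only grow as $m$ shrinks:

```python
# Toy illustration of m-sharpness: per-batch worst-case maximization,
# averaged over batches of size m. All modeling choices here are ours.
import numpy as np

def m_sharpness(data, m, rho):
    """Average over batches of size m of the per-batch worst case."""
    batches = [data[i:i + m] for i in range(0, len(data), m)]
    # per batch: max over d in [-rho, rho] of mean(x * d) = rho * |mean(x)|
    return float(np.mean([rho * abs(np.mean(b)) for b in batches]))

data = np.array([-1.0, -1.0, 1.0, 1.0])
print(m_sharpness(data, m=4, rho=0.1))  # one batch:    0.0
print(m_sharpness(data, m=2, rho=0.1))  # two batches:  0.1
```

The gap between the two values in this toy example is extreme by design; the observation above is that empirically the dependence on $m$ is mild for the models studied here.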
+ +![](images/c780c5e49923c84af439c922abd8ef0480d7ebf2a20c08e62858d868990762a5.jpg) +Adaptive worst-case $\ell_{\infty}$ sharpness (normalized) for ResNets-18 ( $m = 16$ ) + +![](images/bc7f5d3b813c15f4fdf503ff47501a94a389a2ba492b299f04a1f21d4fef2c6b.jpg) +Adaptive worst-case $\ell_{\infty}$ sharpness (normalized) for ResNets-18 ( $m = 32$ ) + +![](images/91be71a2727bc20fa2e06c4cfa72d51c4371cfd97ba7cf2be98b823742725c49.jpg) +Adaptive worst-case $\ell_{\infty}$ sharpness (normalized) for ResNets-18 ( $m = 64$ ) + +![](images/c468178641d9b9fe068371532e7171b438984a5f97039eef9f01c8b67d70fa02.jpg) +Adaptive worst-case $\ell_{\infty}$ sharpness (normalized) for ResNets-18 ( $m = 128$ ) +Figure 31: Adaptive worst-case $\ell_{\infty}$ sharpness (normalized) for different $m$ in $m$ -sharpness vs. test error on CIFAR-10 for ResNets-18 for different radii $\rho$ . + +![](images/ed79ad46c10e3901d2d9e42a99077a5bdf4f30cf17c7e1114c2de057ad844539.jpg) +Adaptive worst-case $\ell_{\infty}$ sharpness (normalized) for ViTs ( $m = 16$ ) + +![](images/f8fa8dd817d5e1d9e7e2b1c2c155af484e49e1c3d701ba6add1423cd590a4c57.jpg) +Adaptive worst-case $\ell_{\infty}$ sharpness (normalized) for ViTs ( $m = 32$ ) + +![](images/d0b0a6abbf7d75038cc0fb06cab6d1e4679bae501c06dd46af770bff84ae645d.jpg) +Adaptive worst-case $\ell_{\infty}$ sharpness (normalized) for ViTs ( $m = 64$ ) + +![](images/d68b9d7e049a051eddd8a22f945cfa8264818a6693d6db4363abe2b1c5a9c745.jpg) +Adaptive worst-case $\ell_{\infty}$ sharpness (normalized) for ViTs ( $m = 128$ ) +Figure 32: Adaptive worst-case $\ell_{\infty}$ sharpness (normalized) for different $m$ in $m$ -sharpness vs. test error on CIFAR-10 for ViTs for different radii $\rho$ . + +# H.4.
The Role of Different Sharpness Definitions and Radii + +Here we present results for 12 different sharpness definitions: + +- standard average-case $\ell_2$ (Gaussian perturbations) sharpness without logit normalization, +- standard worst-case $\ell_2$ sharpness without logit normalization, +- adaptive average-case $\ell_2$ (Gaussian perturbations) sharpness without logit normalization, +- adaptive worst-case $\ell_2$ sharpness without logit normalization, +- adaptive average-case $\ell_2$ (Gaussian perturbations) sharpness with logit normalization, +- adaptive worst-case $\ell_2$ sharpness with logit normalization, +- standard average-case $\ell_{\infty}$ (uniform perturbations) sharpness without logit normalization, +- standard worst-case $\ell_{\infty}$ sharpness without logit normalization, +- adaptive average-case $\ell_{\infty}$ (uniform perturbations) sharpness without logit normalization, +- adaptive worst-case $\ell_{\infty}$ sharpness without logit normalization (shown in the main part for a single $\rho$ ), +- adaptive average-case $\ell_{\infty}$ (uniform perturbations) sharpness with logit normalization, +- adaptive worst-case $\ell_{\infty}$ sharpness with logit normalization (shown in the main part for a single $\rho$ ). + +We evaluate a wide range of radii for each sharpness definition to make sure that we do not miss the right scale of sharpness. We present results first for ResNets and then for ViTs. + +Observations for ResNets. For ResNets, we observe that many sharpness definitions can successfully capture correlation with standard generalization within each subgroup defined by the values of augment $\times$ mixup. In particular, on average, adaptive sharpness shows a better correlation with generalization within each subgroup, and the best correlation within each subgroup is achieved by $\ell_{\infty}$ adaptive worst-case sharpness with logit normalization for a small $\rho$ . 
In many cases, the correlation of sharpness with OOD generalization on CIFAR-10-C is noticeably lower than the correlation of sharpness with standard generalization. Overall, we see that there is no coherent global trend of correlation with generalization that would apply to all models at once. We also observe that for some sharpness definitions the flattest models generalize best (for adaptive worst-case $\ell_{2}$ sharpness with normalization for the smallest $\rho$ , and for adaptive worst-case $\ell_{\infty}$ sharpness without normalization for the largest $\rho$ ), but this appears to be unsystematic, and there exist nearly equally flat solutions that generalize much worse. + +Observations for ViTs. For ViTs, in contrast to ResNets, we do not observe a consistent correlation with generalization even within subgroups. The only exception is the subgroup of points with augmentations, where multiple definitions of sharpness tend to correlate with generalization and capture the effect of larger learning rates. This is likely because, with heavy augmentations, optimizing the training objective to smaller values is helpful for generalization, while without augmentations all runs have converged within 200 epochs and the learning rate plays no visible role for generalization there. Globally, when taken over all models, the correlation with standard generalization is close to 0 and tends to decrease slightly when we measure OOD generalization on CIFAR-10-C. Finally, we note that there are no cases where the flattest ViT models achieve the best generalization. Thus, even our weak hypothesis about the role of sharpness is not confirmed here.
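The standard vs. adaptive distinction among the definitions above amounts to the shape of the perturbation set: standard sharpness perturbs the weights within a fixed ball of radius $\rho$ , while adaptive sharpness scales the ball elementwise by $|w|$ . A minimal sketch for the average-case uniform $\ell_{\infty}$ variant (the sampling details are ours):

```python
# Sketch of the perturbation sets behind the standard vs. adaptive
# sharpness definitions (average-case, uniform ell_inf variant).
import numpy as np

def sample_perturbation(w, rho, adaptive, rng):
    """Uniform sample from the ell_inf ball used by the two variants."""
    delta = rng.uniform(-rho, rho, size=w.shape)
    # adaptive sharpness rescales each coordinate by |w_i|, so large
    # weights tolerate proportionally larger perturbations
    return delta * np.abs(w) if adaptive else delta

rng = np.random.default_rng(0)
w = np.array([10.0, 0.1])
d_std = sample_perturbation(w, 0.05, adaptive=False, rng=rng)
d_ada = sample_perturbation(w, 0.05, adaptive=True, rng=rng)
assert np.all(np.abs(d_std) <= 0.05)              # fixed ball
assert np.all(np.abs(d_ada) <= 0.05 * np.abs(w))  # scale-aware ball
```

This elementwise rescaling is what makes the adaptive definitions invariant to per-coordinate weight rescalings that leave the network function unchanged.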
+ +![](images/7f6686f182577df05e9425477d1b6da645698def7e98cad8912f160481b2ac1e.jpg) + +![](images/082b09985ad0112921faf562d4f6f494c792695e9e3669d8b8571b798b9cc8dd.jpg) +Standard average-case $\ell_2$ (Gaussian perturbations) sharpness (unnormalized) for ResNets-18 + +![](images/043f885cde5b468219f5768a334d14f6099cad27d586f15110a7db704a05e6d8.jpg) + +![](images/a8ee57d819eced287f93723b18b7f0a4a37008a609bdae755f2462cf5a224ca7.jpg) +Legend: $\log_2$ LR from $-3.0$ to $1.5$ ; Augment. False/True; Mixup False/True + +![](images/8a01cef94a424d969f7f49808775c5f9c0eeb8548ce9f2d8f2b9883812fd427f.jpg) + +![](images/b5f579bcc2cabac1e0b9d0219eec7fca9fef35461317c1a0bf3d063753b1b8c7.jpg) + +![](images/884c8049eca2e32691c20a8d3fcd5549be2b649a24f1403fa15436f80ed05824.jpg) + +![](images/0f1dd8e31dc142c1b24e74a9031de04db6a788bbd882a44a59d1606468a17829.jpg) + +![](images/1beb1a3cb303815a15a1b4e92fe6c4e142b63afd865d89ce35139d8ff95e8202.jpg) +Standard worst-case $\ell_2$ sharpness (unnormalized) for ResNets-18 + +![](images/dedee657320f3b34a45f65b505c7aee0154aad77bcabf26eef79fcb9df0b1f66.jpg) + +![](images/0a25ad22ea4a4eb99f69e60e009569c777d31c7605e0e726d747592fda86e979.jpg) + +![](images/8cf7afb5ea55ff7c5a5b6f03335f0fd3edb98c5243652a858372ab67feeb4630.jpg) + +![](images/6b493a9797120fd115d11d8b412e583558301a23ae453bd34fbff79c868d296a.jpg) +Figure 33: Average and worst-case $\ell_2$ standard sharpness definitions (unnormalized) vs. test error and OOD test error (common corruptions) on CIFAR-10 for ResNets-18 for different radii $\rho$ . + +![](images/699ebe62af70c7eff1cff59589e1e71232006c1580036dadeeeb604aed950e35.jpg) + +![](images/a8ac42bd51dadc1197f6d10ad2bb0bd612f5576fd92174ac702b6c475b8538a8.jpg) + +![](images/f16a2b1b7e88f0bbda594c63e20ac5b5b79b2ab92b5f897cd95760e953bbd371.jpg) + +![](images/94976bd10fbb0b14b2197891cc577c8dc9a2c5e9ec8e6fee7cdc63e5eb2bbf3e.jpg) +Adaptive average-case $\ell_2$ (Gaussian perturbations) sharpness (unnormalized) for ResNets-18 + +![](images/333257deeaffd3ff0b553b8737a5b2da11609a04d9271dc4f6fa74ce1bb23250.jpg) +Adaptive worst-case $\ell_2$ sharpness (unnormalized) for ResNets-18 +Figure 34: Average and worst-case $\ell_2$ adaptive sharpness definitions (unnormalized) vs. test error and OOD test error (common corruptions) on CIFAR-10 for ResNets-18 for different radii $\rho$ . + +![](images/a577b9b74c7ef4d6a8858b736855fc269488af2fff3c4b11467e24538809c4a4.jpg) +Adaptive average-case $\ell_2$ (Gaussian perturbations) sharpness (normalized) for ResNets-18 + +![](images/a89d4df36eeeb347e0af575e45c0b2c431a120f324228b06089001488766b09a.jpg) +Adaptive worst-case $\ell_2$ sharpness (normalized) for ResNets-18 +Figure 35: Average and worst-case $\ell_2$ adaptive sharpness definitions (normalized) vs. test error and OOD test error (common corruptions) on CIFAR-10 for ResNets-18 for different radii $\rho$ . + +![](images/37293958391dd1bf6b44867a645bb9fa279a7ab1e1a750493337b0bf15aaa8df.jpg) +Standard average-case $\ell_{\infty}$ (uniform perturbations) sharpness (unnormalized) for ResNets-18 + +![](images/46bded84bfe02240c216bd9ba9b61edbf643e4e0edab7107d94391da5868c6a4.jpg) + +![](images/a8ced36b7721863e9facbec53a17de04658a60687f65db1eaa885c453713a41c.jpg) + +![](images/caeb0e3bfb4f80bd675bee7c87ccc23c8f3fcf584d6b9ba4289998b19be3b78a.jpg) + +![](images/e2c355c73ad230651d0028b4b154c37bf711e197505ab371e56e412411166de3.jpg) +Standard worst-case $\ell_{\infty}$ sharpness (unnormalized) for ResNets-18 + +![](images/4ab0c5f9c567791252eb5dd427ec289c2d19dc1b32475ae5e665945cb80cdeb5.jpg) + +![](images/f14552003d3097938ae89b0e39cadb63b893ab5eab299fa29cef93b23defd126.jpg) + +![](images/7c06c4551223c1333a69461731e3f69d0bece703ee834757e350f3d65326fdd5.jpg) + +![](images/b8b3b0f7b17915e5e6ef4ffde8684cd52b6bde011aa91d93e68dca509fd5ce3e.jpg) + +![](images/adfb86c44cd2e21fa43dfb3c56c94bc415dcba66252740ca5c9aad63c16d1aa5.jpg) + +![](images/f0493d0e49086ce071e1729448966b3cdfada76278cebeb267a78a00c20dcf42.jpg) + +![](images/40affb96800e4808a4a26d667807dccdd93af3a467ac999cfe9e72f5a8854dcc.jpg) + +![](images/e18b1fb89a0b1512e0a5d2ebf4bd9b82b5ef178bd48fbdab38b37a7d90251bbc.jpg) +Figure 36: Average and worst-case $\ell_{\infty}$ standard sharpness definitions (unnormalized) vs. test error and OOD test error (common corruptions) on CIFAR-10 for ResNets-18 for different radii $\rho$ . + +![](images/0d818933c9a6300e7761012b3cf1c757aaa807a4c4e6e36db6ae58f163cb3949.jpg) + +![](images/5c7af61677f98402bf2888bd55f8f0c54eccd985a27a89a52eb249935033be81.jpg) + +![](images/59be0693cd8a52bccdd7cc5ffbf7d53940d55bd89848392dc6cec0b28ff62240.jpg) + +![](images/69c14b6081ef4d774ffbece532db7a645199e75eb0c2d8ceb6b6888853a250be.jpg) +Adaptive average-case $\ell_{\infty}$ (uniform perturbations) sharpness (unnormalized) for ResNets-18 + +![](images/b3e2a3a57f9a59ab47e8fe4bc83d59a2f6736c29fd7721a9fc3dcbae7c860263.jpg) + +![](images/b57b6a2e34bdbd560818c1786dedec7ff40f2c3dcb9f869bcf25af67961afda7.jpg) + +![](images/17988f8adee95dab8abd4316efa4db6919e1844abfce03db8df71db188799d5d.jpg) + +![](images/28084cfd9cf355b9c45e54762b562b3b44ab826e040ef0b6936882b8ba33c805.jpg) +Adaptive worst-case $\ell_{\infty}$ sharpness (unnormalized) for ResNets-18 + +![](images/5c80827fb85c507e23fb8c2d3012f5529f97ec276fe35da57b0c8a37212a8b58.jpg) + +![](images/bcfd698e3ec4cb7408b43f7474450a66ff678695d001e70f8184db5e06ddfed6.jpg) + +![](images/02584e3c7fd6c90f8e3cd8a9122a404e5621a8723c20f719c2dd1dcb04c217d7.jpg) + +![](images/93c41d27da7c25513d66b2e81df71affc61067087ff1042b62ea9cfc93575850.jpg) + +![](images/45e8e5972f57450f0b8ea3a575630e57410de45ee840d4002084ef3c43354475.jpg) + +![](images/de7a15bcdc3a0782a9842a2563422bb677f605f8251499648a382a82fd6e892a.jpg) + +![](images/4ef58792082cf9ccb2043f4fe48f0ddd8599141e022e67227af4bbaef52886b6.jpg) + +![](images/839fb660292181043372730d85c2e7848f080efa7b480954c75ff8e78f4a0403.jpg) +Figure 37: Average and worst-case $\ell_{\infty}$ adaptive sharpness definitions (unnormalized) vs. test error and OOD test error (common corruptions) on CIFAR-10 for ResNets-18 for different radii $\rho$ .
[Plots omitted: adaptive average-case $\ell_{\infty}$ (uniform perturbations) and adaptive worst-case $\ell_{\infty}$ sharpness (normalized) for ResNets-18.]

Figure 38: Average and worst-case $\ell_{\infty}$ adaptive sharpness definitions (normalized) vs. test error and OOD test error (common corruptions) on CIFAR-10 for ResNets-18 for different radii $\rho$ .
[Plots omitted: standard average-case $\ell_{2}$ (Gaussian perturbations) and standard worst-case $\ell_{2}$ sharpness (unnormalized) for ViTs; panels indexed by $\log_2$ LR, Augment. (False/True) and Mixup (False/True).]

Figure 39: Average and worst-case $\ell_2$ standard sharpness definitions (unnormalized) vs. test error and OOD test error (common corruptions) on CIFAR-10 for ViTs for different radii $\rho$ .
[Plots omitted: adaptive average-case $\ell_{2}$ (Gaussian perturbations) and adaptive worst-case $\ell_{2}$ sharpness (unnormalized) for ViTs.]

Figure 40: Average and worst-case $\ell_2$ adaptive sharpness definitions (unnormalized) vs. test error and OOD test error (common corruptions) on CIFAR-10 for ViTs for different radii $\rho$ .

[Plots omitted: adaptive average-case $\ell_{2}$ (Gaussian perturbations) and adaptive worst-case $\ell_{2}$ sharpness (normalized) for ViTs.]

Figure 41: Average and worst-case $\ell_2$ adaptive sharpness definitions (normalized) vs. test error and OOD test error (common corruptions) on CIFAR-10 for ViTs for different radii $\rho$ .
[Plots omitted: standard average-case $\ell_{\infty}$ (uniform perturbations) and standard worst-case $\ell_{\infty}$ sharpness (unnormalized) for ViTs.]

Figure 42: Average and worst-case $\ell_{\infty}$ standard sharpness definitions (unnormalized) vs. test error and OOD test error (common corruptions) on CIFAR-10 for ViTs for different radii $\rho$ .
[Plots omitted: adaptive average-case $\ell_{\infty}$ (uniform perturbations) and adaptive worst-case $\ell_{\infty}$ sharpness (unnormalized) for ViTs.]

Figure 43: Average and worst-case $\ell_{\infty}$ adaptive sharpness definitions (unnormalized) vs. test error and OOD test error (common corruptions) on CIFAR-10 for ViTs for different radii $\rho$ .
[Plots omitted: adaptive average-case $\ell_{\infty}$ (uniform perturbations) and adaptive worst-case $\ell_{\infty}$ sharpness (normalized) for ViTs.]

Figure 44: Average and worst-case $\ell_{\infty}$ adaptive sharpness definitions (normalized) vs. test error and OOD test error (common corruptions) on CIFAR-10 for ViTs for different radii $\rho$ .
\ No newline at end of file diff --git a/amodernlookattherelationshipbetweensharpnessandgeneralization/images.zip b/amodernlookattherelationshipbetweensharpnessandgeneralization/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..6e7af9f621f6461e63ed38281b05e53458a0103b --- /dev/null +++ b/amodernlookattherelationshipbetweensharpnessandgeneralization/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:4e6c51b8c13cefd29c04c240d4471997fe913ec518721de3fcebec8c6fa62fa4 +size 10825116 diff --git a/amodernlookattherelationshipbetweensharpnessandgeneralization/layout.json b/amodernlookattherelationshipbetweensharpnessandgeneralization/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..b2a123d8ca3b43c1fceca7034b611a9e3c957a7e --- /dev/null +++ b/amodernlookattherelationshipbetweensharpnessandgeneralization/layout.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:d14245d932c74690f76c8c6eae92d5f6665c559ab7d26ad4636d9984faa1a49f +size 2064579 diff --git a/hconsistencyboundsforpairwisemisrankinglosssurrogates/b67322de-812e-46a9-912a-90c3c1c54f4e_content_list.json
b/hconsistencyboundsforpairwisemisrankinglosssurrogates/b67322de-812e-46a9-912a-90c3c1c54f4e_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..159a473e4fc3c5bc48460529a3bba010a2146161 --- /dev/null +++ b/hconsistencyboundsforpairwisemisrankinglosssurrogates/b67322de-812e-46a9-912a-90c3c1c54f4e_content_list.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:ceb17ce17147059423fa21b145b9456b9078cf8d35d3b239910d8dc513080e81 +size 557489 diff --git a/hconsistencyboundsforpairwisemisrankinglosssurrogates/b67322de-812e-46a9-912a-90c3c1c54f4e_model.json b/hconsistencyboundsforpairwisemisrankinglosssurrogates/b67322de-812e-46a9-912a-90c3c1c54f4e_model.json new file mode 100644 index 0000000000000000000000000000000000000000..afcc19f7bb609ccd0b043cfe8b7d48d0df7be5b7 --- /dev/null +++ b/hconsistencyboundsforpairwisemisrankinglosssurrogates/b67322de-812e-46a9-912a-90c3c1c54f4e_model.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:b6077ff70b3dc214ffa4c8997f847dda35077cbe5ea9583aa951337a7ffbe65a +size 624445 diff --git a/hconsistencyboundsforpairwisemisrankinglosssurrogates/b67322de-812e-46a9-912a-90c3c1c54f4e_origin.pdf b/hconsistencyboundsforpairwisemisrankinglosssurrogates/b67322de-812e-46a9-912a-90c3c1c54f4e_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..498a5140985fc06fcebaff76b1b8cc1bac4ad4eb --- /dev/null +++ b/hconsistencyboundsforpairwisemisrankinglosssurrogates/b67322de-812e-46a9-912a-90c3c1c54f4e_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:5c3d59bbf075afa4e2b6b3ea490b25ef4c31ae021818e3abf0ba2ef6c8e56de4 +size 676963 diff --git a/hconsistencyboundsforpairwisemisrankinglosssurrogates/full.md b/hconsistencyboundsforpairwisemisrankinglosssurrogates/full.md new file mode 100644 index 0000000000000000000000000000000000000000..cc8532bcfbe1c9ce7e990e2888fd880a343cb47a --- /dev/null +++ 
b/hconsistencyboundsforpairwisemisrankinglosssurrogates/full.md @@ -0,0 +1,2607 @@ +# $\mathcal{H}$ -Consistency Bounds for Pairwise Misranking Loss Surrogates

Anqi Mao$^{1}$ Mehryar Mohri$^{2}$ Yutao Zhong$^{1}$

# Abstract

We present a detailed study of $\mathcal{H}$ -consistency bounds for score-based ranking. These are upper bounds on the target loss estimation error of a predictor in a hypothesis set $\mathcal{H}$ , expressed in terms of the surrogate loss estimation error of that predictor. We will show that both in the general pairwise ranking scenario and in the bipartite ranking scenario, there are no meaningful $\mathcal{H}$ -consistency bounds for most hypothesis sets used in practice, including the family of linear models and that of neural networks, which satisfy an equicontinuity property with respect to the input. To come up with ranking surrogate losses with theoretical guarantees, we show that a natural solution consists of resorting to a pairwise abstention loss in the general pairwise ranking scenario, and similarly, a bipartite abstention loss in the bipartite ranking scenario, to abstain from making predictions at some limited cost $c$ . For surrogate losses of these abstention loss functions, we give a series of $\mathcal{H}$ -consistency bounds for both the family of linear functions and that of neural networks with one hidden-layer. Our experimental results illustrate the effectiveness of ranking with abstention.

# 1. Introduction

In many applications, ranking is a more appropriate formulation of the learning task than classification, given the crucial significance of the ordering of the items. As an example, for movie recommendation systems, an ordered list of movies is preferable to a comprehensive list of recommended titles, since users are more likely to watch those ranked highest.

The problem of learning to rank has been studied in a large number of publications.
$^{1}$ Courant Institute of Mathematical Sciences, New York, NY; $^{2}$ Google Research, New York, NY. Correspondence to: Anqi Mao, Mehryar Mohri, Yutao Zhong.

Proceedings of the $40^{th}$ International Conference on Machine Learning, Honolulu, Hawaii, USA. PMLR 202, 2023. Copyright 2023 by the author(s).

Ailon & Mohri (2008; 2010) distinguish two general formulations of the problem: the score-based setting and the preference-based setting. In the score-based setting, a real-valued function over the input space is learned, whose values determine a total ordering of all input points. In the preference-based setting, a pairwise preference function is first learned, typically by training a classifier over a sample of labeled pairs; next, that function is used to derive an ordering, potentially randomized, of any subset of points.

This paper deals with the score-based ranking formulation both in the general ranking setting, where items are not assigned any specific category, and the bipartite setting, where they are labeled with one of two classes. The evaluation of a ranking solution in this context is based on the average pairwise misranking metric. In the bipartite setting, this metric is directly related to the AUC (Area Under the ROC Curve), which coincides with the average correct pairwise ranking (Hanley & McNeil, 1982; Cortes & Mohri, 2003), also known as the Wilcoxon-Mann-Whitney statistic.

For most hypothesis sets, directly optimizing the pairwise misranking loss is intractable. Instead, ranking algorithms resort to a surrogate loss. As an example, the surrogate loss for RankBoost (Freund et al., 2003; Rudin et al., 2005) is based on the exponential function and that of SVM ranking (Joachims, 2002) on the hinge loss. But, what guarantees can we rely on when minimizing a surrogate loss instead of the original pairwise misranking loss?
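The bipartite metric just mentioned is easy to check numerically: on a finite sample, the AUC equals the average correct pairwise ranking over positive-negative pairs (ties counted as 1/2), i.e. the Wilcoxon-Mann-Whitney statistic. A minimal sketch with made-up scores (an illustration of that equivalence, not code from the paper):

```python
from itertools import product

def auc_pairwise(pos_scores, neg_scores):
    """AUC as the average correct pairwise ranking over positive-negative
    pairs, with ties counted as 1/2 (the Wilcoxon-Mann-Whitney statistic)."""
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p, n in product(pos_scores, neg_scores))
    return wins / (len(pos_scores) * len(neg_scores))

def auc_rank_sum(pos_scores, neg_scores):
    """The same statistic computed from average ranks (Mann-Whitney U)."""
    scores = sorted(pos_scores + neg_scores)

    def avg_rank(v):
        # average 1-based rank of value v in the pooled sample, handling ties
        lo = scores.index(v) + 1
        hi = lo + scores.count(v) - 1
        return (lo + hi) / 2.0

    rank_sum = sum(avg_rank(p) for p in pos_scores)
    n_pos, n_neg = len(pos_scores), len(neg_scores)
    u = rank_sum - n_pos * (n_pos + 1) / 2.0
    return u / (n_pos * n_neg)

pos = [0.9, 0.8, 0.8, 0.35]  # hypothetical scores of positive items
neg = [0.7, 0.8, 0.5, 0.3]   # hypothetical scores of negative items
assert abs(auc_pairwise(pos, neg) - auc_rank_sum(pos, neg)) < 1e-12
print(auc_pairwise(pos, neg))  # → 0.75
```

The pairwise form is the definition; the rank-sum form is how the statistic is usually computed in practice.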
The property often invoked in this context is Bayes consistency, which has been extensively studied for classification (Zhang, 2004; Bartlett et al., 2006; Tewari & Bartlett, 2007). The Bayes consistency of ranking surrogate losses has been studied in the special case of bipartite ranking: in particular, Uematsu & Lee (2017) proved the inconsistency of the pairwise ranking loss based on the hinge loss and Gao & Zhou (2015) gave excess loss bounds for pairwise ranking losses based on the exponential or the logistic loss (see also (Menon & Williamson, 2014)). A related but distinct consistency question has been studied in several publications (Agarwal et al., 2005; Kotlowski et al., 2011; Agarwal, 2014): whether a near minimizer of a surrogate of the binary classification loss is also a near minimizer of the bipartite misranking loss (Cortes & Mohri, 2003).

However, as recently argued by Awasthi, Mao, Mohri, and Zhong (2022a), Bayes consistency is not a sufficiently informative notion since it only applies to the entire class of measurable functions and does not hold for specific subsets, such as sub-families of linear functions or neural networks. Furthermore, Bayes consistency is solely an asymptotic concept and does not offer insights into the performance of predictors trained on finite samples. In response, the authors proposed an alternative concept called $\mathcal{H}$ -consistency bounds, which provide non-asymptotic guarantees tailored to a given hypothesis set $\mathcal{H}$ . They proceeded to establish such bounds within the context of classification both in binary and multi-class classification (Awasthi et al., 2022a;b), see also (Mao et al., 2023). These are stronger and more informative guarantees than Bayes consistency.

But, can we derive $\mathcal{H}$ -consistency guarantees for ranking?
In other words, can we find surrogate losses for pairwise misranking whose approximate estimation error minimization guarantees approximate minimization of the pairwise or bipartite misranking loss? We will show that, surprisingly, this is not possible for most hypothesis sets used in practice, including the family of constrained linear models or that of the constrained neural networks, or any family of equicontinuous functions with respect to the input. In fact, we will give a relatively simple example where the pairwise misranking error of the RankBoost algorithm remains significant, even after training with relatively large sample sizes. How can we then come up with ranking surrogate losses with theoretical guarantees? + +We will show that a natural solution consists of resorting to a pairwise abstention loss in the general pairwise ranking scenario and, similarly, a bipartite abstention loss in the bipartite ranking scenario, to abstain from making predictions at some limited cost $c$ . For surrogate losses of these abstention loss functions, we give a series of $\mathcal{H}$ -consistency bounds for both the family of linear functions and that of neural networks with one hidden-layer. A key term appearing in these bounds is the minimizability gap, which measures the difference between the best-in-class expected loss and the expected infimum of the pointwise expected loss. This plays a crucial role in these bounds and we give a detailed analysis of these terms. We also present the results of experiments illustrating the effectiveness of ranking with abstention. + +Comparison with previous work. Here, we briefly discuss the relationship of prior work on $\mathcal{H}$ -consistency bounds (Awasthi et al., 2022a;b; Mao et al., 2023; Zheng et al., 2023) with ours. Awasthi et al. (2022a) studied $\mathcal{H}$ -consistency bounds in the binary classification setting. 
They provided a series of positive results for common binary surrogate losses with the hypothesis sets of linear models and one-hidden-layer ReLU networks. Awasthi et al. (2022b); Mao et al. (2023); Zheng et al. (2023) studied $\mathcal{H}$ -consistency bounds in the context of multi-class classification. Awasthi et al. (2022b) provided an extensive analysis of $\mathcal{H}$ -consistency bounds for multi-class max losses such as those of Crammer & Singer (2001), sum losses such as that of Weston & Watkins (1998), and constrained losses, such as the loss function adopted by Lee et al. (2004) in the analysis of multi-class SVM. They further gave the analysis of $\mathcal{H}$ -consistency bounds for all these multi-class losses in the adversarial setting. More recently, Mao et al. (2023) presented a theoretical analysis of a broad family of loss functions, comp-sum losses, that includes cross-entropy (or logistic loss) (Verhulst, 1838; 1845; Berkson, 1944; 1951), generalized cross-entropy (Zhang & Sabuncu, 2018), the mean absolute error (Ghosh et al., 2017) and other cross-entropy-like loss functions. They gave tight $\mathcal{H}$ -consistency bounds for these loss functions with any complete hypothesis set. They also introduced new smooth adversarial comp-sum losses in the adversarial setting and proved $\mathcal{H}$ -consistency bound guarantees for these loss functions. Zheng et al. (2023) also provided $\mathcal{H}$ -consistency bounds for the logistic loss with the hypothesis set of linear models, under some distributional assumptions. They used these bounds to compare multi-class logistic regression and naive Bayes methods.

Our paper primarily concentrates on score-based ranking with a binary label space, setting it apart from the multi-class scenario (Awasthi et al., 2022b; Mao et al., 2023; Zheng et al., 2023).
The primary technical differences and challenges between the ranking and binary classification settings (Awasthi et al., 2022a) stem from the fundamental distinction that ranking loss functions take as an argument a pair of samples rather than a single one, as is the case for binary classification loss functions. This makes it more challenging to derive $\mathcal{H}$ -consistency bounds, as upper bounding the calibration gap of the target loss by that of the surrogate loss becomes technically more difficult.

Additionally, this fundamental difference leads to a negative result for ranking, as $\mathcal{H}$ -consistency bounds cannot be guaranteed for most commonly used hypothesis sets, including the family of constrained linear models and that of constrained neural networks, both of which satisfy the equicontinuity property concerning the input. As a result, a natural alternative involves using ranking with abstention, for which $\mathcal{H}$ -consistency bounds can be proven. In the abstention setting, an extra challenge lies in carefully monitoring the effect of a threshold $\gamma$ to relate the calibration gap of the target loss to that of the surrogate loss.

Furthermore, the bipartite ranking setting introduces an added layer of complexity, as each element of a pair of samples has an independent conditional distribution, which results in a more intricate calibration gap.

Structure of the paper. The remaining sections of this paper are organized as follows. In Section 2, we study general pairwise ranking. We first prove several negative results in Section 2.2 showing that there exists no meaningful $\mathcal{H}$ -consistency bound for general surrogate loss functions with an equicontinuous hypothesis set $\mathcal{H}$ . We then present a series of positive results by considering a family of piecewise continuous functions (Section 2.3), which we will simply refer to as piecewise functions, and the family of all measurable functions (Section 2.4).
In Section 3, we study general pairwise ranking with abstention. We provide a series of explicit $\mathcal{H}$ -consistency bounds in the case of the pairwise abstention loss, with multiple choices of the surrogate loss and for both the family of linear functions (Section 3.1) and that of neural networks with one hidden-layer (Section 3.2). We also study bipartite ranking in Section 4. Here too, we provide both negative results with general hypothesis sets (Section 4.2) and positive results with the family of all measurable functions (Section 4.3). We then present $\mathcal{H}$ -consistency bounds for bipartite ranking with abstention in Section 5, for linear hypothesis sets (Section 5.1) and the family of neural networks with one hidden-layer (Section 5.2). In Section 6, we report the results of experiments illustrating the effectiveness of ranking with abstention. + +We give a detailed discussion of related work in Appendix A. + +# 2. General Pairwise Ranking + +In this section, we analyze the properties of surrogate losses for the general pairwise misranking loss. We begin by introducing the necessary definitions and concepts. Next, we present negative results demonstrating the impossibility of deriving non-trivial $\mathcal{H}$ -consistency bounds for widely used surrogate losses and hypothesis sets. In contrast, we give positive results for a family of piecewise functions and that of all measurable functions. + +# 2.1. Preliminaries + +We study the learning scenario of score-based ranking in the general pairwise ranking scenario (e.g. see (Mohri et al., 2018)). Let $\mathcal{X}$ denote the input space and $\mathcal{Y} = \{-1, +1\}$ the label space. We denote by $\mathcal{H}$ a hypothesis set of functions mapping from $\mathcal{X}$ to $\mathbb{R}$ . 
The general pairwise misranking loss $\mathsf{L}_{0-1}$ is defined for all $h$ in $\mathcal{H}$ , $x, x'$ in $\mathcal{X}$ and $y$ in $\mathcal{Y}$ by

$$
\mathsf {L} _ {0 - 1} \left(h, x, x ^ {\prime}, y\right) = \mathbb {1} _ {y \neq \operatorname {s i g n} \left(h \left(x ^ {\prime}\right) - h (x)\right)}, \tag {1}
$$

where $\mathrm{sign}(u) = \mathbb{1}_{u\geq 0} - \mathbb{1}_{u < 0}$ . Thus, $h$ incurs a loss of one on the labeled pair $(x,x^{\prime},y)$ when it ranks the pair $(x,x^{\prime})$ opposite to the sign of $y$ , where, by convention, $x^{\prime}$ is considered as ranked above $x$ when $h(x^{\prime})\geq h(x)$ . Otherwise, the loss incurred is zero. Optimizing the pairwise misranking loss $\mathsf{L}_{0 - 1}$ is intractable for most hypothesis sets. Thus, general ranking algorithms rely on a surrogate loss function $\mathsf{L}$ instead of $\mathsf{L}_{0 - 1}$ . We will analyze the properties of such surrogate loss functions.

Let $\mathcal{D}$ denote a distribution over $\mathcal{X} \times \mathcal{X} \times \mathcal{Y}$ . We denote by $\eta(x, x') = \mathcal{D}(Y = +1 \mid (X, X') = (x, x'))$ the conditional probability of $Y = +1$ given $(X, X') = (x, x')$ . We also denote by $\mathcal{R}_{\mathsf{L}}(h)$ the expected $\mathsf{L}$-loss of a hypothesis $h$ and by $\mathcal{R}_{\mathsf{L}}^*(\mathcal{H})$ its infimum over $\mathcal{H}$ :

$$
\mathcal {R} _ {\mathrm {L}} (h) = \underset {(x, x ^ {\prime}, y) \sim \mathcal {D}} {\mathbb {E}} [ \mathrm {L} (h, x, x ^ {\prime}, y) ] \quad \mathcal {R} _ {\mathrm {L}} ^ {*} (\mathcal {H}) = \inf _ {h \in \mathcal {H}} \mathcal {R} _ {\mathrm {L}} (h)
$$

$\mathcal{H}$ -consistency bounds. We will analyze the $\mathcal{H}$ -consistency bounds properties (Awasthi et al., 2022a) of such surrogate loss functions.
An $\mathcal{H}$ -consistency bound for a surrogate loss $L$ is a guarantee of the form: + +$$ +\forall h \in \mathcal {H}, \quad \mathcal {R} _ {\mathsf {L} _ {0 - 1}} (h) - \mathcal {R} _ {\mathsf {L} _ {0 - 1}} ^ {*} (\mathcal {H}) \leq f \big (\mathcal {R} _ {\mathsf {L}} (h) - \mathcal {R} _ {\mathsf {L}} ^ {*} (\mathcal {H}) \big), +$$ + +for some non-decreasing function $f\colon \mathbb{R}_{+}\to \mathbb{R}_{+}$ . This provides a quantitative relationship between the estimation loss of $L_{0 - 1}$ and that of the surrogate loss $L$ . The guarantee is stronger and more informative than Bayes consistency, or $\mathcal{H}$ -consistency, $\mathcal{H}$ -calibration or the excess error bounds (Zhang, 2004; Bartlett et al., 2006; Steinwart, 2007; Mohri et al., 2018) discussed in the literature. + +A key quantity appearing in $\mathcal{H}$ -consistency bounds is the minimizability gap, which is the difference between the best-in-class expected loss and the expected pointwise infimum of the loss: + +$$ +\mathcal{M}_{\mathrm{L}}(\mathcal{H}) = \mathcal{R}_{\mathrm{L}}^{*}(\mathcal{H}) - \underset {(x,x^{\prime})}{\mathbb{E}}\left[ \inf_{h\in \mathcal{H}}\underset {y}{\mathbb{E}}[\mathsf{L}(h,x,x^{\prime},y)\mid (x,x^{\prime})]\right]. +$$ + +By the super-additivity of the infimum, the minimizability gap is always non-negative. + +We will specifically study the hypothesis set of all measurable functions, $\mathcal{H}_{\mathrm{all}}$ ; that of linear hypotheses, $\mathcal{H}_{\mathrm{lin}} = \{x \mapsto w \cdot x + b \mid \|w\|_q \leq W, |b| \leq B\}$ ; and the hypothesis set of one-hidden-layer ReLU networks: $\mathcal{H}_{\mathrm{NN}} = \{x \mapsto \sum_{j=1}^n u_j(w_j \cdot x + b_j)_+ \mid \|u\|_1 \leq \Lambda, \|w_j\|_q \leq W, |b_j| \leq B\}$ , where $(\cdot)_+ = \max(\cdot, 0)$ . A table of notation (Table 7) is presented in Appendix B. 
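For intuition on the inner infimum in the minimizability gap: taking $\mathcal{H} = \mathcal{H}_{\mathrm{all}}$ and the exponential surrogate $\Phi(u) = e^{-u}$, the conditional $\mathsf{L}_{\Phi}$-risk of a margin $t = h(x') - h(x)$ is $\eta e^{-t} + (1-\eta)e^{t}$, minimized over $t \in \mathbb{R}$ at $t^{*} = \frac{1}{2}\log\frac{\eta}{1-\eta}$ with value $2\sqrt{\eta(1-\eta)}$, the closed form that reappears in Section 2.4. A small numeric sketch of this computation, with made-up conditional probabilities (an illustration, not code from the paper):

```python
import math

def cond_exp_risk(t, eta):
    """Conditional exponential-surrogate risk of margin t = h(x') - h(x):
    eta * exp(-t) + (1 - eta) * exp(t)."""
    return eta * math.exp(-t) + (1 - eta) * math.exp(t)

def pointwise_inf(eta):
    """Closed-form infimum over t in R, attained at
    t* = 0.5 * log(eta / (1 - eta)), with value 2 * sqrt(eta * (1 - eta))."""
    return 2 * math.sqrt(eta * (1 - eta))

# toy conditional probabilities eta(x, x') for three equally likely pairs
etas = [0.9, 0.6, 0.5]
closed_form = sum(pointwise_inf(e) for e in etas) / len(etas)

# numeric check: minimize the conditional risk on a fine grid of margins
grid = [i / 1000.0 for i in range(-5000, 5001)]
numeric = sum(min(cond_exp_risk(t, e) for t in grid) for e in etas) / len(etas)
assert abs(numeric - closed_form) < 1e-4
```

The expectation of this pointwise infimum over pairs $(x, x')$ is the term subtracted from $\mathcal{R}^{*}_{\mathsf{L}}(\mathcal{H})$ in the minimizability gap.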
We will say that a hypothesis set is regular for general pairwise ranking if, for any $x \neq x' \in \mathcal{X}$ , we have $\{\text{sign}(h(x') - h(x)): h \in \mathcal{H}\} = \{-1, +1\}$ . Hypothesis sets commonly used in practice all admit this property.

# 2.2. Negative Results

Here, we give a negative result for a broad family of surrogate losses and hypothesis sets.

The general pairwise ranking surrogate losses widely used in practice admit the following form:

$$
\mathsf {L} _ {\Phi} (h, x, x ^ {\prime}, y) = \Phi \left(y \left(h \left(x ^ {\prime}\right) - h (x)\right)\right), \tag {2}
$$

where $\Phi$ is a non-increasing function that is continuous at 0 and upper bounds $u\mapsto \mathbb{1}_{u\leq 0}$ over $\mathbb{R}$ . The following result shows that these surrogate losses do not benefit from a non-trivial $\mathcal{H}$ -consistency bound when the hypothesis set used is equicontinuous, which includes most hypothesis sets used in practice, in particular the family of linear hypotheses and that of neural networks.

Theorem 2.1 (Negative results). Assume that $\mathcal{X}$ contains an interior point $x_0$ and that $\mathcal{H}$ is regular for general pairwise ranking, contains 0 and is equicontinuous at $x_0$ . If for some function $f$ that is non-decreasing and continuous at 0, the following bound holds for all $h \in \mathcal{H}$ and any distribution,

$$
\mathcal {R} _ {\mathsf {L} _ {0 - 1}} (h) - \mathcal {R} _ {\mathsf {L} _ {0 - 1}} ^ {*} (\mathcal {H}) \leq f \big (\mathcal {R} _ {\mathsf {L} _ {\Phi}} (h) - \mathcal {R} _ {\mathsf {L} _ {\Phi}} ^ {*} (\mathcal {H}) \big),
$$

then, $f(t) \geq 1$ for any $t \geq 0$ .

Theorem 2.1 shows that for equicontinuous hypothesis sets, any $\mathcal{H}$ -consistency bound is vacuous, assuming that $f$ is a non-decreasing function continuous at zero.
This is because for any such bound, a small $\mathsf{L}_{\Phi}$ -estimation loss does not guarantee a small $\mathsf{L}_{0-1}$ -estimation loss, as the right-hand side remains lower-bounded by one. + +The proof is given in Appendix D, where we give a simple example on pairs whose distance is relatively small for which the standard surrogate losses including the RankBoost algorithm $(\mathsf{L}_{\mathrm{exp}})$ fail (see also Section 6). It is straightforward to see that the assumptions of Theorem 2.1 hold for the case $\mathcal{H} = \mathcal{H}_{\mathrm{lin}}$ or $\mathcal{H} = \mathcal{H}_{\mathrm{NN}}$ . Indeed, we can take $x_0 = 0$ as the interior point and thus for any $h \in \mathcal{H}_{\mathrm{lin}}$ , $|h(x) - h(x_0)| = |w \cdot x| < \epsilon$ for any $x \in \{x \in \mathcal{X} : \|x\|_p < \frac{\epsilon}{W}\}$ , which implies that $\mathcal{H}_{\mathrm{lin}}$ is equicontinuous at $x_0$ . As with the linear hypothesis set, for any $h \in \mathcal{H}_{\mathrm{NN}}$ , $|h(x) - h(x_0)| = \left|\sum_{j=1}^{n} u_j(w_j \cdot x + b_j)_+ - \sum_{j=1}^{n} u_j(b_j)_+\right| = \left|\sum_{j=1}^{n} u_j[(w_j \cdot x + b_j)_+ - (b_j)_+]\right| \leq \Lambda W \|x\|_p < \epsilon$ , for any $x \in \{x \in \mathcal{X} : \|x\|_p < \frac{\epsilon}{\Lambda W}\}$ , which implies that $\mathcal{H}_{\mathrm{NN}}$ is equicontinuous at $x_0$ . In fact, Theorem 2.1 holds for any family of Lipschitz constrained neural networks, since a family of functions that share the same Lipschitz constant is equicontinuous. + +Is straightforward to verify that the proof of Theorem 2.1 also holds in the deterministic case where $\eta(x, x')$ equals 0 or 1 for any $x \neq x'$ , which yields the following corollary. + +Corollary 2.2 (Negative results in the deterministic case). In the deterministic case where $\eta(x, x')$ equals 0 or 1 for any $x \neq x'$ , the negative result of Theorem 2.1 still holds. + +# 2.3. 
Positive Results: A Family of Piecewise Functions + +In this section, we seek alternative positive results. In light of the negative results just presented, we need to consider hypothesis sets that are not equicontinuous. A natural choice + +is a family of piecewise functions. For any fixed parameter $\tau > 0$ , we consider the family of piecewise functions $\mathcal{H}_{\mathrm{pw}} = \left\{u \mapsto \alpha \left(\mathbb{1}_{\|u - x\| > \tau} - \mathbb{1}_{0 < \|u - x\| \leq \tau}\right) \mid x \in \mathcal{X}, \alpha \in \mathbb{R}\right\}$ . Then, we have the following positive result in the deterministic setting. + +Theorem 2.3 (Positive results for piecewise functions). Assume that $\Phi$ satisfies $\lim_{u\to +\infty}\Phi (u) = 0$ . Then, for all $h\in \mathcal{H}_{\mathrm{pw}}$ and any deterministic distribution,

$$
\begin{array}{l} \mathcal {R} _ {\mathrm {L} _ {0 - 1}} (h) - \mathcal {R} _ {\mathrm {L} _ {0 - 1}} ^ {*} (\mathcal {H} _ {\mathrm {p w}}) \\ \leq \mathcal {R} _ {\mathrm {L} _ {\Phi}} (h) - \mathcal {R} _ {\mathrm {L} _ {\Phi}} ^ {*} (\mathcal {H} _ {\mathrm {p w}}) + \mathcal {M} _ {\mathrm {L} _ {\Phi}} (\mathcal {H} _ {\mathrm {p w}}) - \mathcal {M} _ {\mathrm {L} _ {0 - 1}} (\mathcal {H} _ {\mathrm {p w}}). \\ \end{array}
$$

The proof is included in Appendix E. Theorem 2.3 provides a meaningful $\mathcal{H}$ -consistency bound for the hypothesis set of piecewise functions: modulo the minimizability gaps, which are zero when the best-in-class error coincides with the Bayes error or can be small in some other cases, reducing the surrogate estimation loss appearing on the right-hand side guarantees a small target estimation loss (left-hand side). + +One example of the corresponding hypotheses in $\mathcal{H}_{\mathrm{pw}}$ is $u \mapsto \mathbb{1}_{\| u \| > \tau} - \mathbb{1}_{0 < \| u \| \leq \tau}$ , where $\alpha = 1$ and $x = 0$ . This hypothesis is piecewise constant, taking distinct values on two ranges of the magnitude of $u$ .
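A minimal sketch of one hypothesis from this family, reading the indicators as conditions on the distance $\|u - x\|$ (consistent with the $x = 0$ example above; the helper name `make_piecewise` is ours, not from the paper):

```python
import numpy as np

def make_piecewise(alpha, x, tau):
    """h(u) = alpha * (1[||u - x|| > tau] - 1[0 < ||u - x|| <= tau])."""
    x = np.asarray(x, dtype=float)
    def h(u):
        d = np.linalg.norm(np.asarray(u, dtype=float) - x)
        if d > tau:
            return alpha
        if d > 0:
            return -alpha
        return 0.0  # u == x: both indicators are zero
    return h

h = make_piecewise(alpha=1.0, x=[0.0, 0.0], tau=0.5)
print(h([1.0, 0.0]), h([0.1, 0.0]))  # -> 1.0 -1.0
```

The jump at the boundary $\|u - x\| = \tau$ is exactly what makes this family non-equicontinuous, so the negative result of Theorem 2.1 does not apply to it.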
Note that such a hypothesis set is not typical, and in particular it is not equicontinuous, which, in view of our negative results, is a prerequisite for non-vacuous guarantees. Nevertheless, it provides an example of a hypothesis set supported by $\mathcal{H}$ -consistency bounds in the general ranking setting. We will further show in Section 3 that, for a standard hypothesis set such as an equicontinuous one, we need to resort to ranking with abstention. This approach will be the primary focus of the positive results presented in our paper, as it allows us to leverage hypothesis sets that are more commonly used in real-world applications. + +# 2.4. Positive Results: $\mathcal{H}_{\mathrm{all}}$ -Consistency Bounds + +In this section, we also present a series of positive results in the case of the family of all measurable functions, $\mathcal{H}_{\mathrm{all}}$ . + +We prove $\mathcal{H}_{\mathrm{all}}$ -consistency bounds for the surrogate loss $L_{\Phi}$ when using as auxiliary function $\Phi$ the hinge-loss, the $\rho$ -margin loss, the exponential loss, the logistic loss, the squared hinge-loss, or the sigmoid loss defined in Table 1. Table 8 (Appendix F) gives the full expression of our $\mathcal{H}_{\mathrm{all}}$ -consistency upper bounds, with detailed proofs given in Appendix F. Table 1 gives the expression of the corresponding minimizability gaps. As an example, we have

$$
\begin{array}{l} \mathcal {M} _ {\mathrm {L} _ {\Phi_ {\exp}}} (\mathcal {H} _ {\mathrm {a l l}}) \\ = \mathcal {R} _ {\mathsf {L} _ {\Phi_ {\exp}}} ^ {*} \left(\mathcal {H} _ {\mathrm {a l l}}\right) - \mathbb {E} _ {(x, x ^ {\prime})} \Big [ 2 \sqrt {\eta (x , x ^ {\prime}) (1 - \eta (x , x ^ {\prime}))} \Big ]. \\ \end{array}
$$

In Appendix G, we show that these minimizability gaps are in general not null for $\mathcal{H} = \mathcal{H}_{\mathrm{all}}$ in general pairwise ranking, + +Table 1: Auxiliary functions and the minimizability gaps of their pairwise ranking and bipartite ranking surrogates.
+ +
| Auxiliary function | Definition | $\mathcal{M}_{\mathsf{L}_{\Phi}}(\mathcal{H}_{\mathrm{all}})$ | $\mathcal{M}_{\widetilde{\mathsf{L}}_{\Phi}}(\mathcal{H}_{\mathrm{all}})$ |
| --- | --- | --- | --- |
| Hinge | $\Phi_{\mathrm{hinge}}(t) = \max\{0, 1 - t\}$ | (12) | (25) |
| $\rho$-Margin | $\Phi_{\rho}(t) = \min\{1, \max\{0, 1 - t/\rho\}\}$, $\rho > 0$ | (14) | (27) |
| Exponential | $\Phi_{\exp}(t) = e^{-t}$ | (16) | (29) |
| Logistic | $\Phi_{\log}(t) = \log_2(1 + e^{-t})$ | (18) | (31) |
| Squared hinge | $\Phi_{\mathrm{sq}}(t) = (1 - t)^2 \mathbb{1}_{t \leq 1}$ | (20) | (33) |
| Sigmoid | $\Phi_{\mathrm{sig}}(t) = 1 - \tanh(kt)$, $k > 0$ | (22) | (35) |
+ +

in contrast with binary classification, where the minimizability gaps for $\mathcal{H}_{\mathrm{all}}$ are zero. This is because the distribution order for general pairwise ranking cannot always be induced by a real-valued function. We first introduce the definition of that order. Next, we characterize the distribution orders that can be induced by a predictor $h$ , which leads to a null minimizability gap for the pairwise misranking loss. + +Definition 2.4. The distribution order is a homogeneous relation $\stackrel{\mathcal{D}}{\leq}$ over $\mathcal{X}$ , defined as follows for all $x, x' \in \mathcal{X}$ ,

$$
x \stackrel {{\mathcal {D}}} {{\leq}} x ^ {\prime} \iff \eta (x, x ^ {\prime}) \geq \eta (x ^ {\prime}, x).
$$

We say that a hypothesis $h$ induces the distribution order if, for all $x, x' \in \mathcal{X}$ , $(h(x) \leq h(x'))$ holds iff $(x \stackrel{\mathcal{D}}{\leq} x')$ . We say that a subset $\bar{\mathcal{X}} \subset \mathcal{X}$ is dense countable in $\mathcal{X}$ with respect to the distribution order if $\bar{\mathcal{X}}$ is countable and, for all $x, x' \in \mathcal{X}$ satisfying $x \stackrel{\mathcal{D}}{\leq} x'$ and not $x' \stackrel{\mathcal{D}}{\leq} x$ , there exists $\bar{x} \in \bar{\mathcal{X}}$ such that $x \stackrel{\mathcal{D}}{\leq} \bar{x} \stackrel{\mathcal{D}}{\leq} x'$ . + +The following result characterizes when the distribution order is induced by a hypothesis $h$ . + +Theorem 2.5 (Characterization of distribution order). The distribution order is transitive and there exists a dense countable subset $\bar{\mathcal{X}}\subset \mathcal{X}$ with respect to the distribution order if and only if there exists $h\in \mathcal{H}_{\mathrm{all}}$ inducing the distribution order. + +A special case of Theorem 2.5 is when the distribution order is a total order and $\eta(x, x')$ is continuous. + +Theorem 2.6. Assume that the distribution order is a total order and $\eta(x, x')$ is continuous on $\mathcal{X} \times \mathcal{X}$ .
Then, there exists $h \in \mathcal{H}_{\mathrm{all}}$ inducing the distribution order. + +Theorem 2.5 and Theorem 2.6 characterize cases where the distribution order can be induced by a hypothesis. These results actually characterize the case where the minimizability gap of the pairwise misranking loss is null, since, as we show next, when the distribution order is induced by a hypothesis $h \in \mathcal{H}$ , the minimizability gap of the pairwise misranking loss is zero. + +Theorem 2.7. Assume that for all $x, x' \in \mathcal{X}$ , $\eta(x, x') + \eta(x', x) = 1$ . Then, for any hypothesis set $\mathcal{H}$ , if there exists + +$h\in \mathcal{H}$ inducing the distribution order, the minimizability gap of the pairwise misranking loss is null, $\mathcal{M}_{\mathrm{L}_{0 - 1}}(\mathcal{H}) = 0$ . + +The proofs of Theorems 2.5, 2.6 and 2.7 are presented in Appendix H. These results provide a detailed analysis of the distribution order and the minimizability gaps appearing in the $\mathcal{H}_{\mathrm{all}}$ -consistency bounds in general pairwise misranking. However, learning algorithms are not based on $\mathcal{H}_{\mathrm{all}}$ . Instead, they rely on a restricted hypothesis set such as $\mathcal{H}_{\mathrm{lin}}$ or $\mathcal{H}_{\mathrm{NN}}$ . To come up with ranking surrogate losses with theoretical guarantees for such hypothesis sets, a natural solution consists of resorting to a pairwise abstention loss. + +# 3. General Pairwise Ranking with Abstention + +The negative results of the previous section suggest that general pairwise ranking with theoretical guarantees is difficult with common hypothesis sets. The inherent issue for pairwise ranking is that, for equicontinuous hypotheses, when $x$ and $x'$ are arbitrarily close, the confidence value $|h(x) - h(x')|$ can be arbitrarily close to zero. This motivates us to study the learning scenario of general pairwise ranking with abstention.
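The vanishing-confidence phenomenon is easy to see numerically. The small sketch below (ours, with illustrative parameter choices, not from the paper) checks the Hölder bound $|h(x) - h(x')| \leq W \|x - x'\|_p$ for a linear hypothesis with $p = q = 2$:

```python
import numpy as np

# For a linear hypothesis h(x) = w . x with ||w||_q <= W, Hölder's
# inequality gives |h(x) - h(x')| <= W * ||x - x'||_p, so the confidence
# margin vanishes as the pair (x, x') gets closer (here p = q = 2).
rng = np.random.default_rng(0)
W = 1.0
w = rng.normal(size=5)
w *= W / np.linalg.norm(w)          # normalize so that ||w||_2 = W

x = rng.normal(size=5)
for eps in [1.0, 0.1, 0.01]:
    x_prime = x + eps * rng.normal(size=5)
    margin = abs(w @ x_prime - w @ x)
    assert margin <= W * np.linalg.norm(x_prime - x) + 1e-12
```

Abstaining whenever $\|x - x'\| \leq \gamma$ removes exactly these low-confidence pairs.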
+ +In this scenario, the learner abstains from making a prediction on input pair $(x,x^{\prime})$ if the distance between $x^{\prime}$ and $x$ is relatively small, in which case a cost $c$ is incurred. Let $\| \cdot \|$ denote the norm adopted, which is typically an $\ell_p$ -norm, $p\in [1, + \infty ]$ . The pairwise abstention loss is defined as follows for any $h\in \mathcal{H}$ and $(x,x^{\prime},y)\in \mathcal{X}\times \mathcal{X}\times \mathcal{Y}$ : + +$$ +\begin{array}{l} \mathsf {L} _ {0 - 1} ^ {\mathrm {a b s}} (h, x, x ^ {\prime}, y) \\ = \mathbb {1} _ {y \neq \operatorname {s i g n} \left(h \left(x ^ {\prime}\right) - h (x)\right)} \mathbb {1} _ {\| x - x ^ {\prime} \| > \gamma} + c \mathbb {1} _ {\| x - x ^ {\prime} \| \leq \gamma}, \tag {3} \\ \end{array} +$$ + +where $\gamma$ is a given threshold value. For $\gamma = 0$ , $\mathsf{L}_{0-1}^{\mathrm{abs}}$ reduces to the pairwise misranking loss $\mathsf{L}_{0-1}$ without abstention. Let $p, q \in [1, +\infty]$ be conjugate numbers, that is $\frac{1}{p} + \frac{1}{q} = 1$ . Without loss of generality, we consider $\mathcal{X} = B_p^d(1)$ and $\|\cdot\|$ in (3) to be the $\ell_p$ norm. The corresponding conjugate $\ell_q$ norm is adopted in the hypothesis sets $\mathcal{H}_{\mathrm{lin}}$ and $\mathcal{H}_{\mathrm{NN}}$ . In the following, we will prove $\mathcal{H}$ -consistency bounds for $\mathsf{L}_{\Phi}$ when using as an auxiliary function $\Phi$ the hinge-loss, the $\rho$ -margin loss, the exponential loss, the logistic loss, the squared hinge-loss or the sigmoid loss with respect to + +Table 2: ${\mathcal{H}}_{\text{lin }}$ -consistency upper bounds for general pairwise abstention. + +
| Loss function | $\mathcal{H}_{\mathrm{lin}}$-consistency upper bound |
| --- | --- |
| $\mathsf{L}_{\Phi_{\mathrm{hinge}}}$ | $\frac{\mathcal{R}_{\mathsf{L}_{\Phi_{\mathrm{hinge}}}}(h) - \mathcal{R}_{\mathsf{L}_{\Phi_{\mathrm{hinge}}}}^{*}(\mathcal{H}_{\mathrm{lin}}) + \mathcal{M}_{\mathsf{L}_{\Phi_{\mathrm{hinge}}}}(\mathcal{H}_{\mathrm{lin}})}{\min\{W\gamma, 1\}} - \mathcal{M}_{\mathsf{L}_{0-1}^{\mathrm{abs}}}(\mathcal{H}_{\mathrm{lin}})$ |
| $\mathsf{L}_{\Phi_{\rho}}$ | $\frac{\rho\left(\mathcal{R}_{\mathsf{L}_{\Phi_{\rho}}}(h) - \mathcal{R}_{\mathsf{L}_{\Phi_{\rho}}}^{*}(\mathcal{H}_{\mathrm{lin}}) + \mathcal{M}_{\mathsf{L}_{\Phi_{\rho}}}(\mathcal{H}_{\mathrm{lin}})\right)}{\min\{W\gamma, \rho\}} - \mathcal{M}_{\mathsf{L}_{0-1}^{\mathrm{abs}}}(\mathcal{H}_{\mathrm{lin}})$ |
| $\mathsf{L}_{\Phi_{\exp}}$ | $\Gamma_{\exp}\left(\mathcal{R}_{\mathsf{L}_{\Phi_{\exp}}}(h) - \mathcal{R}_{\mathsf{L}_{\Phi_{\exp}}}^{*}(\mathcal{H}_{\mathrm{lin}}) + \mathcal{M}_{\mathsf{L}_{\Phi_{\exp}}}(\mathcal{H}_{\mathrm{lin}})\right) - \mathcal{M}_{\mathsf{L}_{0-1}^{\mathrm{abs}}}(\mathcal{H}_{\mathrm{lin}})$, where $\Gamma_{\exp}(t) = \max\left\{\sqrt{2t},\, 2\left(\frac{e^{2W\gamma} + 1}{e^{2W\gamma} - 1}\right)t\right\}$ |
| $\mathsf{L}_{\Phi_{\log}}$ | $\Gamma_{\log}\left(\mathcal{R}_{\mathsf{L}_{\Phi_{\log}}}(h) - \mathcal{R}_{\mathsf{L}_{\Phi_{\log}}}^{*}(\mathcal{H}_{\mathrm{lin}}) + \mathcal{M}_{\mathsf{L}_{\Phi_{\log}}}(\mathcal{H}_{\mathrm{lin}})\right) - \mathcal{M}_{\mathsf{L}_{0-1}^{\mathrm{abs}}}(\mathcal{H}_{\mathrm{lin}})$, where $\Gamma_{\log}(t) = \max\left\{\sqrt{2t},\, 2\left(\frac{e^{W\gamma} + 1}{e^{W\gamma} - 1}\right)t\right\}$ |
| $\mathsf{L}_{\Phi_{\mathrm{sq}}}$ | $\Gamma_{\mathrm{sq}}\left(\mathcal{R}_{\mathsf{L}_{\Phi_{\mathrm{sq}}}}(h) - \mathcal{R}_{\mathsf{L}_{\Phi_{\mathrm{sq}}}}^{*}(\mathcal{H}_{\mathrm{lin}}) + \mathcal{M}_{\mathsf{L}_{\Phi_{\mathrm{sq}}}}(\mathcal{H}_{\mathrm{lin}})\right) - \mathcal{M}_{\mathsf{L}_{0-1}^{\mathrm{abs}}}(\mathcal{H}_{\mathrm{lin}})$, where $\Gamma_{\mathrm{sq}}(t) = \max\left\{\sqrt{t},\, \frac{t}{2W\gamma} + \frac{W\gamma}{2}\right\}$ |
| $\mathsf{L}_{\Phi_{\mathrm{sig}}}$ | $\frac{\mathcal{R}_{\mathsf{L}_{\Phi_{\mathrm{sig}}}}(h) - \mathcal{R}_{\mathsf{L}_{\Phi_{\mathrm{sig}}}}^{*}(\mathcal{H}_{\mathrm{lin}}) + \mathcal{M}_{\mathsf{L}_{\Phi_{\mathrm{sig}}}}(\mathcal{H}_{\mathrm{lin}})}{\tanh(kW\gamma)} - \mathcal{M}_{\mathsf{L}_{0-1}^{\mathrm{abs}}}(\mathcal{H}_{\mathrm{lin}})$ |
+ +Table 3: ${\mathcal{H}}_{\mathrm{{NN}}}$ -consistency upper bounds for general pairwise abstention. + +
| Loss function | $\mathcal{H}_{\mathrm{NN}}$-consistency upper bound |
| --- | --- |
| $\mathsf{L}_{\Phi_{\mathrm{hinge}}}$ | $\frac{\mathcal{R}_{\mathsf{L}_{\Phi_{\mathrm{hinge}}}}(h) - \mathcal{R}_{\mathsf{L}_{\Phi_{\mathrm{hinge}}}}^{*}(\mathcal{H}_{\mathrm{NN}}) + \mathcal{M}_{\mathsf{L}_{\Phi_{\mathrm{hinge}}}}(\mathcal{H}_{\mathrm{NN}})}{\min\{\Lambda W\gamma, 1\}} - \mathcal{M}_{\mathsf{L}_{0-1}^{\mathrm{abs}}}(\mathcal{H}_{\mathrm{NN}})$ |
| $\mathsf{L}_{\Phi_{\rho}}$ | $\frac{\rho\left(\mathcal{R}_{\mathsf{L}_{\Phi_{\rho}}}(h) - \mathcal{R}_{\mathsf{L}_{\Phi_{\rho}}}^{*}(\mathcal{H}_{\mathrm{NN}}) + \mathcal{M}_{\mathsf{L}_{\Phi_{\rho}}}(\mathcal{H}_{\mathrm{NN}})\right)}{\min\{\Lambda W\gamma, \rho\}} - \mathcal{M}_{\mathsf{L}_{0-1}^{\mathrm{abs}}}(\mathcal{H}_{\mathrm{NN}})$ |
| $\mathsf{L}_{\Phi_{\exp}}$ | $\Gamma_{\exp}\left(\mathcal{R}_{\mathsf{L}_{\Phi_{\exp}}}(h) - \mathcal{R}_{\mathsf{L}_{\Phi_{\exp}}}^{*}(\mathcal{H}_{\mathrm{NN}}) + \mathcal{M}_{\mathsf{L}_{\Phi_{\exp}}}(\mathcal{H}_{\mathrm{NN}})\right) - \mathcal{M}_{\mathsf{L}_{0-1}^{\mathrm{abs}}}(\mathcal{H}_{\mathrm{NN}})$, where $\Gamma_{\exp}(t) = \max\left\{\sqrt{2t},\, 2\left(\frac{e^{2\Lambda W\gamma} + 1}{e^{2\Lambda W\gamma} - 1}\right)t\right\}$ |
| $\mathsf{L}_{\Phi_{\log}}$ | $\Gamma_{\log}\left(\mathcal{R}_{\mathsf{L}_{\Phi_{\log}}}(h) - \mathcal{R}_{\mathsf{L}_{\Phi_{\log}}}^{*}(\mathcal{H}_{\mathrm{NN}}) + \mathcal{M}_{\mathsf{L}_{\Phi_{\log}}}(\mathcal{H}_{\mathrm{NN}})\right) - \mathcal{M}_{\mathsf{L}_{0-1}^{\mathrm{abs}}}(\mathcal{H}_{\mathrm{NN}})$, where $\Gamma_{\log}(t) = \max\left\{\sqrt{2t},\, 2\left(\frac{e^{\Lambda W\gamma} + 1}{e^{\Lambda W\gamma} - 1}\right)t\right\}$ |
| $\mathsf{L}_{\Phi_{\mathrm{sq}}}$ | $\Gamma_{\mathrm{sq}}\left(\mathcal{R}_{\mathsf{L}_{\Phi_{\mathrm{sq}}}}(h) - \mathcal{R}_{\mathsf{L}_{\Phi_{\mathrm{sq}}}}^{*}(\mathcal{H}_{\mathrm{NN}}) + \mathcal{M}_{\mathsf{L}_{\Phi_{\mathrm{sq}}}}(\mathcal{H}_{\mathrm{NN}})\right) - \mathcal{M}_{\mathsf{L}_{0-1}^{\mathrm{abs}}}(\mathcal{H}_{\mathrm{NN}})$, where $\Gamma_{\mathrm{sq}}(t) = \max\left\{\sqrt{t},\, \frac{t}{2\Lambda W\gamma} + \frac{\Lambda W\gamma}{2}\right\}$ |
| $\mathsf{L}_{\Phi_{\mathrm{sig}}}$ | $\frac{\mathcal{R}_{\mathsf{L}_{\Phi_{\mathrm{sig}}}}(h) - \mathcal{R}_{\mathsf{L}_{\Phi_{\mathrm{sig}}}}^{*}(\mathcal{H}_{\mathrm{NN}}) + \mathcal{M}_{\mathsf{L}_{\Phi_{\mathrm{sig}}}}(\mathcal{H}_{\mathrm{NN}})}{\tanh(k\Lambda W\gamma)} - \mathcal{M}_{\mathsf{L}_{0-1}^{\mathrm{abs}}}(\mathcal{H}_{\mathrm{NN}})$ |
+ +$\mathsf{L}_{0 - 1}^{\mathrm{abs}}$ , in the case of the linear hypothesis set $\mathcal{H}_{\mathrm{lin}}$ or that of one-hidden-layer ReLU networks $\mathcal{H}_{\mathrm{NN}}$ + +# 3.1. Linear Hypotheses + +Table 2 supplies the $\mathcal{H}_{\mathrm{lin}}$ -consistency upper bounds for $L_{\Phi}$ when using as $\Phi$ the auxiliary functions in Table 1. The bounds of Table 2 depend directly on the threshold value $\gamma$ , the parameter $W$ in the linear models and parameters of the loss function (e.g., $k$ in sigmoid loss). + +As an example, when using as $\Phi$ the exponential loss function, modulo the minimizability gaps (which are zero when the best-in-class error coincides with the Bayes error or can be small in some other cases), the bound implies that if the surrogate estimation loss $\mathcal{R}_{\mathsf{L}_{\Phi_{\mathrm{exp}}}}(h) - \mathcal{R}_{\mathsf{L}_{\Phi_{\mathrm{exp}}}}^{*}(\mathcal{H}_{\mathrm{lin}})$ is reduced to $\epsilon$ , then, the target estimation loss $\mathcal{R}_{\mathsf{L}_{0-1}^{\mathrm{abs}}}^*(h) - \mathcal{R}_{\mathsf{L}_{0-1}^{\mathrm{abs}}}^{*}(\mathcal{H}_{\mathrm{lin}})$ is upper bounded by $\Gamma_{\mathrm{exp}}(\epsilon)$ . For sufficiently small values of $\epsilon$ , the dependence of $\Gamma_{\mathrm{exp}}$ on $\epsilon$ exhibits a square root relationship. However, if this is not the case, the dependence becomes linear, subject to a constant factor depending on the threshold value $\gamma$ and the parameter $W$ in the linear models. + +The proofs consist of analyzing calibration gaps of the target loss and that of each surrogate loss and seeking a tight lower bound of the surrogate calibration gap in terms of the target + +one. 
As an example, for $\Phi = \Phi_{\mathrm{exp}}$ we have the tight lower bound $\Delta \mathcal{C}_{\mathsf{L}_{\Phi_{\mathrm{exp}}},\mathcal{H}_{\mathrm{lin}}}(h,x,x^{\prime})\geq \Delta \mathcal{C}_{\mathsf{L}_{\Phi_{\mathrm{exp}}},\mathcal{H}_{\mathrm{lin}}}(h_0,x,x^{\prime}) =$ $\Psi_{\mathrm{exp}}\Bigl (\Delta \mathcal{C}_{\mathsf{L}_{0 - 1}^{\mathrm{abs}},\mathcal{H}_{\mathrm{lin}}} (h,x,x^{\prime})\Bigr)$ , where $h_0$ can be the null hypothesis when $\Delta \mathcal{C}_{\mathsf{L}_{0 - 1}^{\mathrm{abs}},\mathcal{H}_{\mathrm{lin}}} (h,x,x^{\prime})\neq 0$ and $\Psi_{\mathrm{exp}}$ is an increasing and piecewise convex function on [0,1] defined by $\Psi_{\mathrm{exp}}(t) = \left\{ \begin{array}{ll}1 - \sqrt{1 - t^2}, & t\leq \frac{e^{2W\gamma} - 1}{e^{2W\gamma} + 1}\\ 1 - \frac{t + 1}{2} e^{-W\gamma} - \frac{1 - t}{2} e^{W\gamma}, & t > \frac{e^{2W\gamma} - 1}{e^{2W\gamma} + 1} \end{array} \right.$ The detailed derivation and the expression of the corresponding minimizability gaps are included in Appendix K.1. + +# 3.2. One-hidden-layer ReLU Neural Networks + +Table 3 gives $\mathcal{H}_{\mathrm{NN}}$ -consistency upper bounds for $L_{\Phi}$ when using as $\Phi$ the auxiliary functions in Table 1. Different from the bounds in the linear case, all the bounds in Table 3 not only depend on $W$ , but also depend on $\Lambda$ , which is a parameter appearing in $\mathcal{H}_{\mathrm{NN}}$ . The proof is similar to that of the linear case. The detailed derivation and the expression of the corresponding minimizability gaps are given in Appendix K.2. 
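A sketch (ours, with illustrative parameter choices) of a one-hidden-layer ReLU hypothesis with the norm constraints behind the bounds of Table 3, $\|u\|_1 \leq \Lambda$ for the output layer and $\|w_j\|_q \leq W$ for the input layer (taking $p = q = 2$):

```python
import numpy as np

# h(x) = sum_j u_j (w_j . x + b_j)_+ with ||u||_1 <= Lambda, ||w_j||_2 <= W.
rng = np.random.default_rng(0)
n, d = 4, 3
Lambda, W = 1.0, 1.0

u = rng.normal(size=n)
u *= Lambda / np.abs(u).sum()                              # ||u||_1 = Lambda
Wmat = rng.normal(size=(n, d))
Wmat *= W / np.linalg.norm(Wmat, axis=1, keepdims=True)    # ||w_j||_2 = W
b = rng.normal(size=n)

def h(x):
    return float(u @ np.maximum(Wmat @ np.asarray(x) + b, 0.0))

# Such a network is (Lambda * W)-Lipschitz, hence equicontinuous, which is
# why the abstention terms in Table 3 all involve the product Lambda*W*gamma.
x, x_prime = rng.normal(size=d), rng.normal(size=d)
assert abs(h(x) - h(x_prime)) <= Lambda * W * np.linalg.norm(x - x_prime) + 1e-9
```

The same Lipschitz computation appears in the verification that $\mathcal{H}_{\mathrm{NN}}$ satisfies the equicontinuity assumption of Theorem 2.1.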
+ +

As with the linear case, taking the exponential loss function $\Phi_{\mathrm{exp}}$ as an example, modulo the minimizability gaps, the bound implies that if the surrogate estimation loss $\mathcal{R}_{\mathsf{L}_{\Phi_{\mathrm{exp}}}}(h) - \mathcal{R}_{\mathsf{L}_{\Phi_{\mathrm{exp}}}}^{*}(\mathcal{H}_{\mathrm{NN}})$ is reduced to $\epsilon$ , then the target estimation loss $\mathcal{R}_{\mathsf{L}_{0-1}^{\mathrm{abs}}}(h) - \mathcal{R}_{\mathsf{L}_{0-1}^{\mathrm{abs}}}^{*}(\mathcal{H}_{\mathrm{NN}})$ is upper bounded by $\Gamma_{\mathrm{exp}}(\epsilon)$ . For sufficiently small values of $\epsilon$ , the dependence of $\Gamma_{\mathrm{exp}}$ on $\epsilon$ exhibits a square root relationship. However, if this is not the case, the dependence becomes linear, subject to a constant factor depending on the threshold value $\gamma$ , the parameter $W$ , and an additional parameter $\Lambda$ in the one-hidden-layer ReLU networks. + +# 4. Bipartite Ranking + +In this section, we analyze the properties of surrogate loss functions in the bipartite ranking setting. We first introduce the relevant definitions and concepts. Next, as in the general pairwise ranking setting, we present general negative results, as well as positive results for the family of all measurable functions. + +# 4.1. Preliminaries + +In the bipartite setting, each point $x$ admits a label $y \in \{-1, +1\}$ . The bipartite misranking loss $\widetilde{\mathsf{L}}_{0-1}$ is defined for all $h$ in $\mathcal{H}$ and $(x, y), (x', y')$ in $\mathcal{X} \times \mathcal{Y}$ by

$$
\begin{array}{l} \widetilde {\mathsf {L}} _ {0 - 1} \big (h, x, x ^ {\prime}, y, y ^ {\prime} \big) = \\ \mathbb {1} _ {(y - y ^ {\prime}) (h (x) - h \left(x ^ {\prime}\right)) < 0} + \frac {1}{2} \mathbb {1} _ {(h (x) = h \left(x ^ {\prime}\right)) \wedge (y \neq y ^ {\prime})}.
\tag {4} \\ \end{array}
$$

Optimizing the bipartite misranking loss $\widetilde{\mathsf{L}}_{0-1}$ is intractable for most hypothesis sets, and bipartite ranking algorithms rely instead on a surrogate loss $\widetilde{\mathsf{L}}$ . We will analyze the properties of such surrogate losses. + +Let $\mathcal{D}$ be a distribution over $\mathcal{X} \times \mathcal{Y}$ . We denote by $\eta(x) = \mathcal{D}(Y = +1 \mid X = x)$ the conditional probability of $Y = +1$ given $X = x$ . We will use a definition and notation for the expected $\widetilde{\mathsf{L}}$ -loss of $h \in \mathcal{H}$ , its infimum, and the minimizability gaps similar to those used in the general pairwise misranking setting:

$$
\begin{array}{l} \mathcal {R} _ {\widetilde {\mathsf {L}}} (h) = \underset {((x, y), (x ^ {\prime}, y ^ {\prime})) \sim \mathcal {D} ^ {2}} {\mathbb {E}} \left[ \widetilde {\mathsf {L}} \left(h, x, x ^ {\prime}, y, y ^ {\prime}\right) \right] \quad \mathcal {R} _ {\widetilde {\mathsf {L}}} ^ {*} (\mathcal {H}) = \inf _ {h \in \mathcal {H}} \mathcal {R} _ {\widetilde {\mathsf {L}}} (h) \\ \mathcal {M} _ {\widetilde {\mathsf {L}}} (\mathcal {H}) \\ = \mathcal {R} _ {\widetilde {\mathsf {L}}} ^ {*} (\mathcal {H}) - \underset {(x, x ^ {\prime})} {\mathbb {E}} \left[ \inf _ {h \in \mathcal {H}} \underset {(y, y ^ {\prime})} {\mathbb {E}} \left[ \widetilde {\mathsf {L}} \left(h, x, x ^ {\prime}, y, y ^ {\prime}\right) \mid (x, x ^ {\prime}) \right] \right]. \\ \end{array}
$$

We say that a hypothesis set is regular for bipartite ranking if, for any $x \neq x' \in \mathcal{X}$ , there exists $h_+ \in \mathcal{H}$ such that $h_+(x) < h_+(x')$ and $h_- \in \mathcal{H}$ such that $h_-(x) > h_-(x')$ . Hypothesis sets commonly used in practice all admit this property. + +# 4.2. Negative Results + +Here, as in the general pairwise misranking scenario, we present a negative result for a broad family of surrogate losses and hypothesis sets in the bipartite setting.
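As a concrete reference point, the bipartite misranking loss of Eq. (4) can be sketched directly; the 1-d score function `h` below is a hypothetical example.

```python
# Eq. (4): a full misranking costs 1, and a tie between differently
# labeled points costs 1/2; everything else costs 0.
def bipartite_misranking_loss(h, x, x_prime, y, y_prime):
    s = h(x) - h(x_prime)
    if (y - y_prime) * s < 0:
        return 1.0
    if s == 0 and y != y_prime:
        return 0.5
    return 0.0

h = lambda x: x  # hypothetical 1-d score function
print(bipartite_misranking_loss(h, 0.0, 1.0, +1, -1))  # misranked -> 1.0
print(bipartite_misranking_loss(h, 1.0, 1.0, +1, -1))  # tie -> 0.5
```

Averaging this loss over pairs drawn from $\mathcal{D}$ recovers $\mathcal{R}_{\widetilde{\mathsf{L}}_{0-1}}(h)$, which is one minus the AUC up to the tie convention.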
+ +

The bipartite ranking surrogate losses widely used in practice admit the following form:

$$
\widetilde {\mathsf {L}} _ {\Phi} \left(h, x, x ^ {\prime}, y, y ^ {\prime}\right) = \Phi \left(\frac {(y - y ^ {\prime}) (h (x) - h \left(x ^ {\prime}\right))}{2}\right) \mathbb {1} _ {y \neq y ^ {\prime}}, \tag {5}
$$

where $\Phi$ is a non-increasing function that is continuous at 0 and upper bounds $u\mapsto \mathbb{1}_{u\leq 0}$ over $\mathbb{R}$ . As with general pairwise ranking, we show that these surrogate losses do not benefit from $\mathcal{H}$ -consistency bounds when $\mathcal{H}$ is an equicontinuous family. + +Theorem 4.1 (Negative results for bipartite ranking). Assume that $\mathcal{X}$ contains an interior point $x_0$ and that $\mathcal{H}$ is regular for bipartite ranking, contains 0 and is equicontinuous at $x_0$ . If for some function $f$ that is non-decreasing and continuous at 0, the following bound holds for all $h \in \mathcal{H}$ and any distribution,

$$
\mathcal {R} _ {\widetilde {\mathsf {L}} _ {0 - 1}} (h) - \mathcal {R} _ {\widetilde {\mathsf {L}} _ {0 - 1}} ^ {*} (\mathcal {H}) \leq f \left(\mathcal {R} _ {\widetilde {\mathsf {L}} _ {\Phi}} (h) - \mathcal {R} _ {\widetilde {\mathsf {L}} _ {\Phi}} ^ {*} (\mathcal {H})\right),
$$

then, $f(t) \geq \frac{1}{2}$ for any $t \geq 0$ . + +As with Theorem 2.1, Theorem 4.1 shows that in the bipartite ranking setting, any $\mathcal{H}$ -consistency bound with an equicontinuous hypothesis set is vacuous, assuming a non-decreasing function $f$ continuous at zero. + +The proof is given in Appendix I. It is straightforward to verify that the proof holds in the deterministic case where $\eta(x)$ equals 0 or 1 for any $x \in \mathcal{X}$ , which yields the following corollary. + +Corollary 4.2 (Negative results in the bipartite deterministic case).
In the bipartite deterministic case where $\eta(x)$ equals 0 or 1 for any $x \in \mathcal{X}$ , the same negative result as in Theorem 4.1 holds. + +# 4.3. Positive Results: $\mathcal{H}_{\mathrm{all}}$ -Consistency Bounds + +In this section, we present a series of positive results by proving $\mathcal{H}_{\mathrm{all}}$ -consistency bounds for $\widetilde{\mathsf{L}}_{0-1}$ and the surrogate loss $\widetilde{\mathsf{L}}$ when using as an auxiliary function $\Phi$ the hinge-loss, the $\rho$ -margin loss, the exponential loss, the logistic loss, the squared hinge-loss and the sigmoid loss, as summarized in Table 9 of Appendix J, where the corresponding proofs are also provided. The expressions of the corresponding minimizability gaps are summarized in Table 1. In bipartite ranking with $\mathcal{H} = \mathcal{H}_{\mathrm{all}}$ , the minimizability gaps are zero for $\Phi_{\rho}, \Phi_{\exp}, \Phi_{\log}, \Phi_{\mathrm{sig}}$ , while they are non-zero for $\Phi_{\mathrm{hinge}}$ and $\Phi_{\mathrm{sq}}$ in general. + +# 5. Bipartite Ranking with Abstention + +As with the general pairwise ranking case, the negative results shown in Section 4.2 motivate us to study bipartite ranking with abstention, where the learner can abstain from making a prediction on a pair $(x,x^{\prime})$ with $x$ and $x^{\prime}$ relatively + +Table 4: ${\mathcal{H}}_{\text{lin}}$ -consistency upper bounds for bipartite abstention loss.
| Loss function | $\mathcal{H}_{\mathrm{lin}}$-consistency upper bound |
| --- | --- |
| $\widetilde{\mathsf{L}}_{\Phi_{\mathrm{hinge}}}$ | $\frac{\mathcal{R}_{\widetilde{\mathsf{L}}_{\Phi_{\mathrm{hinge}}}}(h) - \mathcal{R}_{\widetilde{\mathsf{L}}_{\Phi_{\mathrm{hinge}}}}^{*}(\mathcal{H}_{\mathrm{lin}}) + \mathcal{M}_{\widetilde{\mathsf{L}}_{\Phi_{\mathrm{hinge}}}}(\mathcal{H}_{\mathrm{lin}})}{\min\{W\gamma, 1\}} - \mathcal{M}_{\widetilde{\mathsf{L}}_{0-1}^{\mathrm{abs}}}(\mathcal{H}_{\mathrm{lin}})$ |
| $\widetilde{\mathsf{L}}_{\Phi_{\rho}}$ | $\frac{\rho\left(\mathcal{R}_{\widetilde{\mathsf{L}}_{\Phi_{\rho}}}(h) - \mathcal{R}_{\widetilde{\mathsf{L}}_{\Phi_{\rho}}}^{*}(\mathcal{H}_{\mathrm{lin}}) + \mathcal{M}_{\widetilde{\mathsf{L}}_{\Phi_{\rho}}}(\mathcal{H}_{\mathrm{lin}})\right)}{\min\{W\gamma, \rho\}} - \mathcal{M}_{\widetilde{\mathsf{L}}_{0-1}^{\mathrm{abs}}}(\mathcal{H}_{\mathrm{lin}})$ |
| $\widetilde{\mathsf{L}}_{\Phi_{\exp}}$ | $\Gamma_{\exp}\left(\mathcal{R}_{\widetilde{\mathsf{L}}_{\Phi_{\exp}}}(h) - \mathcal{R}_{\widetilde{\mathsf{L}}_{\Phi_{\exp}}}^{*}(\mathcal{H}_{\mathrm{lin}}) + \mathcal{M}_{\widetilde{\mathsf{L}}_{\Phi_{\exp}}}(\mathcal{H}_{\mathrm{lin}})\right) - \mathcal{M}_{\widetilde{\mathsf{L}}_{0-1}^{\mathrm{abs}}}(\mathcal{H}_{\mathrm{lin}})$, where $\Gamma_{\exp}(t) = \max\left\{\sqrt{t},\, \left(\frac{e^{2W\gamma} + 1}{e^{2W\gamma} - 1}\right)t\right\}$ |
| $\widetilde{\mathsf{L}}_{\Phi_{\log}}$ | $\Gamma_{\log}\left(\mathcal{R}_{\widetilde{\mathsf{L}}_{\Phi_{\log}}}(h) - \mathcal{R}_{\widetilde{\mathsf{L}}_{\Phi_{\log}}}^{*}(\mathcal{H}_{\mathrm{lin}}) + \mathcal{M}_{\widetilde{\mathsf{L}}_{\Phi_{\log}}}(\mathcal{H}_{\mathrm{lin}})\right) - \mathcal{M}_{\widetilde{\mathsf{L}}_{0-1}^{\mathrm{abs}}}(\mathcal{H}_{\mathrm{lin}})$, where $\Gamma_{\log}(t) = \max\left\{\sqrt{t},\, \left(\frac{e^{W\gamma} + 1}{e^{W\gamma} - 1}\right)t\right\}$ |
| $\widetilde{\mathsf{L}}_{\Phi_{\mathrm{sq}}}$ | $\Gamma_{\mathrm{sq}}\left(\mathcal{R}_{\widetilde{\mathsf{L}}_{\Phi_{\mathrm{sq}}}}(h) - \mathcal{R}_{\widetilde{\mathsf{L}}_{\Phi_{\mathrm{sq}}}}^{*}(\mathcal{H}_{\mathrm{lin}}) + \mathcal{M}_{\widetilde{\mathsf{L}}_{\Phi_{\mathrm{sq}}}}(\mathcal{H}_{\mathrm{lin}})\right) - \mathcal{M}_{\widetilde{\mathsf{L}}_{0-1}^{\mathrm{abs}}}(\mathcal{H}_{\mathrm{lin}})$, where $\Gamma_{\mathrm{sq}}(t) = \max\left\{\sqrt{t},\, \frac{t}{2W\gamma} + \frac{W\gamma}{2}\right\}$ |
| $\widetilde{\mathsf{L}}_{\Phi_{\mathrm{sig}}}$ | $\frac{\mathcal{R}_{\widetilde{\mathsf{L}}_{\Phi_{\mathrm{sig}}}}(h) - \mathcal{R}_{\widetilde{\mathsf{L}}_{\Phi_{\mathrm{sig}}}}^{*}(\mathcal{H}_{\mathrm{lin}}) + \mathcal{M}_{\widetilde{\mathsf{L}}_{\Phi_{\mathrm{sig}}}}(\mathcal{H}_{\mathrm{lin}})}{\tanh(kW\gamma)} - \mathcal{M}_{\widetilde{\mathsf{L}}_{0-1}^{\mathrm{abs}}}(\mathcal{H}_{\mathrm{lin}})$ |
+ +Table 5: ${\mathcal{H}}_{\mathrm{{NN}}}$ -consistency upper bounds for bipartite abstention loss. + +
| Loss function | $\mathcal{H}_{\mathrm{NN}}$-consistency upper bound |
| --- | --- |
| $\widetilde{\mathsf{L}}_{\Phi_{\mathrm{hinge}}}$ | $\frac{\mathcal{R}_{\widetilde{\mathsf{L}}_{\Phi_{\mathrm{hinge}}}}(h) - \mathcal{R}_{\widetilde{\mathsf{L}}_{\Phi_{\mathrm{hinge}}}}^{*}(\mathcal{H}_{\mathrm{NN}}) + \mathcal{M}_{\widetilde{\mathsf{L}}_{\Phi_{\mathrm{hinge}}}}(\mathcal{H}_{\mathrm{NN}})}{\min\{\Lambda W\gamma, 1\}} - \mathcal{M}_{\widetilde{\mathsf{L}}_{0-1}^{\mathrm{abs}}}(\mathcal{H}_{\mathrm{NN}})$ |
| $\widetilde{\mathsf{L}}_{\Phi_{\rho}}$ | $\frac{\rho\left(\mathcal{R}_{\widetilde{\mathsf{L}}_{\Phi_{\rho}}}(h) - \mathcal{R}_{\widetilde{\mathsf{L}}_{\Phi_{\rho}}}^{*}(\mathcal{H}_{\mathrm{NN}}) + \mathcal{M}_{\widetilde{\mathsf{L}}_{\Phi_{\rho}}}(\mathcal{H}_{\mathrm{NN}})\right)}{\min\{\Lambda W\gamma, \rho\}} - \mathcal{M}_{\widetilde{\mathsf{L}}_{0-1}^{\mathrm{abs}}}(\mathcal{H}_{\mathrm{NN}})$ |
| $\widetilde{\mathsf{L}}_{\Phi_{\exp}}$ | $\Gamma_{\exp}\left(\mathcal{R}_{\widetilde{\mathsf{L}}_{\Phi_{\exp}}}(h) - \mathcal{R}_{\widetilde{\mathsf{L}}_{\Phi_{\exp}}}^{*}(\mathcal{H}_{\mathrm{NN}}) + \mathcal{M}_{\widetilde{\mathsf{L}}_{\Phi_{\exp}}}(\mathcal{H}_{\mathrm{NN}})\right) - \mathcal{M}_{\widetilde{\mathsf{L}}_{0-1}^{\mathrm{abs}}}(\mathcal{H}_{\mathrm{NN}})$, where $\Gamma_{\exp}(t) = \max\left\{\sqrt{t},\, \left(\frac{e^{2\Lambda W\gamma} + 1}{e^{2\Lambda W\gamma} - 1}\right)t\right\}$ |
| $\widetilde{\mathsf{L}}_{\Phi_{\log}}$ | $\Gamma_{\log}\left(\mathcal{R}_{\widetilde{\mathsf{L}}_{\Phi_{\log}}}(h) - \mathcal{R}_{\widetilde{\mathsf{L}}_{\Phi_{\log}}}^{*}(\mathcal{H}_{\mathrm{NN}}) + \mathcal{M}_{\widetilde{\mathsf{L}}_{\Phi_{\log}}}(\mathcal{H}_{\mathrm{NN}})\right) - \mathcal{M}_{\widetilde{\mathsf{L}}_{0-1}^{\mathrm{abs}}}(\mathcal{H}_{\mathrm{NN}})$, where $\Gamma_{\log}(t) = \max\left\{\sqrt{t},\, \left(\frac{e^{\Lambda W\gamma} + 1}{e^{\Lambda W\gamma} - 1}\right)t\right\}$ |
| $\widetilde{\mathsf{L}}_{\Phi_{\mathrm{sq}}}$ | $\Gamma_{\mathrm{sq}}\left(\mathcal{R}_{\widetilde{\mathsf{L}}_{\Phi_{\mathrm{sq}}}}(h) - \mathcal{R}_{\widetilde{\mathsf{L}}_{\Phi_{\mathrm{sq}}}}^{*}(\mathcal{H}_{\mathrm{NN}}) + \mathcal{M}_{\widetilde{\mathsf{L}}_{\Phi_{\mathrm{sq}}}}(\mathcal{H}_{\mathrm{NN}})\right) - \mathcal{M}_{\widetilde{\mathsf{L}}_{0-1}^{\mathrm{abs}}}(\mathcal{H}_{\mathrm{NN}})$, where $\Gamma_{\mathrm{sq}}(t) = \max\left\{\sqrt{t},\, \frac{t}{2\Lambda W\gamma} + \frac{\Lambda W\gamma}{2}\right\}$ |
| $\widetilde{\mathsf{L}}_{\Phi_{\mathrm{sig}}}$ | $\frac{\mathcal{R}_{\widetilde{\mathsf{L}}_{\Phi_{\mathrm{sig}}}}(h) - \mathcal{R}_{\widetilde{\mathsf{L}}_{\Phi_{\mathrm{sig}}}}^{*}(\mathcal{H}_{\mathrm{NN}}) + \mathcal{M}_{\widetilde{\mathsf{L}}_{\Phi_{\mathrm{sig}}}}(\mathcal{H}_{\mathrm{NN}})}{\tanh(k\Lambda W\gamma)} - \mathcal{M}_{\widetilde{\mathsf{L}}_{0-1}^{\mathrm{abs}}}(\mathcal{H}_{\mathrm{NN}})$ |
+ +

close. The bipartite abstention loss is defined as follows for any $h \in \mathcal{H}$ and $(x, y), (x', y') \in \mathcal{X} \times \mathcal{Y}$ :

$$
\begin{array}{l} \widetilde {\mathsf {L}} _ {0 - 1} ^ {\mathrm {a b s}} \left(h, x, x ^ {\prime}, y, y ^ {\prime}\right) \\ = \widetilde {\mathsf {L}} _ {0 - 1} \left(h, x, x ^ {\prime}, y, y ^ {\prime}\right) \mathbb {1} _ {\| x - x ^ {\prime} \| > \gamma} + c \mathbb {1} _ {\| x - x ^ {\prime} \| \leq \gamma}, \tag {6} \\ \end{array}
$$

where $\gamma$ is a given threshold value. When $\gamma = 0$ , $\widetilde{\mathsf{L}}_{0 - 1}^{\mathrm{abs}}$ reduces to the bipartite misranking loss $\widetilde{\mathsf{L}}_{0 - 1}$ without abstention. + +# 5.1. Linear Hypotheses + +Table 4 presents a series of $\mathcal{H}_{\mathrm{lin}}$ -consistency upper bounds for $\widetilde{\mathsf{L}}_{\Phi}$ when using as $\Phi$ the auxiliary functions in Table 1. The bounds of Table 4 depend directly on the threshold value $\gamma$ , the parameter $W$ in the linear models, and parameters of the loss function (e.g., $k$ in the sigmoid loss). + +As an example, when adopting the exponential loss function as $\Phi$ , modulo the minimizability gaps (which are zero when the best-in-class error coincides with the Bayes error or can be small in some other cases), the bound implies that if the surrogate estimation loss $\mathcal{R}_{\widetilde{\mathsf{L}}_{\Phi_{\mathrm{exp}}}}(h) - \mathcal{R}_{\widetilde{\mathsf{L}}_{\Phi_{\mathrm{exp}}}}^{*}(\mathcal{H}_{\mathrm{lin}})$ is reduced to $\epsilon$ , then the target estimation loss $\mathcal{R}_{\widetilde{\mathsf{L}}_{0-1}^{\mathrm{abs}}}(h) - \mathcal{R}_{\widetilde{\mathsf{L}}_{0 - 1}^{\mathrm{abs}}}^{*}(\mathcal{H}_{\mathrm{lin}})$ is upper bounded by $\Gamma_{\mathrm{exp}}(\epsilon)$ . For sufficiently small values of $\epsilon$ , the dependence of $\Gamma_{\mathrm{exp}}$ on $\epsilon$ exhibits a square root relationship.
However, if this is not the case, the dependence becomes linear, subject to a constant factor depending on the threshold value $\gamma$ and the parameter $W$ in the linear models. + +As with the general pairwise ranking setting, the proofs consist of analyzing the calibration gaps of the target loss and of each surrogate loss, and seeking a tight lower bound of the surrogate calibration gap in terms of the target one. Additionally, the bipartite ranking setting introduces an added layer of complexity, as $x$ and $x'$ in a pair have independent conditional distributions $\eta(x)$ and $\eta(x')$ , which results in a more intricate calibration gap that is harder to address. + +As an example, for the exponential loss function $\Phi = \Phi_{\mathrm{exp}}$ , we have the lower bound $\Delta \mathcal{C}_{\widetilde{\mathsf{L}}_{\Phi_{\mathrm{exp}}},\mathcal{H}_{\mathrm{lin}}}(h,x,x^{\prime})\geq$ $\Psi_{\mathrm{exp}}\Bigl (\Delta \mathcal{C}_{\widetilde{\mathsf{L}}_{0 - 1}^{\mathrm{abs}},\mathcal{H}_{\mathrm{lin}}}(h,x,x^{\prime})\Bigr)$ , where $\Psi_{\mathrm{exp}}$ is an increasing and piecewise convex function on $[0,2]$ defined by $\Psi_{\mathrm{exp}}(t) = \min \left\{t^{2},\left(\frac{e^{2W\gamma} + 1}{e^{2W\gamma} - 1}\right)t\right\}$ . The detailed derivation and the expression of the corresponding minimizability gaps are included in Appendix L.1. + +Table 6: General pairwise abstention loss for the RankBoost loss on CIFAR-10; mean ± standard deviation over three runs for various $\gamma$ and cost $c$ .
| Cost $c$ | $\gamma = 0$ | $\gamma = 0.3$ | $\gamma = 0.5$ | $\gamma = 0.7$ | $\gamma = 0.9$ |
| --- | --- | --- | --- | --- | --- |
| 0.1 | 8.33% ± 0.15% | 8.33% ± 0.15% | 8.33% ± 0.15% | 8.25% ± 0.07% | 8.54% ± 0.07% |
| 0.3 | 8.33% ± 0.15% | 8.33% ± 0.15% | 8.35% ± 0.15% | 9.73% ± 0.11% | 20.41% ± 0.06% |
| 0.5 | 8.33% ± 0.15% | 8.33% ± 0.15% | 8.36% ± 0.14% | 11.20% ± 0.14% | 32.28% ± 0.07% |
+ +# 5.2. One-hidden-layer ReLU Neural Networks. + +Table 5 presents the $\mathcal{H}_{\mathrm{NN}}$ -consistency upper bounds for $\widetilde{\mathsf{L}}_{\Phi}$ when using as $\Phi$ the auxiliary functions in Table 1. Different from the bounds in the linear case, all the bounds in Table 5 not only depend on $W$ , but also depend on $\Lambda$ , a parameter in $\mathcal{H}_{\mathrm{NN}}$ . The proof is similar to that of the linear case. The detailed derivation and the expression of the corresponding minimizability gaps are given in Appendix L.2. + +As with the linear case, taking the exponential loss function $\Phi_{\mathrm{exp}}$ as an example, modulo the minimizability gaps, the bound implies that if the surrogate estimation loss $\mathcal{R}_{\widetilde{\mathsf{L}}_{\Phi_{\mathrm{exp}}}}(h) - \mathcal{R}_{\widetilde{\mathsf{L}}_{\Phi_{\mathrm{exp}}}}^{*}(\mathcal{H}_{\mathrm{NN}})$ is reduced to $\epsilon$ , then, the target estimation loss $\mathcal{R}_{\widetilde{\mathsf{L}}_{0-1}^{\mathrm{abs}}}^*(h) - \mathcal{R}_{\widetilde{\mathsf{L}}_{0-1}^{\mathrm{abs}}}^{*}(\mathcal{H}_{\mathrm{NN}})$ is upper bounded by $\Gamma_{\mathrm{exp}}(\epsilon)$ . For sufficiently small values of $\epsilon$ , the dependence of $\Gamma_{\mathrm{exp}}$ on $\epsilon$ exhibits a square root relationship. However, if this is not the case, the dependence becomes linear, subject to a constant factor depending on the threshold value $\gamma$ , the parameter $W$ , and an additional parameter $\Lambda$ in the one-hidden-layer ReLU networks. + +# 6. Experiments + +In this section, we provide empirical results for general pairwise ranking with abstention on the CIFAR-10 dataset (Krizhevsky, 2009). + +We used ResNet-34 with ReLU activations (He et al., 2016). Here, ResNet- $n$ denotes a residual network with $n$ convolutional layers. Standard data augmentations, 4-pixel padding with $32 \times 32$ random crops and random horizontal flips are applied for CIFAR-10. 
For training, we used Stochastic Gradient Descent (SGD) with Nesterov momentum (Nesterov, 1983). We set the batch size, weight decay, and initial learning rate to $1,024$ , $1 \times 10^{-4}$ , and $0.1$ , respectively. We adopted the cosine decay learning rate schedule (Loshchilov & Hutter, 2016) for a total of 200 epochs. The pairs $(x,x^{\prime},y)$ are randomly sampled from CIFAR-10 during training, with $y = \pm 1$ indicating whether $x$ is ranked above or below $x^{\prime}$ per the natural ordering of the labels of $x$ and $x^{\prime}$ . + +We evaluated the models based on their averaged pairwise abstention loss (3) with $\gamma$ selected from $\{0.0, 0.3, 0.5, 0.7, 0.9\}$ and the cost $c$ selected from $\{0.1, 0.3, 0.5\}$ . We randomly sampled 10,000 pairs $(x, x')$ from the test data for evaluation. The $\ell_{\infty}$ distance is adopted in the algorithm. We averaged losses over three runs and report the standard deviation as well. + +We used the surrogate loss (2) with $\Phi(t) = \exp(-t)$ the exponential loss, $\mathsf{L}_{\Phi_{\mathrm{exp}}}$ , which coincides with the loss function of RankBoost. Table 6 shows that when $\gamma$ is as small as 0.3, no abstention takes place and the abstention loss coincides with the standard misranking loss ( $\gamma = 0$ ) for any cost $c$ . As $\gamma$ increases, more pairs are abstained on. When using a minimal cost $c$ of 0.1 (as demonstrated in the first row of Table 6), abstaining on pairs with a relatively small distance ( $\gamma = 0.7$ ) results in a lower target abstention loss compared to the scenario without abstention ( $\gamma = 0$ ). Conversely, abstaining on pairs with larger distances ( $\gamma = 0.9$ ) results in a higher abstention loss. This can be attributed to the fact that the pairs rejected at $\gamma = 0.7$ had lower accuracy compared to those rejected at $\gamma = 0.9$ .
This empirically verifies that the surrogate loss $\mathsf{L}_{\Phi_{\mathrm{exp}}}$ performs poorly on pairs whose distance is relatively small, for equicontinuous hypotheses. When the cost $c$ is larger, the abstention loss, in general, increases with $\gamma$ , since the number of pairs rejected increases with $\gamma$ . + +Overall, the experiment shows that, in practice, for small $\gamma$ , abstention actually does not take place. Thus, the abstention loss coincides with the standard pairwise misranking loss in those cases, and the surrogate loss is consistent with respect to both of them. Our results also indicate that the surrogate loss $\mathsf{L}_{\Phi_{\mathrm{exp}}}$ , a commonly used loss function (e.g., in RankBoost), is not optimal for pairs with a relatively small distance. Instead, rejecting these pairs at a minimal cost proves to be a more effective strategy. + +# 7. Conclusion + +We presented a series of theoretical $\mathcal{H}$ -consistency guarantees for surrogate losses in pairwise misranking. Our proposed abstention methods are important when using common equicontinuous hypothesis sets in practice. It will be useful to explore alternative non-equicontinuous hypothesis sets that may be of practical use, and to further study the choice of the parameter $\gamma$ for abstention in practice. We have also initiated the study of randomized ranking solutions with theoretical guarantees without resorting to abstention. + +# Acknowledgments + +We thank the reviewers for their comments on the presentation. + +# References + +Agarwal, S. Surrogate regret bounds for bipartite ranking via strongly proper losses. The Journal of Machine Learning Research, 15(1):1653-1674, 2014. +Agarwal, S., Graepel, T., Herbrich, R., Har-Peled, S., Roth, D., and Jordan, M. I. Generalization bounds for the area under the ROC curve. Journal of Machine Learning Research, 6(4), 2005. +Ailon, N. and Mohri, M. An efficient reduction of ranking to classification.
In Conference on Learning Theory, 2008.
Ailon, N. and Mohri, M. Preference-based learning to rank. Machine Learning, 80(2-3):189-211, 2010.
Awasthi, P., Frank, N., Mao, A., Mohri, M., and Zhong, Y. Calibration and consistency of adversarial surrogate losses. In Advances in Neural Information Processing Systems, 2021a.
Awasthi, P., Mao, A., Mohri, M., and Zhong, Y. A finer calibration analysis for adversarial robustness. arXiv preprint arXiv:2105.01550, 2021b.
Awasthi, P., Mao, A., Mohri, M., and Zhong, Y. H-consistency bounds for surrogate loss minimizers. In International Conference on Machine Learning, 2022a.
Awasthi, P., Mao, A., Mohri, M., and Zhong, Y. Multi-class $\mathcal{H}$-consistency bounds. In Advances in Neural Information Processing Systems, 2022b.
Awasthi, P., Mao, A., Mohri, M., and Zhong, Y. DC-programming for neural network optimizations. Journal of Global Optimization, 2023a.
Awasthi, P., Mao, A., Mohri, M., and Zhong, Y. Theoretically grounded loss functions and algorithms for adversarial robustness. In International Conference on Artificial Intelligence and Statistics, pp. 10077-10094, 2023b.
Bartlett, P. L., Jordan, M. I., and McAuliffe, J. D. Convexity, classification, and risk bounds. Journal of the American Statistical Association, 101(473):138-156, 2006.
Berkson, J. Application of the logistic function to bio-assay. Journal of the American Statistical Association, 39:357-365, 1944.
Berkson, J. Why I prefer logits to probits. Biometrics, 7(4):327-339, 1951.
Buffoni, D., Calauzenes, C., Gallinari, P., and Usunier, N. Learning scoring functions with order-preserving losses and standardized supervision. In International Conference on Machine Learning, pp. 825-832, 2011.
Calauzenes, C., Usunier, N., and Gallinari, P. On the (non-)existence of convex, calibrated surrogate losses for ranking. In Advances in Neural Information Processing Systems, 2012.
Carlini, N. and Wagner, D.
Towards evaluating the robustness of neural networks. In IEEE Symposium on Security and Privacy (SP), pp. 39-57, 2017.
Clemençon, S., Lugosi, G., and Vayatis, N. Ranking and empirical minimization of U-statistics. The Annals of Statistics, 36(2):844-874, 2008.
Cohen, W. W., Schapire, R. E., and Singer, Y. Learning to order things. Advances in Neural Information Processing Systems, 10, 1997.
Cortes, C. and Mohri, M. AUC optimization vs. error rate minimization. Advances in Neural Information Processing Systems, 16, 2003.
Cossock, D. and Zhang, T. Statistical analysis of Bayes optimal subset ranking. IEEE Transactions on Information Theory, 54(11):5140-5154, 2008.
Crammer, K. and Singer, Y. On the algorithmic implementation of multiclass kernel-based vector machines. Journal of Machine Learning Research, 2(Dec):265-292, 2001.
Duchi, J. C., Mackey, L. W., and Jordan, M. I. On the consistency of ranking algorithms. In International Conference on Machine Learning, pp. 327-334, 2010.
Freund, Y., Iyer, R., Schapire, R. E., and Singer, Y. An efficient boosting algorithm for combining preferences. Journal of Machine Learning Research, 4(Nov):933-969, 2003.
Gao, W. and Zhou, Z.-H. On the consistency of multi-label learning. In Conference on Learning Theory, pp. 341-358, 2011.
Gao, W. and Zhou, Z.-H. On the consistency of AUC pairwise optimization. In International Joint Conference on Artificial Intelligence, 2015.
Gao, W., Jin, R., Zhu, S., and Zhou, Z.-H. One-pass AUC optimization. In International Conference on Machine Learning, pp. 906-914, 2013.
Ghosh, A., Kumar, H., and Sastry, P. S. Robust loss functions under label noise for deep neural networks. In Proceedings of the AAAI Conference on Artificial Intelligence, 2017.
Goodfellow, I. J., Shlens, J., and Szegedy, C. Explaining and harnessing adversarial examples. arXiv preprint arXiv:1412.6572, 2014.

Hanley, J. A. and McNeil, B. J.
The meaning and use of the area under a receiver operating characteristic (ROC) curve. Radiology, 1982.
He, K., Zhang, X., Ren, S., and Sun, J. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770-778, 2016.
Joachims, T. Optimizing search engines using clickthrough data. In Proceedings of the Eighth ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pp. 133-142, 2002.
Kotlowski, W., Dembczynski, K. J., and Huellermeier, E. Bipartite ranking through minimization of univariate loss. In International Conference on Machine Learning, pp. 1113-1120, 2011.
Krizhevsky, A. Learning multiple layers of features from tiny images. Technical report, Toronto University, 2009.
Kuznetsov, V., Mohri, M., and Syed, U. Multi-class deep boosting. In Advances in Neural Information Processing Systems, pp. 2501-2509, 2014.
Lan, Y., Guo, J., Cheng, X., and Liu, T.-Y. Statistical consistency of ranking methods in a rank-differentiable probability space. In Advances in Neural Information Processing Systems, 2012.
Lee, Y., Lin, Y., and Wahba, G. Multicategory support vector machines: Theory and application to the classification of microarray data and satellite radiance data. Journal of the American Statistical Association, 99(465):67-81, 2004.
Long, P. and Servedio, R. Consistency versus realizable H-consistency for multiclass classification. In International Conference on Machine Learning, pp. 801-809, 2013.
Loshchilov, I. and Hutter, F. SGDR: Stochastic gradient descent with warm restarts. arXiv preprint arXiv:1608.03983, 2016.
Madry, A., Makelov, A., Schmidt, L., Tsipras, D., and Vladu, A. Towards deep learning models resistant to adversarial attacks. arXiv preprint arXiv:1706.06083, 2017.
Mao, A., Mohri, M., and Zhong, Y. Cross-entropy loss functions: Theoretical analysis and applications. arXiv preprint arXiv:2304.07288, 2023.
Menon, A. K. and Williamson, R. C.
Bayes-optimal scorers for bipartite ranking. In Conference on Learning Theory, pp. 68-106, 2014.

Mohri, M., Rostamizadeh, A., and Talwalkar, A. Foundations of Machine Learning. MIT Press, second edition, 2018.
Nesterov, Y. E. A method for solving the convex programming problem with convergence rate $O(1/k^2)$. Dokl. Akad. Nauk SSSR, 269:543-547, 1983.
Ramaswamy, H. G. and Agarwal, S. Classification calibration dimension for general multiclass losses. In Advances in Neural Information Processing Systems, 2012.
Ramaswamy, H. G., Agarwal, S., and Tewari, A. Convex calibrated surrogates for low-rank loss matrices with applications to subset ranking losses. In Advances in Neural Information Processing Systems, 2013.
Ramaswamy, H. G., Babu, B. S., Agarwal, S., and Williamson, R. C. On the consistency of output code based learning algorithms for multiclass learning problems. In Conference on Learning Theory, pp. 885-902, 2014.
Ravikumar, P., Tewari, A., and Yang, E. On NDCG consistency of listwise ranking methods. In Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, pp. 618-626, 2011.
Rudin, C., Cortes, C., Mohri, M., and Schapire, R. E. Margin-based ranking meets boosting in the middle. In Conference on Learning Theory, pp. 63-78, 2005.
Steinwart, I. How to compare different loss functions and their risks. Constructive Approximation, 26(2):225-287, 2007.
Tewari, A. and Bartlett, P. L. On the consistency of multiclass classification methods. Journal of Machine Learning Research, 8(36):1007-1025, 2007.
Tsipras, D., Santurkar, S., Engstrom, L., Turner, A., and Madry, A. Robustness may be at odds with accuracy. arXiv preprint arXiv:1805.12152, 2018.
Uematsu, K. and Lee, Y. On theoretically optimal ranking functions in bipartite ranking. Journal of the American Statistical Association, 112(519):1311-1322, 2017.
Verhulst, P. F. Notice sur la loi que la population suit dans son accroissement.
Correspondance mathématique et physique, 10:113-121, 1838.
Verhulst, P. F. Recherches mathématiques sur la loi d'accroissement de la population. Nouveaux Mémoires de l'Académie Royale des Sciences et Belles-Lettres de Bruxelles, 18:1-42, 1845.
Weston, J. and Watkins, C. Multi-class support vector machines. Technical report, Citeseer, 1998.
Xia, F., Liu, T.-Y., Wang, J., Zhang, W., and Li, H. Listwise approach to learning to rank: Theory and algorithm. In International Conference on Machine Learning, pp. 1192-1199, 2008.
Zhang, M. and Agarwal, S. Bayes consistency vs. H-consistency: The interplay between surrogate loss functions and the scoring function class. In Advances in Neural Information Processing Systems, 2020.
Zhang, M., Ramaswamy, H. G., and Agarwal, S. Convex calibrated surrogates for the multi-label F-measure. In International Conference on Machine Learning, pp. 11246-11255, 2020.
Zhang, T. Statistical behavior and consistency of classification methods based on convex risk minimization. The Annals of Statistics, 32(1):56-85, 2004.
Zhang, Z. and Sabuncu, M. Generalized cross entropy loss for training deep neural networks with noisy labels. In Advances in Neural Information Processing Systems, 2018.
Zheng, C., Wu, G., Bao, F., Cao, Y., Li, C., and Zhu, J. Revisiting discriminative vs. generative classifiers: Theory and implications. arXiv preprint arXiv:2302.02334, 2023.
# Contents of Appendix

A Related work
B Notation
C General tools
D Negative results for general pairwise ranking (Proof of Theorem 2.1)
E Positive results for piecewise functions in general pairwise ranking (Proof of Theorem 2.3)
F $\mathcal{H}_{\mathrm{all}}$-consistency bounds for pairwise misranking losses
F.1 Derivation for $\mathsf{L}_{\Phi_{\mathrm{hinge}}}$
F.2 Derivation for $\mathsf{L}_{\Phi_{\rho}}$
F.3 Derivation for $\mathsf{L}_{\Phi_{\mathrm{exp}}}$
F.4 Derivation for $\mathsf{L}_{\Phi_{\log}}$
F.5 Derivation for $\mathsf{L}_{\Phi_{\mathrm{sq}}}$
F.6 Derivation for $\mathsf{L}_{\Phi_{\mathrm{sig}}}$
G Minimizability gaps can be non-zero for $\mathcal{H} = \mathcal{H}_{\mathrm{all}}$ in the general pairwise misranking case
H Characterization of distribution order and minimizability gap (Proof of Theorem 2.5, Theorem 2.6 and Theorem 2.7)
I Negative results for bipartite ranking (Proof of Theorem 4.1)
J $\mathcal{H}_{\mathrm{all}}$-consistency bounds for bipartite misranking losses
J.1 Derivation for $\widetilde{\mathsf{L}}_{\Phi_{\mathrm{hinge}}}$
J.2 Derivation for $\widetilde{\mathsf{L}}_{\Phi_{\rho}}$
J.3 Derivation for $\widetilde{\mathsf{L}}_{\Phi_{\mathrm{exp}}}$
J.4 Derivation for $\widetilde{\mathsf{L}}_{\Phi_{\log}}$
J.5 Derivation for $\widetilde{\mathsf{L}}_{\Phi_{\mathrm{sq}}}$
J.6 Derivation for $\widetilde{\mathsf{L}}_{\Phi_{\mathrm{sig}}}$
K $\mathcal{H}$-consistency bounds for pairwise abstention loss
K.1 Linear Hypotheses
K.1.1 Derivation for $\mathsf{L}_{\Phi_{\mathrm{hinge}}}$
K.1.2 Derivation for $\mathsf{L}_{\Phi_{\rho}}$
K.1.3 Derivation for $\mathsf{L}_{\Phi_{\mathrm{exp}}}$
K.1.4 Derivation for $\mathsf{L}_{\Phi_{\log}}$
K.1.5 Derivation for $\mathsf{L}_{\Phi_{\mathrm{sq}}}$
K.1.6 Derivation for $\mathsf{L}_{\Phi_{\mathrm{sig}}}$
K.2 One-Hidden-Layer ReLU Neural Networks
K.2.1 Derivation for $\mathsf{L}_{\Phi_{\mathrm{hinge}}}$
K.2.2 Derivation for $\mathsf{L}_{\Phi_{\rho}}$
K.2.3 Derivation for $\mathsf{L}_{\Phi_{\mathrm{exp}}}$
K.2.4 Derivation for $\mathsf{L}_{\Phi_{\log}}$
K.2.5 Derivation for $\mathsf{L}_{\Phi_{\mathrm{sq}}}$
K.2.6 Derivation for $\mathsf{L}_{\Phi_{\mathrm{sig}}}$
L $\mathcal{H}$-consistency bounds for bipartite abstention losses
L.1 Linear Hypotheses
L.1.1 Derivation for $\widetilde{\mathsf{L}}_{\Phi_{\mathrm{hinge}}}$
L.1.2 Derivation for $\widetilde{\mathsf{L}}_{\Phi_{\rho}}$
L.1.3 Derivation for $\widetilde{\mathsf{L}}_{\Phi_{\mathrm{exp}}}$
L.1.4 Derivation for $\widetilde{\mathsf{L}}_{\Phi_{\log}}$
L.1.5 Derivation for $\widetilde{\mathsf{L}}_{\Phi_{\mathrm{sq}}}$
L.1.6 Derivation for $\widetilde{\mathsf{L}}_{\Phi_{\mathrm{sig}}}$
L.2 One-Hidden-Layer ReLU Neural Networks
L.2.1 Derivation for $\widetilde{\mathsf{L}}_{\Phi_{\mathrm{hinge}}}$
L.2.2 Derivation for $\widetilde{\mathsf{L}}_{\Phi_{\rho}}$
L.2.3 Derivation for $\widetilde{\mathsf{L}}_{\Phi_{\mathrm{exp}}}$
L.2.4 Derivation for $\widetilde{\mathsf{L}}_{\Phi_{\log}}$
L.2.5 Derivation for $\widetilde{\mathsf{L}}_{\Phi_{\mathrm{sq}}}$
L.2.6 Derivation for $\widetilde{\mathsf{L}}_{\Phi_{\mathrm{sig}}}$

# A. Related work

The notions of Bayes consistency (also known as consistency) and calibration have been extensively studied for classification (Zhang, 2004; Bartlett et al., 2006; Tewari & Bartlett, 2007). The Bayes consistency of ranking surrogate losses has been studied in the special case of bipartite score-based ranking: in particular, Uematsu & Lee (2017) proved the inconsistency of the pairwise ranking loss based on the hinge loss, and Gao & Zhou (2015) gave excess loss bounds for pairwise ranking losses based on the exponential or the logistic loss. Later, these results were further generalized by Menon & Williamson (2014).
A related but distinct consistency question has been studied in several publications (Agarwal et al., 2005; Kotlowski et al., 2011; Agarwal, 2014): whether a near-minimizer of a surrogate of the binary classification loss is also a near-minimizer of the bipartite misranking loss (Cortes & Mohri, 2003).

Considerable attention has been devoted to the study of learning-to-rank algorithms and related problems, including one-pass AUC pairwise optimization (Gao et al., 2013), preference-based ranking (Cohen et al., 1997; Clemençon et al., 2008), subset ranking with Discounted Cumulative Gain (DCG) (Cossock & Zhang, 2008; Buffoni et al., 2011), listwise ranking (Xia et al., 2008), subset ranking based on Pairwise Disagreement (PD) (Duchi et al., 2010; Lan et al., 2012), subset ranking using Normalized Discounted Cumulative Gain (NDCG) (Ravikumar et al., 2011), subset ranking with Average Precision (AP) (Calauzenes et al., 2012; Ramaswamy et al., 2013), general multi-class problems (Ramaswamy & Agarwal, 2012; Ramaswamy et al., 2014), and multi-label problems (Gao & Zhou, 2011; Zhang et al., 2020).

Bayes consistency only holds for the full family of measurable functions, which is of course distinct from the more restricted hypothesis set used by a learning algorithm. Therefore, a hypothesis set-dependent notion of $\mathcal{H}$-consistency was proposed by Long & Servedio (2013) in the realizable setting, which was used by Zhang & Agarwal (2020) for linear models, and generalized by Kuznetsov et al. (2014) to the structured prediction case. Long & Servedio (2013) showed that there exist cases where a Bayes-consistent loss is not $\mathcal{H}$-consistent, while inconsistent loss functions can be $\mathcal{H}$-consistent.
Zhang & Agarwal (2020) further investigated the phenomenon in (Long & Servedio, 2013) and showed that loss functions that are not $\mathcal{H}$-consistent with linear models can be remedied by carefully choosing a larger piecewise-linear hypothesis set. Kuznetsov et al. (2014) proved positive results for the $\mathcal{H}$-consistency of several multi-class ensemble algorithms, as an extension of the $\mathcal{H}$-consistency results in (Long & Servedio, 2013).

Recently, Awasthi et al. (2022a) presented a series of results providing $\mathcal{H}$-consistency bounds in binary classification. These guarantees are significantly stronger than the $\mathcal{H}$-calibration or $\mathcal{H}$-consistency properties studied by Awasthi et al. (2021a;b). Awasthi et al. (2022b) and Mao et al. (2023) (see also (Zheng et al., 2023)) generalized $\mathcal{H}$-consistency bounds to the multi-class classification scenario. Awasthi et al. (2023b) proposed a family of loss functions that benefit from such $\mathcal{H}$-consistency bound guarantees for adversarial robustness (Goodfellow et al., 2014; Madry et al., 2017; Tsipras et al., 2018; Carlini & Wagner, 2017; Awasthi et al., 2023a). $\mathcal{H}$-consistency bounds are also more informative than the similar excess error bounds derived in the literature, which correspond to the special case where $\mathcal{H}$ is the family of all measurable functions (Zhang, 2004; Bartlett et al., 2006; Mohri et al., 2018). Our work significantly generalizes the results of Awasthi et al. (2022a) to the score-based ranking setting, including both the general pairwise ranking and bipartite ranking scenarios.

# B. Notation

We provide a table of notation in Table 7.

# C. General tools

Before the proofs, we first introduce some notation.
In the general pairwise ranking scenario, we denote by $\mathcal{D}$ a distribution over $\mathcal{X} \times \mathcal{X} \times \mathcal{Y}$ and by $\mathcal{P}$ a set of such distributions. We further denote by $\eta(x, x') = \mathcal{D}(Y = 1 \mid (X, X') = (x, x'))$ the conditional probability of $Y = 1$ given $(X, X') = (x, x')$. Without loss of generality, we assume that $\eta(x, x) = 1/2$. The generalization error for a surrogate loss $\mathsf{L}$ can be rewritten as $\mathcal{R}_{\mathsf{L}}(h) = \mathbb{E}_{(X, X')}[\mathcal{C}_{\mathsf{L}}(h, x, x')]$, where $\mathcal{C}_{\mathsf{L}}(h, x, x')$ is the conditional $\mathsf{L}$-risk, defined by

$$
\mathcal{C}_{\mathsf{L}}(h, x, x') = \eta(x, x') \mathsf{L}(h, x, x', +1) + (1 - \eta(x, x')) \mathsf{L}(h, x, x', -1).
$$

We denote by $\mathcal{C}_{\mathsf{L}}^{*}(\mathcal{H}, x, x') = \inf_{h \in \mathcal{H}} \mathcal{C}_{\mathsf{L}}(h, x, x')$ the minimal conditional $\mathsf{L}$-risk. Then, the minimizability gap can be rewritten as follows:

$$
\mathcal{M}_{\mathsf{L}}(\mathcal{H}) = \mathcal{R}_{\mathsf{L}}^{*}(\mathcal{H}) - \mathbb{E}_{(X, X')}\big[\mathcal{C}_{\mathsf{L}}^{*}(\mathcal{H}, x, x')\big].
$$

Table 7: Summary of notation.
| Notation | Definition |
| --- | --- |
| $\mathcal{X}$ | Input space |
| $\mathcal{Y}$ | Label space |
| $\mathcal{H}$ | A hypothesis set of functions mapping from $\mathcal{X}$ to $\mathbb{R}$ |
| $\mathcal{D}$ | A distribution over $\mathcal{X} \times \mathcal{X} \times \mathcal{Y}$ or $\mathcal{X} \times \mathcal{Y}$ |
| $\mathsf{L}_{0-1}$ | General pairwise misranking loss |
| $\mathcal{R}_{\mathsf{L}_{0-1}}$ | Expected general pairwise misranking loss |
| $\mathrm{sign}(u)$ | $\mathbb{1}_{u \geq 0} - \mathbb{1}_{u < 0}$ |
| $\eta(x, x')$ | The conditional probability of $Y = +1$ given $(X, X') = (x, x')$ |
| $\widetilde{\mathsf{L}}_{0-1}$ | Bipartite misranking loss |
| $\mathcal{R}_{\widetilde{\mathsf{L}}_{0-1}}$ | Expected bipartite misranking loss |
| $\eta(x)$ | The conditional probability of $Y = +1$ given $X = x$ |
| $\mathsf{L}$ | A surrogate loss for $\mathsf{L}_{0-1}$ |
| $\widetilde{\mathsf{L}}$ | A surrogate loss for $\widetilde{\mathsf{L}}_{0-1}$ |
| $\mathcal{R}_{\mathsf{L}}^{*}(\mathcal{H})$ ($\mathcal{R}_{\widetilde{\mathsf{L}}}^{*}(\mathcal{H})$) | The minimal generalization error |
| $\mathcal{M}_{\mathsf{L}}(\mathcal{H})$ ($\mathcal{M}_{\widetilde{\mathsf{L}}}(\mathcal{H})$) | The minimizability gap |
| $\mathcal{H}_{\mathrm{all}}$ | The hypothesis set of all measurable functions |
| $\mathcal{H}_{\mathrm{lin}}$ | Linear hypothesis set |
| $\mathcal{H}_{\mathrm{NN}}$ | The hypothesis set of one-hidden-layer ReLU networks |
| $(\cdot)_{+}$ | $\max(\cdot, 0)$ |
| $\mathcal{H}_{\mathrm{pw}}$ | The hypothesis set of piecewise functions |
| D | The distribution order |
| $\mathsf{L}_{0-1}^{\mathrm{abs}}$ | The pairwise abstention loss |
| $\gamma$ | A given threshold value |
| $c$ | Cost |
| $\widetilde{\mathsf{L}}_{0-1}^{\mathrm{abs}}$ | The bipartite abstention loss |
| $\mathcal{C}_{\mathsf{L}}$ ($\mathcal{C}_{\widetilde{\mathsf{L}}}$) | The conditional $\mathsf{L}$-risk ($\widetilde{\mathsf{L}}$-risk) |
| $\mathcal{C}_{\mathsf{L}}^{*}(\mathcal{H}, x, x')$ ($\mathcal{C}_{\widetilde{\mathsf{L}}}^{*}(\mathcal{H}, x, x')$) | The minimal conditional $\mathsf{L}$-risk ($\widetilde{\mathsf{L}}$-risk) |
| $\Delta\mathcal{C}_{\mathsf{L}, \mathcal{H}}$ ($\Delta\mathcal{C}_{\widetilde{\mathsf{L}}, \mathcal{H}}$) | The calibration gap |
| $\langle t \rangle_{\epsilon}$ | The $\epsilon$-truncation of $t$ |
We further refer to $\mathcal{C}_{\mathsf{L}}(h, x, x') - \mathcal{C}_{\mathsf{L}}^{*}(\mathcal{H}, x, x')$ as the calibration gap and denote it by $\Delta\mathcal{C}_{\mathsf{L}, \mathcal{H}}(h, x, x')$.

In the bipartite ranking scenario, we denote by $\mathcal{D}$ a distribution over $\mathcal{X} \times \mathcal{Y}$ and by $\mathcal{P}$ a set of such distributions. We further denote by $\eta(x) = \mathcal{D}(Y = 1 \mid X = x)$ the conditional probability of $Y = 1$ given $X = x$. The generalization error for a surrogate loss $\widetilde{\mathsf{L}}$ can be rewritten as $\mathcal{R}_{\widetilde{\mathsf{L}}}(h) = \mathbb{E}_{(X, X')}\big[\mathcal{C}_{\widetilde{\mathsf{L}}}(h, x, x')\big]$, where $\mathcal{C}_{\widetilde{\mathsf{L}}}(h, x, x')$ is the conditional $\widetilde{\mathsf{L}}$-risk, defined by

$$
\mathcal{C}_{\widetilde{\mathsf{L}}}(h, x, x') = \eta(x)\big(1 - \eta(x')\big)\widetilde{\mathsf{L}}(h, x, x', +1, -1) + \eta(x')\big(1 - \eta(x)\big)\widetilde{\mathsf{L}}(h, x, x', -1, +1).
$$

We denote by $\mathcal{C}_{\widetilde{\mathsf{L}}}^{*}(\mathcal{H}, x, x') = \inf_{h \in \mathcal{H}} \mathcal{C}_{\widetilde{\mathsf{L}}}(h, x, x')$ the minimal conditional $\widetilde{\mathsf{L}}$-risk. Then, the minimizability gap can be rewritten as follows:

$$
\mathcal{M}_{\widetilde{\mathsf{L}}}(\mathcal{H}) = \mathcal{R}_{\widetilde{\mathsf{L}}}^{*}(\mathcal{H}) - \mathbb{E}_{(X, X')}\big[\mathcal{C}_{\widetilde{\mathsf{L}}}^{*}(\mathcal{H}, x, x')\big].
$$

We further refer to $\mathcal{C}_{\widetilde{\mathsf{L}}}(h, x, x') - \mathcal{C}_{\widetilde{\mathsf{L}}}^{*}(\mathcal{H}, x, x')$ as the calibration gap and denote it by $\Delta\mathcal{C}_{\widetilde{\mathsf{L}}, \mathcal{H}}(h, x, x')$.
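The proofs in this section repeatedly split a calibration gap into its $\epsilon$-truncation and a remainder bounded by $\epsilon$ (the truncation $\langle t \rangle_{\epsilon} = t\mathbb{1}_{t > \epsilon}$ is defined next). A minimal numeric check of this decomposition, with names of our own choosing:

```python
import numpy as np

def truncate(t, eps):
    """eps-truncation <t>_eps = t * 1[t > eps]."""
    t = np.asarray(t, dtype=float)
    return t * (t > eps)

rng = np.random.default_rng(0)
gaps = rng.uniform(0.0, 2.0, size=1000)  # nonnegative calibration gaps
for eps in (0.0, 0.1, 0.5):
    rest = gaps * (gaps <= eps)          # the part truncated away
    # decomposition t = <t>_eps + t * 1[t <= eps], with remainder in [0, eps]
    assert np.allclose(truncate(gaps, eps) + rest, gaps)
    assert rest.max() <= eps
```

The second assertion is exactly the fact used at the end of the proofs of Theorems C.1 and C.2: the truncated-away part never exceeds $\epsilon$.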
For any $\epsilon > 0$, we denote by $\langle t \rangle_{\epsilon}$ the $\epsilon$-truncation of $t \in \mathbb{R}$, defined by $t \mathbb{1}_{t > \epsilon}$.

We first prove two general results, which provide bounds between any loss functions $\mathsf{L}_1$ and $\mathsf{L}_2$ in both the general pairwise ranking and bipartite ranking scenarios.

Theorem C.1. Assume that there exist a convex function $\Psi \colon \mathbb{R}_+ \to \mathbb{R}$ with $\Psi(0) \geq 0$ and $\epsilon \geq 0$ such that the following holds for all $h \in \mathcal{H}$, $x \in \mathcal{X}$, $x' \in \mathcal{X}$ and $\mathcal{D} \in \mathcal{P}$:

$$
\Psi\big(\big\langle \Delta\mathcal{C}_{\mathsf{L}_2, \mathcal{H}}(h, x, x')\big\rangle_{\epsilon}\big) \leq \big\langle \Delta\mathcal{C}_{\mathsf{L}_1, \mathcal{H}}(h, x, x')\big\rangle_{\epsilon}. \tag{7}
$$

Then, the following inequality holds for any $h \in \mathcal{H}$ and $\mathcal{D} \in \mathcal{P}$:

$$
\Psi\big(\mathcal{R}_{\mathsf{L}_2}(h) - \mathcal{R}_{\mathsf{L}_2}^{*}(\mathcal{H}) + \mathcal{M}_{\mathsf{L}_2}(\mathcal{H})\big) \leq \mathcal{R}_{\mathsf{L}_1}(h) - \mathcal{R}_{\mathsf{L}_1}^{*}(\mathcal{H}) + \mathcal{M}_{\mathsf{L}_1}(\mathcal{H}) + \max\{\Psi(0), \Psi(\epsilon)\}. \tag{8}
$$

Proof.
By the definition of the generalization error and the minimizability gap, for any $h \in \mathcal{H}$ and $\mathcal{D} \in \mathcal{P}$, we can write the left-hand side of (8) as

$$
\Psi\big(\mathcal{R}_{\mathsf{L}_2}(h) - \mathcal{R}_{\mathsf{L}_2}^{*}(\mathcal{H}) + \mathcal{M}_{\mathsf{L}_2}(\mathcal{H})\big) = \Psi\big(\mathcal{R}_{\mathsf{L}_2}(h) - \mathbb{E}_{(X, X')}\big[\mathcal{C}_{\mathsf{L}_2}^{*}(\mathcal{H}, x, x')\big]\big) = \Psi\big(\mathbb{E}_{(X, X')}\big[\Delta\mathcal{C}_{\mathsf{L}_2, \mathcal{H}}(h, x, x')\big]\big).
$$

Since $\Psi$ is convex, by Jensen's inequality, it can be upper bounded by $\mathbb{E}_{(X, X')}\big[\Psi\big(\Delta\mathcal{C}_{\mathsf{L}_2, \mathcal{H}}(h, x, x')\big)\big]$. Due to the decomposition

$$
\Delta\mathcal{C}_{\mathsf{L}_2, \mathcal{H}}(h, x, x') = \big\langle \Delta\mathcal{C}_{\mathsf{L}_2, \mathcal{H}}(h, x, x')\big\rangle_{\epsilon} + \Delta\mathcal{C}_{\mathsf{L}_2, \mathcal{H}}(h, x, x')\mathbb{1}_{\Delta\mathcal{C}_{\mathsf{L}_2, \mathcal{H}}(h, x, x') \leq \epsilon},
$$

in which at most one term is nonzero, and the assumption $\Psi(0) \geq 0$, we have the following inequality:

$$
\mathbb{E}_{(X, X')}\big[\Psi\big(\Delta\mathcal{C}_{\mathsf{L}_2, \mathcal{H}}(h, x, x')\big)\big] \leq \mathbb{E}_{(X, X')}\Big[\Psi\big(\big\langle \Delta\mathcal{C}_{\mathsf{L}_2, \mathcal{H}}(h, x, x')\big\rangle_{\epsilon}\big)\Big] + \mathbb{E}_{(X, X')}\Big[\Psi\big(\Delta\mathcal{C}_{\mathsf{L}_2, \mathcal{H}}(h, x, x')\mathbb{1}_{\Delta\mathcal{C}_{\mathsf{L}_2, \mathcal{H}}(h, x, x') \leq \epsilon}\big)\Big].
$$

By assumption (7), the first term can be bounded as follows:

$$
\mathbb{E}_{(X, X')}\Big[\Psi\big(\big\langle \Delta\mathcal{C}_{\mathsf{L}_2, \mathcal{H}}(h, x, x')\big\rangle_{\epsilon}\big)\Big] \leq \mathbb{E}_{(X, X')}\big[\Delta\mathcal{C}_{\mathsf{L}_1, \mathcal{H}}(h, x, x')\big] = \mathcal{R}_{\mathsf{L}_1}(h) - \mathcal{R}_{\mathsf{L}_1}^{*}(\mathcal{H}) + \mathcal{M}_{\mathsf{L}_1}(\mathcal{H}).
$$

Since $\Delta\mathcal{C}_{\mathsf{L}_2, \mathcal{H}}(h, x, x')\mathbb{1}_{\Delta\mathcal{C}_{\mathsf{L}_2, \mathcal{H}}(h, x, x') \leq \epsilon} \in [0, \epsilon]$, we can bound $\mathbb{E}_{(X, X')}\big[\Psi\big(\Delta\mathcal{C}_{\mathsf{L}_2, \mathcal{H}}(h, x, x')\mathbb{1}_{\Delta\mathcal{C}_{\mathsf{L}_2, \mathcal{H}}(h, x, x') \leq \epsilon}\big)\big]$ by $\sup_{t \in [0, \epsilon]} \Psi(t)$, which equals $\max\{\Psi(0), \Psi(\epsilon)\}$ due to the convexity of $\Psi$.

Theorem C.2. Assume that there exist a non-decreasing concave function $\Gamma \colon \mathbb{R}_+ \to \mathbb{R}$ and $\epsilon \geq 0$ such that the following holds for all $h \in \mathcal{H}$, $x \in \mathcal{X}$, $x' \in \mathcal{X}$ and $\mathcal{D} \in \mathcal{P}$:

$$
\big\langle \Delta\mathcal{C}_{\mathsf{L}_2, \mathcal{H}}(h, x, x')\big\rangle_{\epsilon} \leq \Gamma\big(\big\langle \Delta\mathcal{C}_{\mathsf{L}_1, \mathcal{H}}(h, x, x')\big\rangle_{\epsilon}\big). \tag{9}
$$

Then, the following inequality holds for any $h \in \mathcal{H}$ and $\mathcal{D} \in \mathcal{P}$:

$$
\left.
\mathcal{R}_{\mathsf{L}_2}(h) - \mathcal{R}_{\mathsf{L}_2}^{*}(\mathcal{H}) \leq \Gamma\big(\mathcal{R}_{\mathsf{L}_1}(h) - \mathcal{R}_{\mathsf{L}_1}^{*}(\mathcal{H}) + \mathcal{M}_{\mathsf{L}_1}(\mathcal{H})\big) - \mathcal{M}_{\mathsf{L}_2}(\mathcal{H}) + \epsilon. \right. \tag{10}
$$

Proof. By the definition of the generalization error and the minimizability gap, for any $h \in \mathcal{H}$ and $\mathcal{D} \in \mathcal{P}$, we can write the left-hand side of (10) as

$$
\begin{array}{rl}
\mathcal{R}_{\mathsf{L}_2}(h) - \mathcal{R}_{\mathsf{L}_2}^{*}(\mathcal{H}) & = \mathbb{E}_{(X, X')}\big[\Delta\mathcal{C}_{\mathsf{L}_2, \mathcal{H}}(h, x, x')\big] - \mathcal{M}_{\mathsf{L}_2}(\mathcal{H}) \\
& = \mathbb{E}_{(X, X')}\big[\big\langle \Delta\mathcal{C}_{\mathsf{L}_2, \mathcal{H}}(h, x, x')\big\rangle_{\epsilon}\big] + \mathbb{E}_{(X, X')}\big[\Delta\mathcal{C}_{\mathsf{L}_2, \mathcal{H}}(h, x, x')\mathbb{1}_{\Delta\mathcal{C}_{\mathsf{L}_2, \mathcal{H}}(h, x, x') \leq \epsilon}\big] - \mathcal{M}_{\mathsf{L}_2}(\mathcal{H}).
\end{array}
$$

By assumption (9), the fact that $\Gamma$ is non-decreasing, and $\langle t \rangle_{\epsilon} \leq t$, the following inequality holds:

$$
\mathbb{E}_{(X, X')}\big[\big\langle \Delta\mathcal{C}_{\mathsf{L}_2, \mathcal{H}}(h, x, x')\big\rangle_{\epsilon}\big] \leq \mathbb{E}_{(X, X')}\big[\Gamma\big(\Delta\mathcal{C}_{\mathsf{L}_1, \mathcal{H}}(h, x, x')\big)\big].
$$

Since $\Gamma$ is concave, by Jensen's inequality,

$$
\mathbb{E}_{(X, X')}\big[\Gamma\big(\Delta\mathcal{C}_{\mathsf{L}_1, \mathcal{H}}(h, x, x')\big)\big] \leq \Gamma\big(\mathbb{E}_{(X, X')}\big[\Delta\mathcal{C}_{\mathsf{L}_1, \mathcal{H}}(h, x, x')\big]\big) = \Gamma\big(\mathcal{R}_{\mathsf{L}_1}(h) - \mathcal{R}_{\mathsf{L}_1}^{*}(\mathcal{H}) + \mathcal{M}_{\mathsf{L}_1}(\mathcal{H})\big).
$$

We complete the proof by noting that $\mathbb{E}_{(X, X')}\big[\Delta\mathcal{C}_{\mathsf{L}_2, \mathcal{H}}(h, x, x')\mathbb{1}_{\Delta\mathcal{C}_{\mathsf{L}_2, \mathcal{H}}(h, x, x') \leq \epsilon}\big] \leq \epsilon$.

# D. Negative results for general pairwise ranking (Proof of Theorem 2.1)

Theorem 2.1 (Negative results). Assume that $\mathcal{X}$ contains an interior point $x_0$ and that $\mathcal{H}$ is regular for general pairwise ranking, contains $0$, and is equicontinuous at $x_0$. If for some function $f$ that is non-decreasing and continuous at $0$, the following bound holds for all $h \in \mathcal{H}$ and any distribution,

$$
\mathcal{R}_{\mathsf{L}_{0-1}}(h) - \mathcal{R}_{\mathsf{L}_{0-1}}^{*}(\mathcal{H}) \leq f\big(\mathcal{R}_{\mathsf{L}_{\Phi}}(h) - \mathcal{R}_{\mathsf{L}_{\Phi}}^{*}(\mathcal{H})\big),
$$

then $f(t) \geq 1$ for any $t \geq 0$.

Proof. Assume $x_0 \in \mathcal{X}$ is an interior point and $h_0 = 0 \in \mathcal{H}$. Since $x_0$ is an interior point and $\mathcal{H}$ is equicontinuous at $x_0$, for any $\epsilon > 0$, we can choose $x' \in \mathcal{X}$ with $x' \neq x_0$ such that $|h(x') - h(x_0)| < \epsilon$ for all $h \in \mathcal{H}$. Consider the distribution supported on $\{(x_0, x')\}$ with $\eta(x_0, x') = 0$.
Then, for any $h \in \mathcal{H}$,

$$
\mathcal{R}_{\mathsf{L}_{0-1}}(h) = \mathcal{C}_{\mathsf{L}_{0-1}}(h, x_0, x') = \mathbb{1}_{h(x') \geq h(x_0)} \geq 0,
$$

where the equality can be achieved for some $h \in \mathcal{H}$ since $\mathcal{H}$ is regular for general pairwise ranking. Therefore,

$$
\mathcal{R}_{\mathsf{L}_{0-1}}^{*}(\mathcal{H}) = \mathcal{C}_{\mathsf{L}_{0-1}}^{*}(\mathcal{H}, x_0, x') = \inf_{h \in \mathcal{H}} \mathcal{C}_{\mathsf{L}_{0-1}}(h, x_0, x') = 0.
$$

Note that $\mathcal{R}_{\mathsf{L}_{0-1}}(h_0) = 1$. For the surrogate loss $\mathsf{L}_{\Phi}$, for any $h \in \mathcal{H}$,

$$
\mathcal{R}_{\mathsf{L}_{\Phi}}(h) = \mathcal{C}_{\mathsf{L}_{\Phi}}(h, x_0, x') = \Phi(h(x_0) - h(x')) \in [\Phi(\epsilon), \Phi(-\epsilon)]
$$

since $|h(x') - h(x_0)| < \epsilon$ and $\Phi$ is non-increasing. Therefore,

$$
\mathcal{R}_{\mathsf{L}_{\Phi}}^{*}(\mathcal{H}) = \mathcal{C}_{\mathsf{L}_{\Phi}}^{*}(\mathcal{H}, x_0, x') \geq \Phi(\epsilon).
$$

Note that $\mathcal{R}_{\mathsf{L}_{\Phi}}(h_0) = \Phi(0)$. If the bound holds for some function $f$ that is non-decreasing and continuous at $0$, then we obtain, for any $h \in \mathcal{H}$ and $\epsilon > 0$,

$$
\mathcal{R}_{\mathsf{L}_{0-1}}(h) - 0 \leq f\big(\mathcal{R}_{\mathsf{L}_{\Phi}}(h) - \mathcal{R}_{\mathsf{L}_{\Phi}}^{*}(\mathcal{H})\big) \leq f\big(\mathcal{R}_{\mathsf{L}_{\Phi}}(h) - \Phi(\epsilon)\big).
$$

Setting $h = h_0$ yields $f(\Phi(0) - \Phi(\epsilon)) \geq 1$ for any $\epsilon > 0$. Taking $\epsilon \to 0$ and using the fact that $\Phi$ and $f$ are both continuous at $0$, we obtain $f(0) \geq 1$. Since $f$ is non-decreasing, $f(t) \geq 1$ for any $t \geq 0$.

# E.
Positive results for piecewise functions in general pairwise ranking (Proof of Theorem 2.3)

Theorem 2.3 (Positive results for piecewise functions). Assume that $\Phi$ satisfies $\lim_{u \to +\infty} \Phi(u) = 0$. Then, for all $h \in \mathcal{H}_{\mathrm{pw}}$ and any deterministic distribution,

$$
\mathcal{R}_{\mathsf{L}_{0-1}}(h) - \mathcal{R}_{\mathsf{L}_{0-1}}^{*}(\mathcal{H}_{\mathrm{pw}}) \leq \mathcal{R}_{\mathsf{L}_{\Phi}}(h) - \mathcal{R}_{\mathsf{L}_{\Phi}}^{*}(\mathcal{H}_{\mathrm{pw}}) + \mathcal{M}_{\mathsf{L}_{\Phi}}(\mathcal{H}_{\mathrm{pw}}) - \mathcal{M}_{\mathsf{L}_{0-1}}(\mathcal{H}_{\mathrm{pw}}).
$$

Proof. For any $x \neq x' \in \mathcal{X}$,

$$
\mathcal{C}_{\mathsf{L}_{0-1}}(h, x, x') = \begin{cases} \mathbb{1}_{h(x') < h(x)} & \eta(x, x') = 1 \\ \mathbb{1}_{h(x') \geq h(x)} & \eta(x, x') = 0 \end{cases} \geq 0,
$$

$$
\mathcal{C}_{\mathsf{L}_{\Phi}}(h, x, x') = \begin{cases} \Phi(h(x') - h(x)) & \eta(x, x') = 1 \\ \Phi(h(x) - h(x')) & \eta(x, x') = 0 \end{cases} \geq 0,
$$

where both equalities can be achieved by $h(u) = \pm\alpha\big(\mathbb{1}_{u \neq x \wedge \|u\| > c} - \mathbb{1}_{u \neq x \wedge \|u\| \leq c}\big)$ or $h(u) = \pm\alpha\big(\mathbb{1}_{u \neq x' \wedge \|u\| > c} - \mathbb{1}_{u \neq x' \wedge \|u\| \leq c}\big)$ in the limit $\alpha \to +\infty$.
Therefore, we have $\mathcal{C}_{\mathsf{L}_{0-1}}^* (\mathcal{H}, x, x') = \mathcal{C}_{\mathsf{L}_\Phi}^* (\mathcal{H}, x, x') = 0$ and + +$$ +\Delta \mathcal {C} _ {\mathrm {L} _ {\Phi}, \mathcal {H}} (h, x, x ^ {\prime}) = \mathcal {C} _ {\mathrm {L} _ {\Phi}} (h, x, x ^ {\prime}) \geq \mathcal {C} _ {\mathrm {L} _ {0 - 1}} (h, x, x ^ {\prime}) = \Delta \mathcal {C} _ {\mathrm {L} _ {0 - 1}, \mathcal {H}} (h, x, x ^ {\prime}). +$$ + +By Theorem C.1 or Theorem C.2, we complete the proof. + +![](images/4f37936c014176f23a78634a180a64db86bde5c12de2c43c61dfde7688857053.jpg) + +Table 8: $\mathcal{H}_{\mathrm{all}}$ -consistency upper bounds for general pairwise abstention. + +
| Loss function | $\mathcal{H}_{\mathrm{all}}$-consistency upper bound |
| --- | --- |
| $\mathsf{L}_{\Phi_{\mathrm{hinge}}}$ | $\mathcal{R}_{\mathsf{L}_{\Phi_{\mathrm{hinge}}}}(h) - \mathcal{R}_{\mathsf{L}_{\Phi_{\mathrm{hinge}}}}^{*}(\mathcal{H}_{\mathrm{all}}) + \mathcal{M}_{\mathsf{L}_{\Phi_{\mathrm{hinge}}}}(\mathcal{H}_{\mathrm{all}}) - \mathcal{M}_{\mathsf{L}_{0-1}}(\mathcal{H}_{\mathrm{all}})$ |
| $\mathsf{L}_{\Phi_{\rho}}$ | $\mathcal{R}_{\mathsf{L}_{\Phi_{\rho}}}(h) - \mathcal{R}_{\mathsf{L}_{\Phi_{\rho}}}^{*}(\mathcal{H}_{\mathrm{all}}) + \mathcal{M}_{\mathsf{L}_{\Phi_{\rho}}}(\mathcal{H}_{\mathrm{all}}) - \mathcal{M}_{\mathsf{L}_{0-1}}(\mathcal{H}_{\mathrm{all}})$ |
| $\mathsf{L}_{\Phi_{\exp}}$ | $\sqrt{2}\big(\mathcal{R}_{\mathsf{L}_{\Phi_{\exp}}}(h) - \mathcal{R}_{\mathsf{L}_{\Phi_{\exp}}}^{*}(\mathcal{H}_{\mathrm{all}}) + \mathcal{M}_{\mathsf{L}_{\Phi_{\exp}}}(\mathcal{H}_{\mathrm{all}})\big)^{1/2} - \mathcal{M}_{\mathsf{L}_{0-1}}(\mathcal{H}_{\mathrm{all}})$ |
| $\mathsf{L}_{\Phi_{\log}}$ | $\sqrt{2}\big(\mathcal{R}_{\mathsf{L}_{\Phi_{\log}}}(h) - \mathcal{R}_{\mathsf{L}_{\Phi_{\log}}}^{*}(\mathcal{H}_{\mathrm{all}}) + \mathcal{M}_{\mathsf{L}_{\Phi_{\log}}}(\mathcal{H}_{\mathrm{all}})\big)^{1/2} - \mathcal{M}_{\mathsf{L}_{0-1}}(\mathcal{H}_{\mathrm{all}})$ |
| $\mathsf{L}_{\Phi_{\mathrm{sq}}}$ | $\big(\mathcal{R}_{\mathsf{L}_{\Phi_{\mathrm{sq}}}}(h) - \mathcal{R}_{\mathsf{L}_{\Phi_{\mathrm{sq}}}}^{*}(\mathcal{H}_{\mathrm{all}}) + \mathcal{M}_{\mathsf{L}_{\Phi_{\mathrm{sq}}}}(\mathcal{H}_{\mathrm{all}})\big)^{1/2} - \mathcal{M}_{\mathsf{L}_{0-1}}(\mathcal{H}_{\mathrm{all}})$ |
| $\mathsf{L}_{\Phi_{\mathrm{sig}}}$ | $\mathcal{R}_{\mathsf{L}_{\Phi_{\mathrm{sig}}}}(h) - \mathcal{R}_{\mathsf{L}_{\Phi_{\mathrm{sig}}}}^{*}(\mathcal{H}_{\mathrm{all}}) + \mathcal{M}_{\mathsf{L}_{\Phi_{\mathrm{sig}}}}(\mathcal{H}_{\mathrm{all}}) - \mathcal{M}_{\mathsf{L}_{0-1}}(\mathcal{H}_{\mathrm{all}})$ |
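The square-root rows of Table 8 come from scalar inequalities relating the surrogate calibration gaps to the $\mathsf{L}_{0-1}$ calibration gap $|2\eta(x,x') - 1|$, derived in Sections F.1–F.6. As a quick numerical sanity check of those inequalities (an illustrative sketch, not part of the paper), one can sweep $\eta = \eta(x,x')$ over a grid:

```python
import math

# Numerical sanity check (illustration only): verify, over a grid of values
# of eta = eta(x, x'), the scalar inequalities behind the calibration-gap
# lower bounds used for the surrogate losses in Table 8.
for i in range(1001):
    eta = i / 1000
    gap01 = abs(2 * eta - 1)  # calibration gap of the 0-1 misranking loss
    # Exponential and logistic losses: 1 - 2*sqrt(eta*(1-eta)) >= gap01^2 / 2,
    # which yields the sqrt(2)-scaled square-root rows of the table.
    assert 1 - 2 * math.sqrt(eta * (1 - eta)) >= gap01 ** 2 / 2 - 1e-12
    # Squared hinge loss: 1 - 4*eta*(1-eta) == gap01^2 exactly,
    # which yields the square-root row without the sqrt(2) factor.
    assert abs(1 - 4 * eta * (1 - eta) - gap01 ** 2) < 1e-12
    # Bound used for the logistic loss:
    # -a*log2(a) - b*log2(b) <= 2*sqrt(a*b) with b = 1 - a.
    ent = sum(-p * math.log2(p) for p in (eta, 1 - eta) if p > 0)
    assert ent <= 2 * math.sqrt(eta * (1 - eta)) + 1e-12
print("all pointwise inequalities hold on the grid")
```

The hinge, $\rho$-margin, and sigmoid rows need no such step, since their calibration gaps are lower-bounded by $|2\eta(x,x') - 1|$ directly, giving linear bounds.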
+ +# F. $\mathcal{H}_{\mathrm{all}}$-consistency bounds for pairwise misranking losses + +We first characterize the minimal conditional $\mathsf{L}_{0-1}$-risk and the calibration gap of the pairwise misranking loss for a broad class of hypothesis sets. We let $\overline{\mathcal{H}}(x, x') = \{h \in \mathcal{H} : \mathrm{sign}(h(x') - h(x))(2\eta(x, x') - 1) \leq 0\}$ for convenience. + +Lemma F.1. Assume that $\mathcal{H}$ is regular for general pairwise ranking. Then, the minimal conditional $\mathsf{L}_{0-1}$-risk is + +$$ +\mathcal {C} _ {\mathsf {L} _ {0 - 1}} ^ {*} \left(\mathcal {H}, x, x ^ {\prime}\right) = \min \left\{\eta (x, x ^ {\prime}), 1 - \eta (x, x ^ {\prime}) \right\}. +$$ + +The calibration gap of $\mathsf{L}_{0 - 1}$ can be characterized as + +$$ +\Delta \mathcal {C} _ {\mathrm {L} _ {0 - 1}, \mathcal {H}} (h, x, x ^ {\prime}) = | 2 \eta (x, x ^ {\prime}) - 1 | \mathbb {1} _ {h \in \overline {{\mathcal {H}}} (x, x ^ {\prime})}. +$$ + +Proof. By the definition, the conditional $\mathsf{L}_{0 - 1}$-risk is + +$$ +\mathcal {C} _ {\mathrm {L} _ {0 - 1}} (h, x, x ^ {\prime}) = \eta (x, x ^ {\prime}) \mathbb {1} _ {h \left(x ^ {\prime}\right) < h (x)} + (1 - \eta (x, x ^ {\prime})) \mathbb {1} _ {h \left(x ^ {\prime}\right) \geq h (x)}. +$$ + +For any $x \in \mathcal{X}$ , $\mathcal{C}_{\mathsf{L}_{0-1}}(h, x, x) = \mathcal{C}_{\mathsf{L}_{0-1}}^*(\mathcal{H}, x, x) = 1 - \eta(x, x) = 1/2$ . For any $x \neq x' \in \mathcal{X}$ , by the assumption, there exists $h^* \in \mathcal{H}$ such that $\mathrm{sign}(h^*(x') - h^*(x)) = \mathrm{sign}(\Delta \eta(x, x'))$ .
Therefore, the optimal conditional $\mathsf{L}_{0-1}$-risk can be characterized, for any $x, x' \in \mathcal{X}$ , as + +$$ +\mathcal {C} _ {\mathrm {L} _ {0 - 1}} ^ {*} \left(\mathcal {H}, x, x ^ {\prime}\right) = \mathcal {C} _ {\mathrm {L} _ {0 - 1}} \left(h ^ {*}, x, x ^ {\prime}\right) = \min \left\{\eta (x, x ^ {\prime}), 1 - \eta (x, x ^ {\prime}) \right\}, +$$ + +which proves the first part of the lemma. By the definition, + +$$ +\begin{array}{l} \Delta \mathcal {C} _ {\mathrm {L} _ {0 - 1}, \mathcal {H}} (h, x, x ^ {\prime}) = \mathcal {C} _ {\mathrm {L} _ {0 - 1}} (h, x, x ^ {\prime}) - \mathcal {C} _ {\mathrm {L} _ {0 - 1}} ^ {*} (\mathcal {H}, x, x ^ {\prime}) \\ = \eta (x, x ^ {\prime}) \mathbb {1} _ {h (x ^ {\prime}) < h (x)} + (1 - \eta (x, x ^ {\prime})) \mathbb {1} _ {h (x ^ {\prime}) \geq h (x)} - \min \left\{\eta (x, x ^ {\prime}), 1 - \eta (x, x ^ {\prime}) \right\} \\ = \left\{ \begin{array}{l l} | 2 \eta (x, x ^ {\prime}) - 1 |, & h \in \overline {{\mathcal {H}}} (x, x ^ {\prime}), \\ 0, & \text {o t h e r w i s e}. \end{array} \right. \\ \end{array} +$$ + +This leads to + +$$ +\left\langle \Delta \mathcal {C} _ {\mathrm {L} _ {0 - 1}, \mathcal {H}} \left(h, x, x ^ {\prime}\right) \right\rangle_ {\epsilon} = \left\langle \left| 2 \eta \left(x, x ^ {\prime}\right) - 1 \right| \right\rangle_ {\epsilon} \mathbb {1} _ {h \in \overline {{\mathcal {H}}} (x, x ^ {\prime})}. +$$ + +By Lemma F.1, the $(\mathsf{L}_{0 - 1},\mathcal{H}_{\mathrm{all}})$ -minimizability gap is + +$$ +\mathcal {M} _ {\mathrm {L} _ {0 - 1}} \left(\mathcal {H} _ {\text {a l l}}\right) = \mathcal {R} _ {\mathrm {L} _ {0 - 1}} ^ {*} \left(\mathcal {H} _ {\text {a l l}}\right) - \mathbb {E} _ {(X, X ^ {\prime})} \left[ \min \left\{\eta (x, x ^ {\prime}), 1 - \eta (x, x ^ {\prime}) \right\} \right]. \tag {11} +$$ + +# F.1.
Derivation for $\mathsf{L}_{\Phi_{\mathrm{hinge}}}$ + +For the hinge loss function $\Phi_{\mathrm{hinge}}(u)\coloneqq \max \{0,1 - u\}$ , for all $h\in \mathcal{H}_{\mathrm{all}},x\in \mathcal{X},x'\in \mathcal{X}$ and $x\neq x^{\prime}$ : + +$$ +\begin{array}{l} \mathcal {C} _ {\mathrm {L} _ {\Phi_ {\mathrm {h i n g e}}}} \left(h, x, x ^ {\prime}\right) \\ = \eta (x, x ^ {\prime}) \mathrm {L} _ {\Phi_ {\text {h i n g e}}} \left(h \left(x ^ {\prime}\right) - h (x)\right) + \left(1 - \eta (x, x ^ {\prime})\right) \mathrm {L} _ {\Phi_ {\text {h i n g e}}} \left(h (x) - h \left(x ^ {\prime}\right)\right) \\ = \eta \left(x, x ^ {\prime}\right) \max \left\{0, 1 - h \left(x ^ {\prime}\right) + h (x) \right\} + \left(1 - \eta \left(x, x ^ {\prime}\right)\right) \max \left\{0, 1 + h \left(x ^ {\prime}\right) - h (x) \right\}. \\ \end{array} +$$ + +Then, + +$$ +\mathcal {C} _ {\mathsf {L} _ {\Phi_ {\mathrm {h i n g e}}}, \mathcal {H} _ {\mathrm {a l l}}} ^ {*} (x, x ^ {\prime}) = \inf _ {h \in \mathcal {H} _ {\mathrm {a l l}}} \mathcal {C} _ {\mathsf {L} _ {\Phi_ {\mathrm {h i n g e}}}} (h, x, x ^ {\prime}) = 1 - | 2 \eta (x, x ^ {\prime}) - 1 |. +$$ + +The $(\mathsf{L}_{\Phi_{\mathrm{hinge}}},\mathcal{H}_{\mathrm{all}})$ -minimizability gap is + +$$ +\begin{array}{l} \mathcal {M} _ {\mathrm {L} _ {\Phi_ {\text {h i n g e}}}} \left(\mathcal {H} _ {\text {a l l}}\right) = \mathcal {R} _ {\mathrm {L} _ {\Phi_ {\text {h i n g e}}}} ^ {*} \left(\mathcal {H} _ {\text {a l l}}\right) - \mathbb {E} _ {(X, X ^ {\prime})} \left[ \mathcal {C} _ {\mathrm {L} _ {\Phi_ {\text {h i n g e}}}, \mathcal {H} _ {\text {a l l}}} ^ {*} (x, x ^ {\prime}) \right] \tag {12} \\ = \mathcal {R} _ {\mathsf {L} _ {\Phi_ {\mathrm {h i n g e}}}} ^ {*} \left(\mathcal {H} _ {\mathrm {a l l}}\right) - \mathbb {E} _ {(X, X ^ {\prime})} \big [ 1 - | 2 \eta (x, x ^ {\prime}) - 1 | \big ]. 
\\ \end{array} +$$ + +Therefore, $\forall h\in \overline{\mathcal{H}}_{\mathrm{all}}(x,x')$ + +$$ +\begin{array}{l} \Delta \mathcal {C} _ {\mathrm {L} _ {\Phi_ {\text {h i n g e}}}, \mathcal {H} _ {\text {a l l}}} (h, x, x ^ {\prime}) \\ \geq \inf _ {h \in \overline {{\mathcal {H}}} _ {\text {a l l}} (x, x ^ {\prime})} \mathcal {C} _ {\mathsf {L} _ {\Phi_ {\text {h i n g e}}}} \left(h, x, x ^ {\prime}\right) - \mathcal {C} _ {\mathsf {L} _ {\Phi_ {\text {h i n g e}}}, \mathcal {H} _ {\text {a l l}}} ^ {*} \left(x, x ^ {\prime}\right) \\ = \eta (x, x ^ {\prime}) \max \{0, 1 - 0 \} + (1 - \eta (x, x ^ {\prime})) \max \{0, 1 + 0 \} - \mathcal {C} _ {\mathsf {L} _ {\Phi_ {\text {h i n g e}}}, \mathcal {H} _ {\text {a l l}}} ^ {*} (x, x ^ {\prime}) \\ = 1 - \left[ 1 - \left| 2 \eta \left(x, x ^ {\prime}\right) - 1 \right| \right] \\ = | 2 \eta (x, x ^ {\prime}) - 1 |, \\ \end{array} +$$ + +which implies that for any $h \in \mathcal{H}_{\mathrm{all}}$ , $x \in \mathcal{X}$ and $x' \in \mathcal{X}$ , + +$$ +\Delta \mathcal {C} _ {\mathrm {L} _ {\Phi_ {\text {h i n g e}}}, \mathcal {H} _ {\text {a l l}}} (h, x, x ^ {\prime}) \geq \left\langle | 2 \eta (x, x ^ {\prime}) - 1 | \right\rangle_ {0} \mathbb {1} _ {h \in \overline {{\mathcal {H}}} _ {\text {a l l}} (x, x ^ {\prime})} = \Delta \mathcal {C} _ {\mathrm {L} _ {0 - 1}, \mathcal {H} _ {\text {a l l}}} (h, x, x ^ {\prime}).
+$$ + +Thus, by Theorem C.1 or Theorem C.2, setting $\epsilon = 0$ yields the $\mathcal{H}_{\mathrm{all}}$ -consistency bound for $\mathsf{L}_{\Phi_{\mathrm{hinge}}}$ , valid for all $h \in \mathcal{H}_{\mathrm{all}}$ : + +$$ +\mathcal {R} _ {\mathrm {L} _ {0 - 1}} (h) - \mathcal {R} _ {\mathrm {L} _ {0 - 1}} ^ {*} (\mathcal {H} _ {\text {a l l}}) \leq \mathcal {R} _ {\mathrm {L} _ {\Phi_ {\text {h i n g e}}}} (h) - \mathcal {R} _ {\mathrm {L} _ {\Phi_ {\text {h i n g e}}}} ^ {*} (\mathcal {H} _ {\text {a l l}}) + \mathcal {M} _ {\mathrm {L} _ {\Phi_ {\text {h i n g e}}}} (\mathcal {H} _ {\text {a l l}}) - \mathcal {M} _ {\mathrm {L} _ {0 - 1}} (\mathcal {H} _ {\text {a l l}}). \tag {13} +$$ + +# F.2. Derivation for $\mathsf{L}_{\Phi_{\rho}}$ + +For the $\rho$ -margin loss function $\Phi_{\rho}(u) := \min \left\{1, \max \left\{0, 1 - \frac{u}{\rho}\right\} \right\}$ , $\rho > 0$ , for all $h \in \mathcal{H}_{\mathrm{all}}$ , $x \in \mathcal{X}$ , $x' \in \mathcal{X}$ and $x \neq x'$ : + +$$ +\begin{array}{l} \mathcal {C} _ {\mathrm {L} _ {\Phi_ {\rho}}} (h, x, x ^ {\prime}) \\ = \eta (x, x ^ {\prime}) \mathrm {L} _ {\Phi_ {\rho}} (h (x ^ {\prime}) - h (x)) + (1 - \eta (x, x ^ {\prime})) \mathrm {L} _ {\Phi_ {\rho}} (h (x) - h (x ^ {\prime})) \\ = \eta (x, x ^ {\prime}) \min \left\{1, \max \left\{0, 1 - \frac {h \left(x ^ {\prime}\right) - h (x)}{\rho} \right\} \right\} + (1 - \eta (x, x ^ {\prime})) \min \left\{1, \max \left\{0, 1 + \frac {h \left(x ^ {\prime}\right) - h (x)}{\rho} \right\} \right\} \\ \geq \mathcal {C} _ {\mathrm {L} _ {0 - 1}} (h, x, x ^ {\prime}). 
\\ \end{array} +$$ + +Then, + +$$ +\mathcal {C} _ {\mathsf {L} _ {\Phi_ {\rho}}, \mathcal {H} _ {\text {a l l}}} ^ {*} (x, x ^ {\prime}) = \inf _ {h \in \mathcal {H} _ {\text {a l l}}} \mathcal {C} _ {\mathsf {L} _ {\Phi_ {\rho}}} (h, x, x ^ {\prime}) = \min \{\eta (x, x ^ {\prime}), 1 - \eta (x, x ^ {\prime}) \} = \mathcal {C} _ {\mathsf {L} _ {0 - 1}, \mathcal {H} _ {\text {a l l}}} ^ {*} (x, x ^ {\prime}). +$$ + +The $(\mathsf{L}_{\Phi_{\rho}},\mathcal{H}_{\mathrm{all}})$ -minimizability gap is + +$$ +\begin{array}{l} \mathcal {M} _ {\mathrm {L} _ {\Phi_ {\rho}}} \left(\mathcal {H} _ {\text {a l l}}\right) = \mathcal {R} _ {\mathrm {L} _ {\Phi_ {\rho}}} ^ {*} \left(\mathcal {H} _ {\text {a l l}}\right) - \mathbb {E} _ {(X, X ^ {\prime})} \left[ \mathcal {C} _ {\mathrm {L} _ {\Phi_ {\rho}}, \mathcal {H} _ {\text {a l l}}} ^ {*} (x, x ^ {\prime}) \right] \tag {14} \\ = \mathcal {R} _ {\mathsf {L} _ {\Phi_ {\rho}}} ^ {*} \left(\mathcal {H} _ {\mathrm {a l l}}\right) - \mathbb {E} _ {(X, X ^ {\prime})} \big [ \min \{\eta (x, x ^ {\prime}), 1 - \eta (x, x ^ {\prime}) \} \big ]. \\ \end{array} +$$ + +Therefore, for any $h\in \mathcal{H}_{\mathrm{all}},x\in \mathcal{X}$ and $x^{\prime}\in \mathcal{X}$ , + +$$ +\Delta \mathcal {C} _ {\mathrm {L} _ {\Phi_ {\rho}}, \mathcal {H} _ {\text {a l l}}} (h, x, x ^ {\prime}) \geq \left\langle | 2 \eta (x, x ^ {\prime}) - 1 | \right\rangle_ {0} \mathbb {1} _ {h \in \overline {{\mathcal {H}}} _ {\text {a l l}} (x, x ^ {\prime})} = \Delta \mathcal {C} _ {\mathrm {L} _ {0 - 1}, \mathcal {H} _ {\text {a l l}}} (h, x, x ^ {\prime}).
+ +$$ + +Thus, by Theorem C.1 or Theorem C.2, setting $\epsilon = 0$ yields the $\mathcal{H}_{\mathrm{all}}$ -consistency bound for $\mathsf{L}_{\Phi_{\rho}}$ , valid for all $h \in \mathcal{H}_{\mathrm{all}}$ : + +$$ +\mathcal {R} _ {\mathrm {L} _ {0 - 1}} (h) - \mathcal {R} _ {\mathrm {L} _ {0 - 1}} ^ {*} (\mathcal {H} _ {\text {a l l}}) \leq \mathcal {R} _ {\mathrm {L} _ {\Phi_ {\rho}}} (h) - \mathcal {R} _ {\mathrm {L} _ {\Phi_ {\rho}}} ^ {*} (\mathcal {H} _ {\text {a l l}}) + \mathcal {M} _ {\mathrm {L} _ {\Phi_ {\rho}}} (\mathcal {H} _ {\text {a l l}}) - \mathcal {M} _ {\mathrm {L} _ {0 - 1}} (\mathcal {H} _ {\text {a l l}}). \tag {15} +$$ + +# F.3. Derivation for $\mathsf{L}_{\Phi_{\mathrm{exp}}}$ + +For the exponential loss function $\Phi_{\mathrm{exp}}(u)\coloneqq e^{-u}$ , for all $h\in \mathcal{H}_{\mathrm{all}},x\in \mathcal{X},x'\in \mathcal{X}$ and $x\neq x'$ : + +$$ +\begin{array}{l} \mathcal {C} _ {\mathrm {L} _ {\Phi_ {\exp}}} (h, x, x ^ {\prime}) = \eta (x, x ^ {\prime}) \mathrm {L} _ {\Phi_ {\exp}} (h (x ^ {\prime}) - h (x)) + (1 - \eta (x, x ^ {\prime})) \mathrm {L} _ {\Phi_ {\exp}} (h (x) - h (x ^ {\prime})) \\ = \eta (x, x ^ {\prime}) e ^ {- h \left(x ^ {\prime}\right) + h (x)} + \left(1 - \eta (x, x ^ {\prime})\right) e ^ {h \left(x ^ {\prime}\right) - h (x)}. \\ \end{array} +$$ + +Then, + +$$ +\mathcal {C} _ {\mathsf {L} _ {\Phi_ {\exp}}, \mathcal {H} _ {\mathrm {a l l}}} ^ {*} (x, x ^ {\prime}) = \inf _ {h \in \mathcal {H} _ {\mathrm {a l l}}} \mathcal {C} _ {\mathsf {L} _ {\Phi_ {\exp}}} (h, x, x ^ {\prime}) = 2 \sqrt {\eta (x , x ^ {\prime}) (1 - \eta (x , x ^ {\prime}))}. +$$ + +
The $(\Phi_{\mathrm{exp}},\mathcal{H}_{\mathrm{all}})$ -minimizability gap is: + +$$ +\begin{array}{l} \mathcal {M} _ {\mathrm {L} _ {\Phi_ {\exp}}} \left(\mathcal {H} _ {\text {a l l}}\right) = \mathcal {R} _ {\mathrm {L} _ {\Phi_ {\exp}}} ^ {*} \left(\mathcal {H} _ {\text {a l l}}\right) - \mathbb {E} _ {(X, X ^ {\prime})} \left[ \mathcal {C} _ {\mathrm {L} _ {\Phi_ {\exp}}, \mathcal {H} _ {\text {a l l}}} ^ {*} \left(x, x ^ {\prime}\right) \right] \tag {16} \\ = \mathcal {R} _ {\mathrm {L} _ {\Phi_ {\exp}}} ^ {*} \left(\mathcal {H} _ {\text {a l l}}\right) - \mathbb {E} _ {(X, X ^ {\prime})} \left[ 2 \sqrt {\eta (x , x ^ {\prime}) \left(1 - \eta (x , x ^ {\prime})\right)} \right]. \\ \end{array} +$$ + +Therefore, $\forall h\in \overline{\mathcal{H}}_{\mathrm{all}}(x,x')$ + +$$ +\begin{array}{l} \Delta \mathcal {C} _ {\mathrm {L} _ {\Phi_ {\exp}}, \mathcal {H} _ {\text {a l l}}} (h, x, x ^ {\prime}) \geq \inf _ {h \in \overline {{\mathcal {H}}} _ {\text {a l l}} (x, x ^ {\prime})} \mathcal {C} _ {\mathrm {L} _ {\Phi_ {\exp}}} (h, x, x ^ {\prime}) - \mathcal {C} _ {\mathrm {L} _ {\Phi_ {\exp}}, \mathcal {H} _ {\text {a l l}}} ^ {*} (x, x ^ {\prime}) \\ = \eta (x, x ^ {\prime}) e ^ {- 0} + (1 - \eta (x, x ^ {\prime})) e ^ {0} - \mathcal {C} _ {\mathsf {L} _ {\Phi_ {\mathrm {e x p}}}, \mathcal {H} _ {\mathrm {a l l}}} ^ {*} (x, x ^ {\prime}) \\ = 1 - 2 \sqrt {\eta \left(x , x ^ {\prime}\right) \left(1 - \eta \left(x , x ^ {\prime}\right)\right)} \\ = \left(\frac {2 \eta \left(x , x ^ {\prime}\right) - 1}{\sqrt {\eta \left(x , x ^ {\prime}\right)} + \sqrt {1 - \eta \left(x , x ^ {\prime}\right)}}\right) ^ {2} \\ \geq \frac {(2 \eta (x , x ^ {\prime}) - 1) ^ {2}}{2}, \\ \end{array} +$$ + +which implies that for any $h \in \mathcal{H}_{\mathrm{all}}$ , $x \in \mathcal{X}$ and $x' \in \mathcal{X}$ , + +$$ +\Delta \mathcal {C} _ {\mathsf {L} _ {\Phi_ {\mathrm {e x p}}}, \mathcal {H} _ {\mathrm {a l l}}} (h, x, x ^ {\prime}) \geq \frac {\left(\Delta \mathcal {C} _ 
{\mathsf {L} _ {0 - 1} , \mathcal {H} _ {\mathrm {a l l}}} (h , x , x ^ {\prime})\right) ^ {2}}{2}. +$$ + +Thus, by Theorem C.1 or Theorem C.2, setting $\epsilon = 0$ yields the $\mathcal{H}_{\mathrm{all}}$ -consistency bound for $\mathsf{L}_{\Phi_{\mathrm{exp}}}$ , valid for all $h \in \mathcal{H}_{\mathrm{all}}$ : + +$$ +\mathcal {R} _ {\mathrm {L} _ {0 - 1}} (h) - \mathcal {R} _ {\mathrm {L} _ {0 - 1}} ^ {*} (\mathcal {H} _ {\text {a l l}}) \leq \sqrt {2} \left(\mathcal {R} _ {\mathrm {L} _ {\Phi_ {\exp}}} (h) - \mathcal {R} _ {\mathrm {L} _ {\Phi_ {\exp}}} ^ {*} (\mathcal {H} _ {\text {a l l}}) + \mathcal {M} _ {\mathrm {L} _ {\Phi_ {\exp}}} (\mathcal {H} _ {\text {a l l}})\right) ^ {\frac {1}{2}} - \mathcal {M} _ {\mathrm {L} _ {0 - 1}} (\mathcal {H} _ {\text {a l l}}). \tag {17} +$$ + +# F.4. Derivation for $\mathsf{L}_{\Phi_{\log}}$ + +For the logistic loss function $\Phi_{\log}(u) \coloneqq \log_2(1 + e^{-u})$ , for all $h \in \mathcal{H}_{\mathrm{all}}$ , $x \in \mathcal{X}$ , $x' \in \mathcal{X}$ and $x \neq x'$ : + +$$ +\begin{array}{l} \mathcal {C} _ {\mathrm {L} _ {\Phi_ {\log}}} (h, x, x ^ {\prime}) = \eta (x, x ^ {\prime}) \mathrm {L} _ {\Phi_ {\log}} (h (x ^ {\prime}) - h (x)) + (1 - \eta (x, x ^ {\prime})) \mathrm {L} _ {\Phi_ {\log}} (h (x) - h (x ^ {\prime})) \\ = \eta (x, x ^ {\prime}) \log_ {2} \left(1 + e ^ {- h (x ^ {\prime}) + h (x)}\right) + \left(1 - \eta (x, x ^ {\prime})\right) \log_ {2} \left(1 + e ^ {h (x ^ {\prime}) - h (x)}\right). 
\\ \end{array} +$$ + +Then, + +$$ +\begin{array}{l} \mathcal {C} _ {\mathsf {L} _ {\Phi_ {\log}}, \mathcal {H} _ {\mathrm {a l l}}} ^ {*} (x, x ^ {\prime}) = \inf _ {h \in \mathcal {H} _ {\mathrm {a l l}}} \mathcal {C} _ {\mathsf {L} _ {\Phi_ {\log}}} (h, x, x ^ {\prime}) \\ = - \eta (x, x ^ {\prime}) \log_ {2} (\eta (x, x ^ {\prime})) - (1 - \eta (x, x ^ {\prime})) \log_ {2} (1 - \eta (x, x ^ {\prime})) \\ \leq 2 \sqrt {\eta (x , x ^ {\prime}) (1 - \eta (x , x ^ {\prime}))}. \qquad \qquad (- a \log_ {2} (a) - b \log_ {2} (b) \leq 2 \sqrt {a b}, a, b \in [ 0, 1 ]) \\ \end{array} +$$ + +The $(\Phi_{\mathrm{log}},\mathcal{H}_{\mathrm{all}})$-minimizability gap is: + +$$ +\begin{array}{l} \mathcal {M} _ {\mathrm {L} _ {\Phi_ {\log}}} \left(\mathcal {H} _ {\text {a l l}}\right) = \mathcal {R} _ {\mathrm {L} _ {\Phi_ {\log}}} ^ {*} \left(\mathcal {H} _ {\text {a l l}}\right) - \mathbb {E} _ {(X, X ^ {\prime})} \left[ \mathcal {C} _ {\mathrm {L} _ {\Phi_ {\log}}, \mathcal {H} _ {\text {a l l}}} ^ {*} (x, x ^ {\prime}) \right] \tag {18} \\ = \mathcal {R} _ {\mathsf {L} _ {\Phi_ {\log}}} ^ {*} \left(\mathcal {H} _ {\mathrm {a l l}}\right) - \mathbb {E} _ {(X, X ^ {\prime})} \bigl [ - \eta (x, x ^ {\prime}) \log_ {2} (\eta (x, x ^ {\prime})) - (1 - \eta (x, x ^ {\prime})) \log_ {2} (1 - \eta (x, x ^ {\prime})) \bigr ].
\\ \end{array} +$$ + +Therefore, $\forall h\in \overline{\mathcal{H}}_{\mathrm{all}}(x,x')$ + +$$ +\begin{array}{l} \Delta \mathcal {C} _ {\mathsf {L} _ {\Phi_ {\log}}, \mathcal {H} _ {\mathrm {a l l}}} (h, x, x ^ {\prime}) \\ \geq \inf _ {h \in \overline {{\mathcal {H}}} _ {\mathrm {a l l}} (x, x ^ {\prime})} \mathcal {C} _ {\mathsf {L} _ {\Phi_ {\log}}} (h, x, x ^ {\prime}) - \mathcal {C} _ {\mathsf {L} _ {\Phi_ {\log}}, \mathcal {H} _ {\mathrm {a l l}}} ^ {*} (x, x ^ {\prime}) \\ = \eta (x, x ^ {\prime}) \log_ {2} (1 + e ^ {- 0}) + (1 - \eta (x, x ^ {\prime})) \log_ {2} (1 + e ^ {0}) - \mathcal {C} _ {\mathsf {L} _ {\Phi_ {\log}}, \mathcal {H} _ {\mathrm {a l l}}} ^ {*} (x, x ^ {\prime}) \\ \geq 1 - 2 \sqrt {\eta (x , x ^ {\prime}) \left(1 - \eta (x , x ^ {\prime})\right)} \\ = \left(\frac {2 \eta \left(x , x ^ {\prime}\right) - 1}{\sqrt {\eta \left(x , x ^ {\prime}\right)} + \sqrt {1 - \eta \left(x , x ^ {\prime}\right)}}\right) ^ {2} \\ \geq \frac {\left(2 \eta \left(x , x ^ {\prime}\right) - 1\right) ^ {2}}{2}, \\ \end{array} +$$ + +which implies that for any $h\in \mathcal{H}_{\mathrm{all}}$ , $x\in \mathcal{X}$ and $x^{\prime}\in \mathcal{X}$ , + +$$ +\Delta \mathcal {C} _ {\mathsf {L} _ {\Phi_ {\log}}, \mathcal {H} _ {\mathrm {a l l}}} (h, x, x ^ {\prime}) \geq \frac {\left(\Delta \mathcal {C} _ {\mathsf {L} _ {0 - 1} , \mathcal {H} _ {\mathrm {a l l}}} (h , x , x ^ {\prime})\right) ^ {2}}{2}.
+$$ + +Thus, by Theorem C.1 or Theorem C.2, setting $\epsilon = 0$ yields the $\mathcal{H}_{\mathrm{all}}$ -consistency bound for $\mathsf{L}_{\Phi_{\mathrm{log}}}$ , valid for all $h \in \mathcal{H}_{\mathrm{all}}$ : + +$$ +\mathcal {R} _ {\mathrm {L} _ {0 - 1}} (h) - \mathcal {R} _ {\mathrm {L} _ {0 - 1}} ^ {*} (\mathcal {H} _ {\text {a l l}}) \leq \sqrt {2} \left(\mathcal {R} _ {\mathrm {L} _ {\Phi_ {\log}}} (h) - \mathcal {R} _ {\mathrm {L} _ {\Phi_ {\log}}} ^ {*} (\mathcal {H} _ {\text {a l l}}) + \mathcal {M} _ {\mathrm {L} _ {\Phi_ {\log}}} (\mathcal {H} _ {\text {a l l}})\right) ^ {\frac {1}{2}} - \mathcal {M} _ {\mathrm {L} _ {0 - 1}} (\mathcal {H} _ {\text {a l l}}). \tag {19} +$$ + +# F.5. Derivation for $\mathsf{L}_{\Phi_{\mathrm{sq}}}$ + +For the squared hinge loss function $\Phi_{\mathrm{sq}}(u)\coloneqq (1 - u)^2\mathbb{1}_{u\leq 1}$ , for all $h\in \mathcal{H}_{\mathrm{all}},x\in \mathcal{X},x'\in \mathcal{X}$ and $x\neq x'$ : + +$$ +\begin{array}{l} \mathcal {C} _ {\mathrm {L} _ {\Phi_ {\mathrm {s q}}}} (h, x, x ^ {\prime}) \\ = \eta (x, x ^ {\prime}) \mathsf {L} _ {\Phi_ {\mathrm {s q}}} \big (h (x ^ {\prime}) - h (x) \big) + (1 - \eta (x, x ^ {\prime})) \mathsf {L} _ {\Phi_ {\mathrm {s q}}} \big (h (x) - h (x ^ {\prime}) \big) \\ = \eta (x, x ^ {\prime}) (1 - h (x ^ {\prime}) + h (x)) ^ {2} \mathbb {1} _ {h (x ^ {\prime}) - h (x) \leq 1} + (1 - \eta (x, x ^ {\prime})) (1 + h (x ^ {\prime}) - h (x)) ^ {2} \mathbb {1} _ {h (x ^ {\prime}) - h (x) \geq - 1}. \\ \end{array} +$$ + +Then, + +$$ +\mathcal {C} _ {\mathsf {L} _ {\Phi_ {\mathrm {s q}}}, \mathcal {H} _ {\mathrm {a l l}}} ^ {*} (x, x ^ {\prime}) = \inf _ {h \in \mathcal {H} _ {\mathrm {a l l}}} \mathcal {C} _ {\Phi_ {\mathrm {s q}}} (h, x, x ^ {\prime}) = 4 \eta (x, x ^ {\prime}) (1 - \eta (x, x ^ {\prime})). 
+$$ + +The $(\Phi_{\mathrm{sq}},\mathcal{H}_{\mathrm{all}})$ -minimizability gap is: + +$$ +\begin{array}{l} \mathcal {M} _ {\mathrm {L} _ {\Phi_ {\mathrm {s q}}}} \left(\mathcal {H} _ {\text {a l l}}\right) = \mathcal {R} _ {\mathrm {L} _ {\Phi_ {\mathrm {s q}}}} ^ {*} \left(\mathcal {H} _ {\text {a l l}}\right) - \mathbb {E} _ {(X, X ^ {\prime})} \left[ \mathcal {C} _ {\mathrm {L} _ {\Phi_ {\mathrm {s q}}}, \mathcal {H} _ {\text {a l l}}} ^ {*} \left(x, x ^ {\prime}\right) \right] \tag {20} \\ = \mathcal {R} _ {\mathsf {L} _ {\Phi_ {\mathrm {s q}}}} ^ {*} \left(\mathcal {H} _ {\mathrm {a l l}}\right) - \mathbb {E} _ {(X, X ^ {\prime})} \big [ 4 \eta (x, x ^ {\prime}) \big (1 - \eta (x, x ^ {\prime}) \big) \big ]. \\ \end{array} +$$ + +Therefore, $\forall h\in \overline{\mathcal{H}}_{\mathrm{all}}(x,x')$ + +$$ +\begin{array}{l} \Delta \mathcal {C} _ {\mathrm {L} _ {\Phi_ {\mathrm {s q}}}, \mathcal {H} _ {\mathrm {a l l}}} (h, x, x ^ {\prime}) \geq \inf _ {h \in \overline {{\mathcal {H}}} _ {\mathrm {a l l}} (x, x ^ {\prime})} \mathcal {C} _ {\mathrm {L} _ {\Phi_ {\mathrm {s q}}}} (h, x, x ^ {\prime}) - \mathcal {C} _ {\mathrm {L} _ {\Phi_ {\mathrm {s q}}}, \mathcal {H} _ {\mathrm {a l l}}} ^ {*} (x, x ^ {\prime}) \\ = \eta (x, x ^ {\prime}) + (1 - \eta (x, x ^ {\prime})) - \mathcal {C} _ {\mathsf {L} _ {\Phi_ {\mathrm {s q}}}, \mathcal {H} _ {\mathrm {a l l}}} ^ {*} (x, x ^ {\prime}) \\ = 1 - 4 \eta \left(x, x ^ {\prime}\right) \left(1 - \eta \left(x, x ^ {\prime}\right)\right) \\ = \left(2 \eta \left(x, x ^ {\prime}\right) - 1\right) ^ {2}, \\ \end{array} +$$ + +which implies that for any $h \in \mathcal{H}_{\mathrm{all}}$ , $x \in \mathcal{X}$ and $x' \in \mathcal{X}$ , + +$$ +\Delta \mathcal {C} _ {\mathsf {L} _ {\Phi_ {\mathrm {s q}}}, \mathcal {H} _ {\mathrm {a l l}}} (h, x, x ^ {\prime}) \geq \left(\Delta \mathcal {C} _ {\mathsf {L} _ {0 - 1}, \mathcal {H} _ {\mathrm {a l l}}} (h, x, x ^ {\prime})\right) ^ {2}. 
+ +$$ + +Thus, by Theorem C.1 or Theorem C.2, setting $\epsilon = 0$ yields the $\mathcal{H}_{\mathrm{all}}$ -consistency bound for $\mathsf{L}_{\Phi_{\mathrm{sq}}}$ , valid for all $h \in \mathcal{H}_{\mathrm{all}}$ : + +$$ +\mathcal {R} _ {\mathrm {L} _ {0 - 1}} (h) - \mathcal {R} _ {\mathrm {L} _ {0 - 1}} ^ {*} (\mathcal {H} _ {\text {a l l}}) \leq \left(\mathcal {R} _ {\mathrm {L} _ {\Phi_ {\mathrm {s q}}}} (h) - \mathcal {R} _ {\mathrm {L} _ {\Phi_ {\mathrm {s q}}}} ^ {*} (\mathcal {H} _ {\text {a l l}}) + \mathcal {M} _ {\mathrm {L} _ {\Phi_ {\mathrm {s q}}}} (\mathcal {H} _ {\text {a l l}})\right) ^ {\frac {1}{2}} - \mathcal {M} _ {\mathrm {L} _ {0 - 1}} (\mathcal {H} _ {\text {a l l}}). \tag {21} +$$ + +# F.6. Derivation for $\mathsf{L}_{\Phi_{\mathrm{sig}}}$ + +For the sigmoid loss function $\Phi_{\mathrm{sig}}(u)\coloneqq 1 - \tanh (ku),k > 0$ , for all $h\in \mathcal{H}_{\mathrm{all}},x\in \mathcal{X},x^{\prime}\in \mathcal{X}$ and $x\neq x^{\prime}$ : + +$$ +\begin{array}{l} \mathcal {C} _ {\mathrm {L} _ {\Phi_ {\mathrm {s i g}}}} \left(h, x, x ^ {\prime}\right) \\ = \eta (x, x ^ {\prime}) \mathrm {L} _ {\Phi_ {\mathrm {s i g}}} \left(h \left(x ^ {\prime}\right) - h (x)\right) + \left(1 - \eta (x, x ^ {\prime})\right) \mathrm {L} _ {\Phi_ {\mathrm {s i g}}} \left(h (x) - h \left(x ^ {\prime}\right)\right) \\ = \eta (x, x ^ {\prime}) (1 - \tanh (k [ h (x ^ {\prime}) - h (x) ])) + (1 - \eta (x, x ^ {\prime})) (1 + \tanh (k [ h (x ^ {\prime}) - h (x) ])). \\ \end{array} +$$ + +Then, + +$$ +\mathcal {C} _ {\mathsf {L} _ {\Phi_ {\mathrm {s i g}}}, \mathcal {H} _ {\mathrm {a l l}}} ^ {*} (x, x ^ {\prime}) = \inf _ {h \in \mathcal {H} _ {\mathrm {a l l}}} \mathcal {C} _ {\Phi_ {\mathrm {s i g}}} (h, x, x ^ {\prime}) = 1 - | 1 - 2 \eta (x, x ^ {\prime}) |.
+$$ + +The $(\Phi_{\mathrm{sig}},\mathcal{H}_{\mathrm{all}})$ -minimizability gap is: + +$$ +\begin{array}{l} \mathcal {M} _ {\mathrm {L} _ {\Phi_ {\mathrm {s i g}}}} \left(\mathcal {H} _ {\text {a l l}}\right) = \mathcal {R} _ {\mathrm {L} _ {\Phi_ {\mathrm {s i g}}}} ^ {*} \left(\mathcal {H} _ {\text {a l l}}\right) - \mathbb {E} _ {(X, X ^ {\prime})} \left[ \mathcal {C} _ {\mathrm {L} _ {\Phi_ {\mathrm {s i g}}}, \mathcal {H} _ {\text {a l l}}} ^ {*} (x, x ^ {\prime}) \right] \tag {22} \\ = \mathcal {R} _ {\mathsf {L} _ {\Phi_ {\mathrm {s i g}}}} ^ {*} \left(\mathcal {H} _ {\mathrm {a l l}}\right) - \mathbb {E} _ {(X, X ^ {\prime})} \big [ 1 - | 1 - 2 \eta (x, x ^ {\prime}) | \big ]. \\ \end{array} +$$ + +Therefore, $\forall h\in \overline{\mathcal{H}}_{\mathrm{all}}(x,x')$ + +$$ +\begin{array}{l} \Delta \mathcal {C} _ {\mathrm {L} _ {\Phi_ {\text {s i g}}}, \mathcal {H} _ {\text {a l l}}} \left(h, x, x ^ {\prime}\right) \geq \inf _ {h \in \overline {{\mathcal {H}}} _ {\text {a l l}} \left(x, x ^ {\prime}\right)} \mathcal {C} _ {\mathrm {L} _ {\Phi_ {\text {s i g}}}} \left(h, x, x ^ {\prime}\right) - \mathcal {C} _ {\mathrm {L} _ {\Phi_ {\text {s i g}}}, \mathcal {H} _ {\text {a l l}}} ^ {*} \left(x, x ^ {\prime}\right) \\ = 1 - \left| 1 - 2 \eta (x, x ^ {\prime}) \right| \tanh (0) - \mathcal {C} _ {\mathsf {L} _ {\Phi_ {\mathrm {s i g}}}, \mathcal {H} _ {\mathrm {a l l}}} ^ {*} (x, x ^ {\prime}) \\ = \left| 1 - 2 \eta \left(x, x ^ {\prime}\right) \right|, \\ \end{array} +$$ + +which implies that for any $h \in \mathcal{H}_{\mathrm{all}}$ , $x \in \mathcal{X}$ and $x' \in \mathcal{X}$ , + +$$ +\Delta \mathcal {C} _ {\mathsf {L} _ {\Phi_ {\mathrm {s i g}}}, \mathcal {H} _ {\mathrm {a l l}}} (h, x, x ^ {\prime}) \geq \Delta \mathcal {C} _ {\mathsf {L} _ {0 - 1}, \mathcal {H} _ {\mathrm {a l l}}} (h, x, x ^ {\prime}). 
+ +$$ + +Thus, by Theorem C.1 or Theorem C.2, setting $\epsilon = 0$ yields the $\mathcal{H}_{\mathrm{all}}$ -consistency bound for $\mathsf{L}_{\Phi_{\mathrm{sig}}}$ , valid for all $h \in \mathcal{H}_{\mathrm{all}}$ : + +$$ +\mathcal {R} _ {\mathrm {L} _ {0 - 1}} (h) - \mathcal {R} _ {\mathrm {L} _ {0 - 1}} ^ {*} (\mathcal {H} _ {\text {a l l}}) \leq \mathcal {R} _ {\mathrm {L} _ {\Phi_ {\text {s i g}}}} (h) - \mathcal {R} _ {\mathrm {L} _ {\Phi_ {\text {s i g}}}} ^ {*} (\mathcal {H} _ {\text {a l l}}) + \mathcal {M} _ {\mathrm {L} _ {\Phi_ {\text {s i g}}}} (\mathcal {H} _ {\text {a l l}}) - \mathcal {M} _ {\mathrm {L} _ {0 - 1}} (\mathcal {H} _ {\text {a l l}}). \tag {23} +$$ + +# G. Minimizability gaps can be non-zero for $\mathcal{H} = \mathcal{H}_{\mathrm{all}}$ in the general pairwise misranking case. + +Consider the uniform distribution supported on the three pairs $\{(x,x'),(x',x''),(x,x'')\}$ . Let $\eta (x,x') = \eta (x',x'') = 1$ and $\eta (x,x'') = 0$ . Note that for any $h\in \mathcal{H}_{\mathrm{all}}$ , at least one of the three differences $h(x^{\prime}) - h(x),h(x^{\prime \prime}) - h(x^{\prime}),h(x) - h(x^{\prime \prime})$ is less than or equal to 0, since the sum of the three differences is 0.
Therefore, + +$$ +\begin{array}{l} \mathcal {R} _ {\mathrm {L} _ {0 - 1}} (h) = \frac {1}{3} \mathcal {C} _ {\mathrm {L} _ {0 - 1}} (h, x, x ^ {\prime}) + \frac {1}{3} \mathcal {C} _ {\mathrm {L} _ {0 - 1}} (h, x ^ {\prime}, x ^ {\prime \prime}) + \frac {1}{3} \mathcal {C} _ {\mathrm {L} _ {0 - 1}} (h, x, x ^ {\prime \prime}) \\ = \frac {1}{3} \mathbb {1} _ {h (x ^ {\prime}) < h (x)} + \frac {1}{3} \mathbb {1} _ {h (x ^ {\prime \prime}) < h (x ^ {\prime})} + \frac {1}{3} \mathbb {1} _ {h (x ^ {\prime \prime}) \geq h (x)} \\ \geq \frac {1}{3} \\ \end{array} +$$ + +$$ +\begin{array}{l} \mathcal {R} _ {\mathrm {L} _ {\Phi_ {\rho}}} (h) = \frac {1}{3} \mathcal {C} _ {\mathrm {L} _ {\Phi_ {\rho}}} (h, x, x ^ {\prime}) + \frac {1}{3} \mathcal {C} _ {\mathrm {L} _ {\Phi_ {\rho}}} (h, x ^ {\prime}, x ^ {\prime \prime}) + \frac {1}{3} \mathcal {C} _ {\mathrm {L} _ {\Phi_ {\rho}}} (h, x, x ^ {\prime \prime}) \\ = \frac {1}{3} \Phi_ {\rho} \left(h \left(x ^ {\prime}\right) - h (x)\right) + \frac {1}{3} \Phi_ {\rho} \left(h \left(x ^ {\prime \prime}\right) - h \left(x ^ {\prime}\right)\right) + \frac {1}{3} \Phi_ {\rho} \left(h (x) - h \left(x ^ {\prime \prime}\right)\right) \\ \geq \frac {1}{3} \quad (\Phi_ {\rho} (u) = 1, u \leq 0) \\ \end{array} +$$ + +$$ +\begin{array}{l} \mathcal {R} _ {\mathrm {L} _ {\Phi_ {\text {c o n v e x}}}} (h) = \frac {1}{3} \mathcal {C} _ {\mathrm {L} _ {\Phi_ {\text {c o n v e x}}}} (h, x, x ^ {\prime}) + \frac {1}{3} \mathcal {C} _ {\mathrm {L} _ {\Phi_ {\text {c o n v e x}}}} (h, x ^ {\prime}, x ^ {\prime \prime}) + \frac {1}{3} \mathcal {C} _ {\mathrm {L} _ {\Phi_ {\text {c o n v e x}}}} (h, x, x ^ {\prime \prime}) \\ = \frac {1}{3} \Phi_ {\text {c o n v e x}} \left(h \left(x ^ {\prime}\right) - h (x)\right) + \frac {1}{3} \Phi_ {\text {c o n v e x}} \left(h \left(x ^ {\prime \prime}\right) - h \left(x ^ {\prime}\right)\right) + \frac {1}{3} \Phi_ {\text {c o n v e x}} \left(h (x) - h \left(x ^ {\prime \prime}\right)\right) \\ \geq \Phi_ {\text {c o n v e x}} \left(\frac {1}{3} \left[ h \left(x ^ {\prime}\right) - h (x) + h \left(x ^ {\prime \prime}\right) - h \left(x ^ {\prime}\right) + h (x) - h \left(x ^ {\prime \prime}\right) \right]\right) \quad (\text {c o n v e x i t y}) \\ = \Phi_ {\text {c o n v e x}} (0) \\ \geq 1 \quad (\Phi_ {\text {c o n v e x}} (0) \geq 1) \\ \end{array} +$$ + +where equality in the first chain is achieved by $h = 0$ , equality in the second chain by any $h$ with $h(x) = 0$ , $h(x') = \rho$ and $h(x'') = 2\rho$ (so that the first two $\Phi_{\rho}$ terms vanish), and $\mathcal{R}_{\mathrm{L}_{\Phi_{\text{convex}}}}(h) = \Phi_{\text{convex}}(0)$ for $h = 0$ . Thus, using the fact that $\eta(x,x') = \eta(x',x'') = 1$ and $\eta(x,x'') = 0$ , we obtain + +$$ +\begin{array}{l} \mathcal {M} _ {\mathrm {L} _ {0 - 1}} \left(\mathcal {H} _ {\text {a l l}}\right) = \mathcal {R} _ {\mathrm {L} _ {0 - 1}} ^ {*} \left(\mathcal {H} _ {\text {a l l}}\right) = \frac {1}{3} \neq 0 \\ \mathcal {M} _ {\mathrm {L} _ {\Phi_ {\rho}}} \left(\mathcal {H} _ {\mathrm {a l l}}\right) = \mathcal {R} _ {\mathrm {L} _ {\Phi_ {\rho}}} ^ {*} \left(\mathcal {H} _ {\mathrm {a l l}}\right) = \frac {1}{3} \neq 0 \\ \mathcal {M} _ {\mathrm {L} _ {\Phi_ {\text {c o n v e x}}}} \left(\mathcal {H} _ {\text {a l l}}\right) = \mathcal {R} _ {\mathrm {L} _ {\Phi_ {\text {c o n v e x}}}} ^ {*} \left(\mathcal {H} _ {\text {a l l}}\right) \geq 1 \neq 0. \\ \end{array} +$$ + +# H. Characterization of distribution order and minimizability gap (Proof of Theorem 2.5, Theorem 2.6 and Theorem 2.7) + +Theorem 2.5 (Characterization of distribution order). The distribution order is transitive and there exists a dense countable subset $\bar{\mathcal{X}}\subset \mathcal{X}$ with respect to the distribution order if and only if there exists $h\in \mathcal{H}_{\mathrm{all}}$ inducing the distribution order. + +Proof. $\Longleftarrow$ : Assume there exists $h \in \mathcal{H}_{\mathrm{all}}$ inducing the distribution order. For all $x, x', x'' \in \mathcal{X}$ such that $x \stackrel{\mathcal{D}}{\leq} x'$ and $x' \stackrel{\mathcal{D}}{\leq} x''$ , by definition, we have $h(x) \leq h(x')$ and $h(x') \leq h(x'')$ , which implies that $h(x) \leq h(x'')$ .
Then, by definition, we obtain $x \stackrel{\mathcal{D}}{\leq} x''$ and conclude that the distribution order is transitive. Furthermore, we can construct the countable set $\bar{\mathcal{X}}$ using the following procedure: for any open interval $(h(x), h(x'))$ , $x, x' \in \mathcal{X}$ such that $h^{-1}((h(x), h(x'))) = \emptyset$ , we place $x, x'$ in $\bar{\mathcal{X}}$ . Those intervals are countable since each of them contains a rational number and any two of them are disjoint. For any open interval with rational endpoints $m, n$ such that $h^{-1}((m, n)) \neq \emptyset$ , place any $x \in h^{-1}((m, n))$ in $\bar{\mathcal{X}}$ . Again, those open intervals are also countable since the rational numbers are countable. Thus, $\bar{\mathcal{X}}$ is countable. Next, we verify that $\bar{\mathcal{X}}$ is dense. Indeed, for any $x, x' \in \mathcal{X}$ that satisfy $x \stackrel{\mathcal{D}}{\leq} x'$ and not $x' \stackrel{\mathcal{D}}{\leq} x$ , we have $h(x) < h(x')$ . If $h^{-1}((h(x), h(x'))) = \emptyset$ , by the procedure, we know there exists $\bar{x} \in \bar{\mathcal{X}}$ such that $x \stackrel{\mathcal{D}}{\leq} \bar{x} \stackrel{\mathcal{D}}{\leq} x'$ ; if $h^{-1}((h(x), h(x'))) \neq \emptyset$ and assume $x^* \in h^{-1}((h(x), h(x')))$ , then, we can take rational numbers $m, n$ such that $h(x) < m < h(x^*) < n < h(x')$ , by the procedure, we know there exists $\bar{x} \in \bar{\mathcal{X}}$ such that $x \stackrel{\mathcal{D}}{\leq} \bar{x} \stackrel{\mathcal{D}}{\leq} x'$ . In conclusion, $\bar{\mathcal{X}}$ is dense and countable. + +$\Longrightarrow$ : Let $\bar{\mathcal{X}} = \{\bar{x}_1,\bar{x}_2,\bar{x}_3,\ldots \}$ .
For any $x\in \mathcal{X}$, we use the following notation for convenience: $\mathsf{L} (x) =$

$\left\{n \in \mathbb{N} : \bar{x}_n \stackrel{\mathcal{D}}{\leq} x \text{ and not } x \stackrel{\mathcal{D}}{\leq} \bar{x}_n \right\}$ and $\mathsf{R}(x) = \left\{n \in \mathbb{N} : x \stackrel{\mathcal{D}}{\leq} \bar{x}_n \text{ and not } \bar{x}_n \stackrel{\mathcal{D}}{\leq} x \right\}$. Then, take

$$
h ^ {*} (x) = \sum_ {n \in \mathsf {L} (x)} \frac {1}{2 ^ {n}} - \sum_ {n \in \mathsf {R} (x)} \frac {1}{2 ^ {n}}.
$$

Next, we verify that $h^*$ induces the distribution order. Indeed, for any $x \stackrel{\mathcal{D}}{\leq} x'$, by the transitivity of the distribution order, we have

$$
\mathsf {L} (x) \subseteq \mathsf {L} (x ^ {\prime}), \quad \mathsf {R} (x ^ {\prime}) \subseteq \mathsf {R} (x),
$$

which implies that $h^*(x) \leq h^*(x')$. Also, for any $x' \stackrel{\mathcal{D}}{\leq} x$ and $x \stackrel{\mathcal{D}}{\leq} x'$, we have $h^*(x) = h^*(x')$. Moreover, for any $x' \stackrel{\mathcal{D}}{\leq} x$ and not $x \stackrel{\mathcal{D}}{\leq} x'$, there exists $\bar{x} = \bar{x}_n \in \bar{\mathcal{X}}$ such that $x' \stackrel{\mathcal{D}}{\leq} \bar{x} \stackrel{\mathcal{D}}{\leq} x$. Therefore, the index $n$ belongs to at least one of $\mathsf{L}(x)$ and $\mathsf{R}(x')$. Since $n \notin \mathsf{L}(x')$ and $n \notin \mathsf{R}(x)$, we obtain

$$
\text {either } \mathsf {L} \left(x ^ {\prime}\right) \subset \mathsf {L} (x) \text { or } \mathsf {R} (x) \subset \mathsf {R} \left(x ^ {\prime}\right),
$$

which implies that $h^*(x) > h^*(x')$. Therefore, $h^*$ induces the distribution order.

Theorem 2.6. Assume that the distribution order is a total order and $\eta(x, x')$ is continuous on $\mathcal{X} \times \mathcal{X}$. Then, there exists $h \in \mathcal{H}_{\mathrm{all}}$ inducing the distribution order.

Proof.
Define $\stackrel{\mathcal{D}}{\prec}$ to be the strict relation associated with $\stackrel{\mathcal{D}}{\leq}$, defined as $x \stackrel{\mathcal{D}}{\prec} x'$ if $x \stackrel{\mathcal{D}}{\leq} x'$ and not $x' \stackrel{\mathcal{D}}{\leq} x$. Let $f(x, x') = \eta(x, x') - \eta(x', x)$. By the assumption, for all $x, x' \in \mathcal{X}$,

- $x\stackrel {\mathcal{D}}{\prec}x^{\prime}\Longleftrightarrow f(x,x^{\prime}) > 0$.
- $f(x, x')$ is continuous in both $x$ and $x' \Rightarrow \{ \bar{x} \in \mathcal{X} \mid f(x, \bar{x}) > 0 \} = \{ \bar{x} \in \mathcal{X} \mid x \stackrel{\mathcal{D}}{\prec} \bar{x} \}$ is open in $\mathcal{X}$.
- $f(x, x') = -f(x', x)$. In particular, $f(x, x) = 0$.

Now assume $x \stackrel{\mathcal{D}}{\prec} x'$, i.e., $f(x, x') > 0$, so $f(x', x) < 0$. Consider the following continuous functions on $[0,1]$,

$$
g _ {x} (t) = f \left(x, t x + (1 - t) x ^ {\prime}\right)
$$

$$
g _ {x ^ {\prime}} (t) = f \left(x ^ {\prime}, t x + (1 - t) x ^ {\prime}\right).
$$

We know that

- $g_{x}(0) > 0$, $g_{x}(1) = 0 \Rightarrow g_{x}(t) > 0$ when $t \in (0,1)$ (since $g_{x}$ is continuous and $t = 1$ is its only zero point, it cannot change sign on $[0,1)$)
- $g_{x'}(0) = 0$, $g_{x'}(1) < 0 \Rightarrow g_{x'}(t) < 0$ when $t \in (0,1)$ (since $g_{x'}$ is continuous and $t = 0$ is its only zero point).

Therefore, $\left\{\bar{x} \in \mathcal{X} \mid x \stackrel{\mathcal{D}}{\prec} \bar{x} \stackrel{\mathcal{D}}{\prec} x'\right\} \neq \emptyset$. Note that $\left\{\bar{x} \in \mathcal{X} \mid x \stackrel{\mathcal{D}}{\prec} \bar{x} \stackrel{\mathcal{D}}{\prec} x'\right\} = \left\{\bar{x} \in \mathcal{X} \mid \bar{x} \stackrel{\mathcal{D}}{\prec} x'\right\} \cap \left\{\bar{x} \in \mathcal{X} \mid x \stackrel{\mathcal{D}}{\prec} \bar{x}\right\}$, the intersection of two open subsets, which is also open. Any nonempty open set includes at least one rational point; picking such a point for each such pair in $\bar{\mathcal{X}}$, we obtain that $\bar{\mathcal{X}}$ is dense and countable. By Theorem 2.5, we conclude that there exists $h \in \mathcal{H}_{\mathrm{all}}$ inducing the distribution order.

Theorem 2.7. Assume that for all $x, x' \in \mathcal{X}$, $\eta(x, x') + \eta(x', x) = 1$.
Then, for any hypothesis set $\mathcal{H}$, if there exists $h \in \mathcal{H}$ inducing the distribution order, the minimizability gap of the pairwise misranking loss is null, $\mathcal{M}_{\mathrm{L}_{0-1}}(\mathcal{H}) = 0$.

Proof. Assume that $h^* \in \mathcal{H}$ induces the distribution order. Then,

$$
\begin{array}{l} \mathcal {R} _ {\mathrm {L} _ {0 - 1}} \left(h ^ {*}\right) = \underset {(x, x ^ {\prime}) \sim \mathcal {D}} {\mathbb {E}} \left[ \mathrm {L} _ {0 - 1} \left(h ^ {*}, x, x ^ {\prime}, y\right) \right] \\ = \underset {(X, X ^ {\prime})} {\mathbb {E}} \left[ \eta (x, x ^ {\prime}) \mathbb {1} _ {h ^ {*} \left(x ^ {\prime}\right) < h ^ {*} (x)} + (1 - \eta (x, x ^ {\prime})) \mathbb {1} _ {h ^ {*} \left(x ^ {\prime}\right) \geq h ^ {*} (x)} \right] \\ = \underset {(X, X ^ {\prime})} {\mathbb {E}} \left[ \eta (x, x ^ {\prime}) \mathbb {1} _ {\eta (x ^ {\prime}, x) > \eta (x, x ^ {\prime})} + (1 - \eta (x, x ^ {\prime})) \mathbb {1} _ {\eta (x, x ^ {\prime}) \geq \eta (x ^ {\prime}, x)} \right] \quad \left(h ^ {*} \in \mathcal {H} \text { induces the distribution order}\right) \\ = \underset {(X, X ^ {\prime})} {\mathbb {E}} \left[ \eta (x, x ^ {\prime}) \mathbb {1} _ {1 - \eta (x, x ^ {\prime}) > \eta (x, x ^ {\prime})} + (1 - \eta (x, x ^ {\prime})) \mathbb {1} _ {\eta (x, x ^ {\prime}) \geq 1 - \eta (x, x ^ {\prime})} \right] \quad (\eta (x, x ^ {\prime}) + \eta (x ^ {\prime}, x) = 1) \\ = \underset {(X, X ^ {\prime})} {\mathbb {E}} \left[ \min \left\{\eta \left(x, x ^ {\prime}\right), 1 - \eta \left(x, x ^ {\prime}\right) \right\} \right] \\ = \underset {(X, X ^ {\prime})} {\mathbb {E}} \left[ \mathcal {C} _ {\mathrm {L} _ {0 - 1}} ^ {*} \left(\mathcal {H}, x, x ^ {\prime}\right) \right]. \\ \end{array}
$$

Therefore, $\mathcal{M}_{\mathsf{L}_{0 - 1}}(\mathcal{H}) = \mathcal{R}_{\mathsf{L}_{0 - 1}}^* (\mathcal{H}) - \mathbb{E}_{(X,X')}\big[\mathcal{C}_{\mathsf{L}_{0 - 1}}^* (\mathcal{H},x,x')\big] \leq \mathcal{R}_{\mathsf{L}_{0 - 1}}(h^*) - \mathbb{E}_{(X,X')}\big[\mathcal{C}_{\mathsf{L}_{0 - 1}}^* (\mathcal{H},x,x')\big] = 0$, and since the minimizability gap is always nonnegative, $\mathcal{M}_{\mathsf{L}_{0 - 1}}(\mathcal{H}) = 0$.

# I.
Negative results for bipartite ranking (Proof of Theorem 4.1)

Theorem 4.1 (Negative results for bipartite ranking). Assume that $\mathcal{X}$ contains an interior point $x_0$ and that $\mathcal{H}$ is regular for bipartite ranking, contains 0 and is equicontinuous at $x_0$. If for some function $f$ that is non-decreasing and continuous at 0, the following bound holds for all $h \in \mathcal{H}$ and any distribution,

$$
\mathcal {R} _ {\widetilde {\mathcal {L}} _ {0 - 1}} (h) - \mathcal {R} _ {\widetilde {\mathcal {L}} _ {0 - 1}} ^ {*} (\mathcal {H}) \leq f \left(\mathcal {R} _ {\widetilde {\mathcal {L}} _ {\Phi}} (h) - \mathcal {R} _ {\widetilde {\mathcal {L}} _ {\Phi}} ^ {*} (\mathcal {H})\right),
$$

then $f(t) \geq \frac{1}{2}$ for any $t \geq 0$.

Proof. Assume $x_0 \in \mathcal{X}$ is an interior point and $h_0 = 0 \in \mathcal{H}$. By the assumption that $x_0$ is an interior point and $\mathcal{H}$ is equicontinuous at $x_0$, for any $\epsilon > 0$, we are able to take $x' \in \mathcal{X}$ with $x' \neq x_0$ such that $|h(x') - h(x_0)| < \epsilon$ for all $h \in \mathcal{H}$. Consider the distribution supported on $\{x_0, x'\}$ with $\eta(x_0) = 1$ and $\eta(x') = 0$. Then, for any $h \in \mathcal{H}$,

$$
\mathcal {R} _ {\widetilde {\mathsf {L}} _ {0 - 1}} (h) = \mathcal {C} _ {\widetilde {\mathsf {L}} _ {0 - 1}} (h, x _ {0}, x ^ {\prime}) = \mathbb {1} _ {h (x _ {0}) < h (x ^ {\prime})} + \frac {1}{2} \mathbb {1} _ {h (x _ {0}) = h (x ^ {\prime})} \geq 0,
$$

where the equality can be achieved for some $h \in \mathcal{H}$ since $\mathcal{H}$ is regular for bipartite ranking. Therefore,

$$
\mathcal {R} _ {\widetilde {\mathsf {L}} _ {0 - 1}} ^ {*} (\mathcal {H}) = \mathcal {C} _ {\widetilde {\mathsf {L}} _ {0 - 1}} ^ {*} (\mathcal {H}, x _ {0}, x ^ {\prime}) = \inf _ {h \in \mathcal {H}} \mathcal {C} _ {\widetilde {\mathsf {L}} _ {0 - 1}} (h, x _ {0}, x ^ {\prime}) = 0.
$$

Note $\mathcal{R}_{\widetilde{\mathsf{L}}_{0 - 1}}(h_0) = \frac{1}{2}$.
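As an illustrative aside, this two-point construction can be checked numerically. The sketch below uses our own ad hoc helper names (`misranking_risk`, `phi`) and takes the logistic loss as one concrete non-increasing $\Phi$, which is an assumption of this sketch rather than a choice made in the proof. It confirms that the constant hypothesis $h_0 = 0$ incurs misranking risk $\frac{1}{2}$ while its surrogate excess risk $\Phi(0) - \Phi(\epsilon)$ vanishes as $\epsilon \to 0$:

```python
import math

# Two-point distribution from the construction above: eta(x0) = 1, eta(x') = 0,
# so the only labeled pair is (x0 positive, x' negative) and the bipartite
# misranking risk of a score function h reduces to its conditional risk there.
def misranking_risk(h_x0, h_xp):
    if h_x0 < h_xp:      # x0 ranked below x': a full mistake
        return 1.0
    if h_x0 == h_xp:     # tie: counted as half a mistake
        return 0.5
    return 0.0           # correct order

# Logistic surrogate, one concrete non-increasing Phi (an assumption here).
def phi(u):
    return math.log2(1.0 + math.exp(-u))

# The constant hypothesis h0 = 0 ties the two points: misranking risk 1/2,
# while the best-in-class risk is 0 (any h with h(x0) > h(x')).
assert misranking_risk(0.0, 0.0) == 0.5
assert misranking_risk(1.0, 0.0) == 0.0

# Surrogate side: with |h(x') - h(x0)| < eps for all h in H, the surrogate
# excess risk of h0 is at most phi(0) - phi(eps), which vanishes as eps -> 0,
# even though the misranking excess risk of h0 stays 1/2.
for eps in (1e-1, 1e-3, 1e-6):
    print(eps, phi(0.0) - phi(eps))
```

Any bound $f$ as in the theorem must therefore absorb a fixed misranking excess of $\frac{1}{2}$ at an arbitrarily small surrogate excess, which is exactly the conclusion $f(t) \geq \frac{1}{2}$.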
For the surrogate loss $\widetilde{\mathsf{L}}_{\Phi}$, for any $h\in \mathcal{H}$,

$$
\mathcal {R} _ {\widetilde {\mathrm {L}} _ {\Phi}} (h) = \mathcal {C} _ {\widetilde {\mathrm {L}} _ {\Phi}} \left(h, x _ {0}, x ^ {\prime}\right) = \Phi \left(h \left(x _ {0}\right) - h \left(x ^ {\prime}\right)\right) \in [ \Phi (\epsilon), \Phi (- \epsilon) ]
$$

since $|h(x') - h(x_0)| < \epsilon$ and $\Phi$ is non-increasing. Therefore,

$$
\mathcal {R} _ {\widetilde {\mathsf {L}} _ {\Phi}} ^ {*} (\mathcal {H}) = \mathcal {C} _ {\widetilde {\mathsf {L}} _ {\Phi}} ^ {*} (\mathcal {H}, x _ {0}, x ^ {\prime}) \geq \Phi (\epsilon).
$$

Note $\mathcal{R}_{\widetilde{\mathrm{L}}_{\Phi}}(h_0) = \Phi (0)$. If for some function $f$ that is non-decreasing and continuous at 0, the bound holds, then we obtain for any $h\in \mathcal{H}$ and $\epsilon >0$,

$$
\mathcal {R} _ {\widetilde {\mathsf {L}} _ {0 - 1}} (h) - 0 \leq f \left(\mathcal {R} _ {\widetilde {\mathsf {L}} _ {\Phi}} (h) - \mathcal {R} _ {\widetilde {\mathsf {L}} _ {\Phi}} ^ {*} (\mathcal {H})\right) \leq f \left(\mathcal {R} _ {\widetilde {\mathsf {L}} _ {\Phi}} (h) - \Phi (\epsilon)\right).
$$

Taking $h = h_0$ then gives $f(\Phi(0) - \Phi(\epsilon)) \geq \frac{1}{2}$ for any $\epsilon > 0$. Letting $\epsilon \to 0$, we obtain $f(0) \geq \frac{1}{2}$ using the fact that $\Phi$ and $f$ are both continuous at 0. Since $f$ is non-decreasing, $f(t) \geq \frac{1}{2}$ for any $t \geq 0$.

# J. $\mathcal{H}_{\mathrm{all}}$-consistency bounds for bipartite misranking losses

We first characterize the minimal conditional $\widetilde{\mathsf{L}}_{0 - 1}$-risk and the calibration gap of the bipartite misranking loss for a broad class of hypothesis sets. We let $\widetilde{\mathcal{H}}(x, x') = \{h \in \mathcal{H} : (h(x) - h(x'))(\eta(x) - \eta(x')) < 0\}$ and $\mathring{\mathcal{H}}(x, x') = \{h \in \mathcal{H} : h(x) = h(x')\}$ for convenience.

Lemma J.1. Assume that $\mathcal{H}$ is regular for bipartite ranking.
Then, the minimal conditional $\widetilde{\mathsf{L}}_{0 - 1}$-risk is

$$
\mathcal {C} _ {\widetilde {\mathsf {L}} _ {0 - 1}} ^ {*} \left(\mathcal {H}, x, x ^ {\prime}\right) = \min \{\eta (x) \left(1 - \eta \left(x ^ {\prime}\right)\right), \eta \left(x ^ {\prime}\right) \left(1 - \eta (x)\right) \}.
$$

The calibration gap of $\widetilde{\mathsf{L}}_{0 - 1}$ can be characterized as

$$
\Delta \mathcal {C} _ {\widetilde {\mathsf {L}} _ {0 - 1}, \mathcal {H}} (h, x, x ^ {\prime}) = | \eta (x) - \eta (x ^ {\prime}) | \mathbb {1} _ {h \in \widetilde {\mathcal {H}} (x, x ^ {\prime})} + \frac {1}{2} | \eta (x) - \eta (x ^ {\prime}) | \mathbb {1} _ {h \in \mathring {\mathcal {H}} (x, x ^ {\prime})}.
$$

Proof. By the definition, the conditional $\widetilde{\mathsf{L}}_{0 - 1}$-risk is

$$
\mathcal {C} _ {\widetilde {\mathsf {L}} _ {0 - 1}} (h, x, x ^ {\prime}) = \eta (x) (1 - \eta (x ^ {\prime})) \Big [ \mathbb {1} _ {h (x) - h (x ^ {\prime}) < 0} + \frac {1}{2} \mathbb {1} _ {h (x) = h (x ^ {\prime})} \Big ] + \eta (x ^ {\prime}) (1 - \eta (x)) \Big [ \mathbb {1} _ {h (x) - h (x ^ {\prime}) > 0} + \frac {1}{2} \mathbb {1} _ {h (x) = h (x ^ {\prime})} \Big ].
$$

For any $x \neq x' \in \mathcal{X}$, by the assumption, there exists $h^* \in \mathcal{H}$ such that

$$
\left(h ^ {*} (x) - h ^ {*} \left(x ^ {\prime}\right)\right) \left(\eta (x) - \eta \left(x ^ {\prime}\right)\right) > 0 \quad \text{whenever } \eta (x) \neq \eta \left(x ^ {\prime}\right).
+$$

Therefore, the optimal conditional $\widetilde{\mathsf{L}}_{0 - 1}$-risk can be characterized as, for any $x\neq x^{\prime}\in \mathcal{X}$,

$$
\mathcal {C} _ {\widetilde {\mathsf {L}} _ {0 - 1}} ^ {*} \left(\mathcal {H}, x, x ^ {\prime}\right) = \mathcal {C} _ {\widetilde {\mathsf {L}} _ {0 - 1}} \left(h ^ {*}, x, x ^ {\prime}\right) = \min \left\{\eta (x) \left(1 - \eta \left(x ^ {\prime}\right)\right), \eta \left(x ^ {\prime}\right) \left(1 - \eta (x)\right) \right\},
$$

which proves the first part of the lemma. By the definition,

$$
\begin{array}{l} \Delta \mathcal {C} _ {\widetilde {\mathsf {L}} _ {0 - 1}, \mathcal {H}} (h, x, x ^ {\prime}) = \mathcal {C} _ {\widetilde {\mathsf {L}} _ {0 - 1}} (h, x, x ^ {\prime}) - \mathcal {C} _ {\widetilde {\mathsf {L}} _ {0 - 1}} ^ {*} (\mathcal {H}, x, x ^ {\prime}) \\ = \eta (x) \left(1 - \eta \left(x ^ {\prime}\right)\right) \left[ \mathbb {1} _ {h (x) - h \left(x ^ {\prime}\right) < 0} + \frac {1}{2} \mathbb {1} _ {h (x) = h \left(x ^ {\prime}\right)} \right] \\ + \eta \left(x ^ {\prime}\right) \left(1 - \eta (x)\right) \left[ \mathbb {1} _ {h (x) - h \left(x ^ {\prime}\right) > 0} + \frac {1}{2} \mathbb {1} _ {h (x) = h \left(x ^ {\prime}\right)} \right] \\ - \min \{\eta (x) (1 - \eta (x ^ {\prime})), \eta (x ^ {\prime}) (1 - \eta (x)) \} \\ = \left\{ \begin{array}{l l} | \eta (x) (1 - \eta (x ^ {\prime})) - \eta (x ^ {\prime}) (1 - \eta (x)) |, & h \in \widetilde {\mathcal {H}} (x, x ^ {\prime}), \\ \frac {1}{2} | \eta (x) (1 - \eta (x ^ {\prime})) - \eta (x ^ {\prime}) (1 - \eta (x)) |, & h \in \mathring {\mathcal {H}} (x, x ^ {\prime}), \\ 0, & \text {otherwise}. \end{array} \right. \\ = \left\{ \begin{array}{l l} | \eta (x) - \eta (x ^ {\prime}) |, & h \in \widetilde {\mathcal {H}} (x, x ^ {\prime}), \\ \frac {1}{2} | \eta (x) - \eta (x ^ {\prime}) |, & h \in \mathring {\mathcal {H}} (x, x ^ {\prime}), \\ 0, & \text {otherwise}. \end{array} \right.
\\ \end{array}
$$

Table 9: $\mathcal{H}_{\mathrm{all}}$-consistency upper bounds for bipartite misranking losses.

| Loss function | $\mathcal{H}_{\mathrm{all}}$-consistency upper bound on $\mathcal{R}_{\widetilde{\mathsf{L}}_{0-1}}(h) - \mathcal{R}_{\widetilde{\mathsf{L}}_{0-1}}^{*}(\mathcal{H}_{\mathrm{all}})$ |
| --- | --- |
| $\widetilde{\mathsf{L}}_{\Phi_{\mathrm{hinge}}}$ | $\mathcal{R}_{\widetilde{\mathsf{L}}_{\Phi_{\mathrm{hinge}}}}(h) - \mathcal{R}_{\widetilde{\mathsf{L}}_{\Phi_{\mathrm{hinge}}}}^{*}(\mathcal{H}_{\mathrm{all}}) + \mathcal{M}_{\widetilde{\mathsf{L}}_{\Phi_{\mathrm{hinge}}}}(\mathcal{H}_{\mathrm{all}})$ |
| $\widetilde{\mathsf{L}}_{\Phi_{\rho}}$ | $\mathcal{R}_{\widetilde{\mathsf{L}}_{\Phi_{\rho}}}(h) - \mathcal{R}_{\widetilde{\mathsf{L}}_{\Phi_{\rho}}}^{*}(\mathcal{H}_{\mathrm{all}})$ |
| $\widetilde{\mathsf{L}}_{\Phi_{\mathrm{exp}}}$ | $\big(\mathcal{R}_{\widetilde{\mathsf{L}}_{\Phi_{\mathrm{exp}}}}(h) - \mathcal{R}_{\widetilde{\mathsf{L}}_{\Phi_{\mathrm{exp}}}}^{*}(\mathcal{H}_{\mathrm{all}})\big)^{1/2}$ |
| $\widetilde{\mathsf{L}}_{\Phi_{\log}}$ | $\big(\mathcal{R}_{\widetilde{\mathsf{L}}_{\Phi_{\log}}}(h) - \mathcal{R}_{\widetilde{\mathsf{L}}_{\Phi_{\log}}}^{*}(\mathcal{H}_{\mathrm{all}})\big)^{1/2}$ |
| $\widetilde{\mathsf{L}}_{\Phi_{\mathrm{sq}}}$ | $\big(\mathcal{R}_{\widetilde{\mathsf{L}}_{\Phi_{\mathrm{sq}}}}(h) - \mathcal{R}_{\widetilde{\mathsf{L}}_{\Phi_{\mathrm{sq}}}}^{*}(\mathcal{H}_{\mathrm{all}}) + \mathcal{M}_{\widetilde{\mathsf{L}}_{\Phi_{\mathrm{sq}}}}(\mathcal{H}_{\mathrm{all}})\big)^{1/2}$ |
| $\widetilde{\mathsf{L}}_{\Phi_{\mathrm{sig}}}$ | $\mathcal{R}_{\widetilde{\mathsf{L}}_{\Phi_{\mathrm{sig}}}}(h) - \mathcal{R}_{\widetilde{\mathsf{L}}_{\Phi_{\mathrm{sig}}}}^{*}(\mathcal{H}_{\mathrm{all}})$ |

Note that for $h_{\widetilde{\mathsf{L}}_{0 - 1},\mathcal{H}_{\mathrm{all}}}^{\mathrm{Bayes}}(x)\coloneqq \eta (x)$,

$$
\mathcal {R} _ {\widetilde {\mathsf {L}} _ {0 - 1}} \left(h _ {\widetilde {\mathsf {L}} _ {0 - 1}, \mathcal {H} _ {\mathrm {all}}} ^ {\mathrm {Bayes}}\right) = \mathbb {E} _ {(X, X ^ {\prime})} \big [ \min \{\eta (x) (1 - \eta (x ^ {\prime})), \eta (x ^ {\prime}) (1 - \eta (x)) \} \big ].
$$

Therefore, by Lemma J.1, the $(\widetilde{\mathsf{L}}_{0 - 1},\mathcal{H}_{\mathrm{all}})$-minimizability gap is

$$
\mathcal {M} _ {\widetilde {\mathsf {L}} _ {0 - 1}} \left(\mathcal {H} _ {\mathrm {all}}\right) = \mathcal {R} _ {\widetilde {\mathsf {L}} _ {0 - 1}} ^ {*} \left(\mathcal {H} _ {\mathrm {all}}\right) - \mathbb {E} _ {(X, X ^ {\prime})} \left[ \min \left\{\eta (x) \left(1 - \eta \left(x ^ {\prime}\right)\right), \eta \left(x ^ {\prime}\right) \left(1 - \eta (x)\right) \right\} \right] = 0. \tag {24}
$$

# J.1. Derivation for $\widetilde{\mathsf{L}}_{\Phi_{\mathrm{hinge}}}$

For the hinge loss function $\Phi_{\mathrm{hinge}}(u)\coloneqq \max \{0,1 - u\}$, for all $h\in \mathcal{H}_{\mathrm{all}}$, $x\in \mathcal{X}$, $x^{\prime}\in \mathcal{X}$ and $x\neq x^{\prime}$:

$$
\begin{array}{l} \mathcal {C} _ {\widetilde {\mathsf {L}} _ {\Phi_ {\mathrm {hinge}}}} \left(h, x, x ^ {\prime}\right) \\ = \eta (x) \left(1 - \eta \left(x ^ {\prime}\right)\right) \Phi_ {\mathrm {hinge}} \left(h (x) - h \left(x ^ {\prime}\right)\right) + \eta \left(x ^ {\prime}\right) \left(1 - \eta (x)\right) \Phi_ {\mathrm {hinge}} \left(h \left(x ^ {\prime}\right) - h (x)\right) \\ = \eta (x) \left(1 - \eta \left(x ^ {\prime}\right)\right) \max \left\{0, 1 - h (x) + h \left(x ^ {\prime}\right) \right\} + \eta \left(x ^ {\prime}\right) \left(1 - \eta (x)\right) \max \left\{0, 1 + h (x) - h \left(x ^ {\prime}\right) \right\}.
\\ \end{array}
$$

Then,

$$
\mathcal {C} _ {\widetilde {\mathsf {L}} _ {\Phi_ {\mathrm {hinge}}}, \mathcal {H} _ {\mathrm {all}}} ^ {*} (x, x ^ {\prime}) = \inf _ {h \in \mathcal {H} _ {\mathrm {all}}} \mathcal {C} _ {\widetilde {\mathsf {L}} _ {\Phi_ {\mathrm {hinge}}}} (h, x, x ^ {\prime}) = 2 \min \{\eta (x) (1 - \eta (x ^ {\prime})), \eta (x ^ {\prime}) (1 - \eta (x)) \}.
$$

The $\left(\widetilde{\mathsf{L}}_{\Phi_{\mathrm{hinge}}},\mathcal{H}_{\mathrm{all}}\right)$-minimizability gap is

$$
\begin{array}{l} \mathcal {M} _ {\widetilde {\mathsf {L}} _ {\Phi_ {\mathrm {hinge}}}} \left(\mathcal {H} _ {\mathrm {all}}\right) = \mathcal {R} _ {\widetilde {\mathsf {L}} _ {\Phi_ {\mathrm {hinge}}}} ^ {*} \left(\mathcal {H} _ {\mathrm {all}}\right) - \mathbb {E} _ {(X, X ^ {\prime})} \left[ \mathcal {C} _ {\widetilde {\mathsf {L}} _ {\Phi_ {\mathrm {hinge}}}, \mathcal {H} _ {\mathrm {all}}} ^ {*} \left(x, x ^ {\prime}\right) \right] \tag {25} \\ = \mathcal {R} _ {\widetilde {\mathsf {L}} _ {\Phi_ {\mathrm {hinge}}}} ^ {*} \left(\mathcal {H} _ {\mathrm {all}}\right) - \mathbb {E} _ {(X, X ^ {\prime})} \big [ 2 \min \{\eta (x) (1 - \eta (x ^ {\prime})), \eta (x ^ {\prime}) (1 - \eta (x)) \} \big ].
\\ \end{array}
$$

Therefore, $\forall h\in \widetilde{\mathcal{H}}_{\mathrm{all}}(x,x')$,

$$
\begin{array}{l} \Delta \mathcal {C} _ {\widetilde {\mathsf {L}} _ {\Phi_ {\mathrm {hinge}}}, \mathcal {H} _ {\mathrm {all}}} (h, x, x ^ {\prime}) \\ \geq \inf _ {h \in \widetilde {\mathcal {H}} _ {\mathrm {all}} (x, x ^ {\prime})} \mathcal {C} _ {\widetilde {\mathsf {L}} _ {\Phi_ {\mathrm {hinge}}}} \left(h, x, x ^ {\prime}\right) - \mathcal {C} _ {\widetilde {\mathsf {L}} _ {\Phi_ {\mathrm {hinge}}}, \mathcal {H} _ {\mathrm {all}}} ^ {*} \left(x, x ^ {\prime}\right) \\ = \eta (x) (1 - \eta (x ^ {\prime})) \max \{0, 1 - 0 \} + \eta (x ^ {\prime}) (1 - \eta (x)) \max \{0, 1 + 0 \} - \mathcal {C} _ {\widetilde {\mathsf {L}} _ {\Phi_ {\mathrm {hinge}}}, \mathcal {H} _ {\mathrm {all}}} ^ {*} (x, x ^ {\prime}) \\ = \eta (x) \left(1 - \eta \left(x ^ {\prime}\right)\right) + \eta \left(x ^ {\prime}\right) \left(1 - \eta (x)\right) - 2 \min \left\{\eta (x) \left(1 - \eta \left(x ^ {\prime}\right)\right), \eta \left(x ^ {\prime}\right) \left(1 - \eta (x)\right) \right\} \\ = \left| \eta (x) \left(1 - \eta \left(x ^ {\prime}\right)\right) - \eta \left(x ^ {\prime}\right) \left(1 - \eta (x)\right) \right|.
\\ \end{array}
$$

Similarly, $\forall h\in \mathring{\mathcal{H}}_{\mathrm{all}}(x,x')$,

$$
\begin{array}{l} \Delta \mathcal {C} _ {\widetilde {\mathsf {L}} _ {\Phi_ {\mathrm {hinge}}}, \mathcal {H} _ {\mathrm {all}}} (h, x, x ^ {\prime}) \\ \geq \inf _ {h \in \mathring {\mathcal {H}} _ {\mathrm {all}} (x, x ^ {\prime})} \mathcal {C} _ {\widetilde {\mathsf {L}} _ {\Phi_ {\mathrm {hinge}}}} \left(h, x, x ^ {\prime}\right) - \mathcal {C} _ {\widetilde {\mathsf {L}} _ {\Phi_ {\mathrm {hinge}}}, \mathcal {H} _ {\mathrm {all}}} ^ {*} \left(x, x ^ {\prime}\right) \\ = \eta (x) (1 - \eta (x ^ {\prime})) \max \{0, 1 - 0 \} + \eta (x ^ {\prime}) (1 - \eta (x)) \max \{0, 1 + 0 \} - \mathcal {C} _ {\widetilde {\mathsf {L}} _ {\Phi_ {\mathrm {hinge}}}, \mathcal {H} _ {\mathrm {all}}} ^ {*} (x, x ^ {\prime}) \\ = \eta (x) \left(1 - \eta \left(x ^ {\prime}\right)\right) + \eta \left(x ^ {\prime}\right) \left(1 - \eta (x)\right) - 2 \min \left\{\eta (x) \left(1 - \eta \left(x ^ {\prime}\right)\right), \eta \left(x ^ {\prime}\right) \left(1 - \eta (x)\right) \right\} \\ = \left| \eta (x) \left(1 - \eta \left(x ^ {\prime}\right)\right) - \eta \left(x ^ {\prime}\right) \left(1 - \eta (x)\right) \right|, \\ \end{array}
$$

which implies that for any $h \in \mathcal{H}_{\mathrm{all}}$, $x \in \mathcal{X}$ and $x' \in \mathcal{X}$,

$$
\Delta \mathcal {C} _ {\widetilde {\mathsf {L}} _ {\Phi_ {\mathrm {hinge}}}, \mathcal {H} _ {\mathrm {all}}} (h, x, x ^ {\prime}) \geq \Delta \mathcal {C} _ {\widetilde {\mathsf {L}} _ {0 - 1}, \mathcal {H}} (h, x, x ^ {\prime}).
+$$ + +Thus, by Theorem C.1 or Theorem C.2, setting $\epsilon = 0$ yields the $\mathcal{H}_{\mathrm{all}}$ -consistency bound for $\widetilde{\mathsf{L}}_{\Phi_{\mathrm{hinge}}}$ , valid for all $h \in \mathcal{H}_{\mathrm{all}}$ : + +$$ +\mathcal {R} _ {\widetilde {\mathsf {L}} _ {0 - 1}} (h) - \mathcal {R} _ {\widetilde {\mathsf {L}} _ {0 - 1}} ^ {*} (\mathcal {H} _ {\text {a l l}}) \leq \mathcal {R} _ {\widetilde {\mathsf {L}} _ {\Phi_ {\text {h i n g e}}}} (h) - \mathcal {R} _ {\widetilde {\mathsf {L}} _ {\Phi_ {\text {h i n g e}}}} ^ {*} (\mathcal {H} _ {\text {a l l}}) + \mathcal {M} _ {\widetilde {\mathsf {L}} _ {\Phi_ {\text {h i n g e}}}} (\mathcal {H} _ {\text {a l l}}). \tag {26} +$$ + +# J.2. Derivation for $\widetilde{\mathsf{L}}_{\Phi_{\rho}}$ + +For the $\rho$ -margin loss function $\Phi_{\rho}(u) := \min \left\{1, \max \left\{0, 1 - \frac{u}{\rho}\right\} \right\}$ , $\rho > 0$ , for all $h \in \mathcal{H}_{\mathrm{all}}$ , $x \in \mathcal{X}$ , $x' \in \mathcal{X}$ and $x \neq x'$ : + +$$ +\begin{array}{l} \mathcal {C} _ {\widetilde {\mathsf {L}} _ {\Phi_ {\rho}}} \left(h, x, x ^ {\prime}\right) \\ = \eta (x) \left(1 - \eta \left(x ^ {\prime}\right)\right) \Phi_ {\rho} \left(h (x) - h \left(x ^ {\prime}\right)\right) + \eta \left(x ^ {\prime}\right) \left(1 - \eta (x)\right) \Phi_ {\rho} \left(h \left(x ^ {\prime}\right) - h (x)\right) \\ = \eta (x) \left(1 - \eta \left(x ^ {\prime}\right)\right) \min \left\{1, \max \left\{0, 1 - \frac {h (x) - h \left(x ^ {\prime}\right)}{\rho} \right\} \right\} \\ + \eta \left(x ^ {\prime}\right) \left(1 - \eta (x)\right) \min \left\{1, \max \left\{0, 1 + \frac {h (x) - h \left(x ^ {\prime}\right)}{\rho} \right\} \right\} \\ \end{array} +$$ + +Then, + +$$ +\begin{array}{l} \mathcal {C} _ {\widetilde {\mathsf {L}} _ {\Phi_ {\rho}}, \mathcal {H} _ {\mathrm {a l l}}} ^ {*} (x, x ^ {\prime}) = \inf _ {h \in \mathcal {H} _ {\mathrm {a l l}}} \mathcal {C} _ {\widetilde {\mathsf {L}} _ {\Phi_ {\rho}}} (h, x, x ^ 
{\prime}) \\ = \min \left\{\eta (x) \left(1 - \eta \left(x ^ {\prime}\right)\right), \eta \left(x ^ {\prime}\right) \left(1 - \eta (x)\right) \right\} \\ = \mathcal {C} _ {\widetilde {\mathsf {L}} _ {0 - 1}, \mathcal {H} _ {\mathrm {all}}} ^ {*} (x, x ^ {\prime}). \\ \end{array}
$$

Note that for $h_{\widetilde{\mathsf{L}}_{\Phi_{\rho}}, \mathcal{H}_{\mathrm{all}}}^{\alpha}(x) \coloneqq \alpha \eta(x)$, $\alpha > 0$, by the Lebesgue dominated convergence theorem,

$$
\liminf _ {\alpha \to + \infty} \mathcal {R} _ {\widetilde {\mathrm {L}} _ {\Phi_ {\rho}}} \left(h _ {\widetilde {\mathrm {L}} _ {\Phi_ {\rho}}, \mathcal {H} _ {\mathrm {all}}} ^ {\alpha}\right) = \mathbb {E} _ {(X, X ^ {\prime})} \left[ \mathcal {C} _ {\widetilde {\mathrm {L}} _ {\Phi_ {\rho}}, \mathcal {H} _ {\mathrm {all}}} ^ {*} (x, x ^ {\prime}) \right] \geq \mathcal {R} _ {\widetilde {\mathrm {L}} _ {\Phi_ {\rho}}} ^ {*} (\mathcal {H} _ {\mathrm {all}}) \geq \mathbb {E} _ {(X, X ^ {\prime})} \left[ \mathcal {C} _ {\widetilde {\mathrm {L}} _ {\Phi_ {\rho}}, \mathcal {H} _ {\mathrm {all}}} ^ {*} (x, x ^ {\prime}) \right].
$$

Therefore, the $(\widetilde{\mathsf{L}}_{\Phi_{\rho}},\mathcal{H}_{\mathrm{all}})$-minimizability gap is

$$
\mathcal {M} _ {\widetilde {\mathrm {L}} _ {\Phi_ {\rho}}} \left(\mathcal {H} _ {\mathrm {all}}\right) = \mathcal {R} _ {\widetilde {\mathrm {L}} _ {\Phi_ {\rho}}} ^ {*} \left(\mathcal {H} _ {\mathrm {all}}\right) - \mathbb {E} _ {(X, X ^ {\prime})} \left[ \mathcal {C} _ {\widetilde {\mathrm {L}} _ {\Phi_ {\rho}}, \mathcal {H} _ {\mathrm {all}}} ^ {*} \left(x, x ^ {\prime}\right) \right] = 0. \tag {27}
$$

Furthermore, for any $h\in \mathcal{H}_{\mathrm{all}}$, $x\in \mathcal{X}$ and $x^{\prime}\in \mathcal{X}$,

$$
\Delta \mathcal {C} _ {\widetilde {\mathrm {L}} _ {\Phi_ {\rho}}, \mathcal {H} _ {\mathrm {all}}} (h, x, x ^ {\prime}) \geq \Delta \mathcal {C} _ {\widetilde {\mathrm {L}} _ {0 - 1}, \mathcal {H}} (h, x, x ^ {\prime}).
+$$ + +Thus, by Theorem C.1 or Theorem C.2, setting $\epsilon = 0$ yields the $\mathcal{H}_{\mathrm{all}}$ -consistency bound for $\widetilde{\mathsf{L}}_{\Phi_\rho}$ , valid for all $h \in \mathcal{H}_{\mathrm{all}}$ : + +$$ +\mathcal {R} _ {\widetilde {\mathrm {L}} _ {0 - 1}} (h) - \mathcal {R} _ {\widetilde {\mathrm {L}} _ {0 - 1}} ^ {*} (\mathcal {H} _ {\text {a l l}}) \leq \mathcal {R} _ {\widetilde {\mathrm {L}} _ {\Phi_ {\rho}}} (h) - \mathcal {R} _ {\widetilde {\mathrm {L}} _ {\Phi_ {\rho}}} ^ {*} (\mathcal {H} _ {\text {a l l}}). \tag {28} +$$ + +# J.3. Derivation for $\widetilde{\mathsf{L}}_{\Phi_{\mathrm{exp}}}$ + +For the exponential loss function $\Phi_{\mathrm{exp}}(u)\coloneqq e^{-u}$ , for all $h\in \mathcal{H}_{\mathrm{all}},x\in \mathcal{X},x'\in \mathcal{X}$ and $x\neq x'$ : + +$$ +\begin{array}{l} \mathcal {C} _ {\widetilde {L} _ {\Phi_ {\mathrm {e x p}}}} \left(h, x, x ^ {\prime}\right) \\ = \eta (x) \left(1 - \eta \left(x ^ {\prime}\right)\right) \Phi_ {\exp} \left(h (x) - h \left(x ^ {\prime}\right)\right) + \eta \left(x ^ {\prime}\right) \left(1 - \eta (x)\right) \Phi_ {\exp} \left(h \left(x ^ {\prime}\right) - h (x)\right) \\ = \eta (x) \left(1 - \eta \left(x ^ {\prime}\right)\right) e ^ {- h (x) + h \left(x ^ {\prime}\right)} + \eta \left(x ^ {\prime}\right) \left(1 - \eta (x)\right) e ^ {h (x) - h \left(x ^ {\prime}\right)}. \\ \end{array} +$$ + +Then, + +$$ +\mathcal {C} _ {\widetilde {\mathsf {L}} _ {\Phi_ {\exp}}, \mathcal {H} _ {\mathrm {a l l}}} ^ {*} (x, x ^ {\prime}) = \inf _ {h \in \mathcal {H} _ {\mathrm {a l l}}} \mathcal {C} _ {\widetilde {\mathsf {L}} _ {\Phi_ {\exp}}} (h, x, x ^ {\prime}) = 2 \sqrt {\eta (x) \eta (x ^ {\prime}) (1 - \eta (x)) (1 - \eta (x ^ {\prime}))}. 
+$$ + +Note that for $h_{\mathsf{L}_{\Phi_{\mathrm{exp}}}, \mathcal{H}_{\mathrm{all}}}^{\mathrm{Bayes}}(x) := \frac{1}{2} \log \frac{\eta(x)(1 - \eta(x^{*}))}{\eta(x^{*})(1 - \eta(x))}$ with fixed $x^{*} \in \mathcal{X}$ , + +$$ +\mathcal {R} _ {\widetilde {\mathsf {L}} _ {\Phi_ {\mathrm {e x p}}}} \left(h _ {\widetilde {\mathsf {L}} _ {\Phi_ {\mathrm {e x p}}}, \mathcal {H} _ {\mathrm {a l l}} ^ {\mathrm {B a y e s}}}\right) = \mathbb {E} _ {(X, X ^ {\prime})} \Big [ 2 \sqrt {\eta (x) \eta (x ^ {\prime}) (1 - \eta (x)) (1 - \eta (x ^ {\prime}))} \Big ]. +$$ + +Therefore, the $(\widetilde{\mathsf{L}}_{\Phi_{\mathrm{exp}}},\mathcal{H}_{\mathrm{all}})$ -minimizability gap is + +$$ +\mathcal {M} _ {\widetilde {\mathcal {L}} _ {\Phi_ {\exp}}} \left(\mathcal {H} _ {\text {a l l}}\right) = \mathcal {R} _ {\widetilde {\mathcal {L}} _ {\Phi_ {\exp}}} ^ {*} \left(\mathcal {H} _ {\text {a l l}}\right) - \mathbb {E} _ {(X, X ^ {\prime})} \left[ \mathcal {C} _ {\widetilde {\mathcal {L}} _ {\Phi_ {\exp}}, \mathcal {H} _ {\text {a l l}}} ^ {*} \left(x, x ^ {\prime}\right) \right] = 0. 
\tag {29}
$$

Furthermore, $\forall h\in \widetilde{\mathcal{H}}_{\mathrm{all}}(x,x')$,

$$
\begin{array}{l} \Delta \mathcal {C} _ {\widetilde {\mathsf {L}} _ {\Phi_ {\exp}}, \mathcal {H} _ {\mathrm {all}}} (h, x, x ^ {\prime}) \\ \geq \inf _ {h \in \widetilde {\mathcal {H}} _ {\mathrm {all}} (x, x ^ {\prime})} \mathcal {C} _ {\widetilde {\mathsf {L}} _ {\Phi_ {\exp}}} \left(h, x, x ^ {\prime}\right) - \mathcal {C} _ {\widetilde {\mathsf {L}} _ {\Phi_ {\exp}}, \mathcal {H} _ {\mathrm {all}}} ^ {*} \left(x, x ^ {\prime}\right) \\ = \eta (x) (1 - \eta (x ^ {\prime})) e ^ {- 0} + \eta (x ^ {\prime}) (1 - \eta (x)) e ^ {0} - \mathcal {C} _ {\widetilde {\mathsf {L}} _ {\Phi_ {\exp}}, \mathcal {H} _ {\mathrm {all}}} ^ {*} (x, x ^ {\prime}) \\ = \eta (x) \left(1 - \eta \left(x ^ {\prime}\right)\right) + \eta \left(x ^ {\prime}\right) \left(1 - \eta (x)\right) - 2 \sqrt {\eta (x) \eta \left(x ^ {\prime}\right) \left(1 - \eta (x)\right) \left(1 - \eta \left(x ^ {\prime}\right)\right)} \\ = \left(\frac {\eta (x) \left(1 - \eta \left(x ^ {\prime}\right)\right) - \eta \left(x ^ {\prime}\right) \left(1 - \eta (x)\right)}{\sqrt {\eta (x) \left(1 - \eta \left(x ^ {\prime}\right)\right)} + \sqrt {\eta \left(x ^ {\prime}\right) \left(1 - \eta (x)\right)}}\right) ^ {2} \\ \geq (\eta (x) (1 - \eta (x ^ {\prime})) - \eta (x ^ {\prime}) (1 - \eta (x))) ^ {2} \\ \end{array}
$$

Similarly, $\forall h\in \mathring{\mathcal{H}}_{\mathrm{all}}(x,x')$,

$$
\begin{array}{l} \Delta \mathcal {C} _ {\widetilde {\mathsf {L}} _ {\Phi_ {\exp}}, \mathcal {H} _ {\mathrm {all}}} (h, x, x ^ {\prime}) \\ \geq \inf _ {h \in \mathring {\mathcal {H}} _ {\mathrm {all}} (x, x ^ {\prime})} \mathcal {C} _ {\widetilde {\mathsf {L}} _ {\Phi_ {\exp}}} \left(h, x, x ^ {\prime}\right) - \mathcal {C} _ {\widetilde {\mathsf {L}} _ {\Phi_ {\exp}}, \mathcal {H} _ {\mathrm {all}}} ^ {*} \left(x, x ^ {\prime}\right) \\ = \eta (x) (1 - \eta (x ^ {\prime})) e ^ {- 0} + \eta (x ^ {\prime}) (1 -
\eta (x)) e ^ {0} - \mathcal {C} _ {\widetilde {\mathrm {L}} _ {\Phi_ {\exp}}, \mathcal {H} _ {\mathrm {a l l}}} ^ {*} (x, x ^ {\prime}) \\ = \eta (x) \left(1 - \eta \left(x ^ {\prime}\right)\right) + \eta \left(x ^ {\prime}\right) \left(1 - \eta (x)\right) - 2 \sqrt {\eta (x) \eta \left(x ^ {\prime}\right) \left(1 - \eta (x)\right) \left(1 - \eta \left(x ^ {\prime}\right)\right)} \\ \geq \left(\eta (x) \left(1 - \eta \left(x ^ {\prime}\right)\right) - \eta \left(x ^ {\prime}\right) \left(1 - \eta (x)\right)\right) ^ {2}, \\ \end{array} +$$ + +which implies that for any $h \in \mathcal{H}_{\mathrm{all}}$ , $x \in \mathcal{X}$ and $x' \in \mathcal{X}$ , + +$$ +\Delta \mathcal {C} _ {\widetilde {\mathcal {L}} _ {\Phi_ {\exp}}, \mathcal {H} _ {\text {a l l}}} (h, x, x ^ {\prime}) \geq \left(\Delta \mathcal {C} _ {\widetilde {\mathcal {L}} _ {0 - 1}, \mathcal {H}} (h, x, x ^ {\prime})\right) ^ {2}. +$$ + +Thus, by Theorem C.1 or Theorem C.2, setting $\epsilon = 0$ yields the $\mathcal{H}_{\mathrm{all}}$ -consistency bound for $\widetilde{\mathcal{L}}_{\Phi_{\mathrm{exp}}}$ , valid for all $h \in \mathcal{H}_{\mathrm{all}}$ : + +$$ +\mathcal {R} _ {\widetilde {\mathsf {L}} _ {0 - 1}} (h) - \mathcal {R} _ {\widetilde {\mathsf {L}} _ {0 - 1}} ^ {*} (\mathcal {H} _ {\mathrm {a l l}}) \leq \left(\mathcal {R} _ {\widetilde {\mathsf {L}} _ {\Phi_ {\exp}}} (h) - \mathcal {R} _ {\widetilde {\mathsf {L}} _ {\Phi_ {\exp}}} ^ {*} (\mathcal {H} _ {\mathrm {a l l}})\right) ^ {\frac {1}{2}}. \tag {30} +$$ + +# J.4. 
Derivation for $\widetilde{\mathsf{L}}_{\Phi_{\log}}$

For the logistic loss function $\Phi_{\log}(u) \coloneqq \log_2(1 + e^{-u})$, for all $h \in \mathcal{H}_{\mathrm{all}}$, $x \in \mathcal{X}$, $x' \in \mathcal{X}$ and $x \neq x'$:

$$
\begin{array}{l} \mathcal {C} _ {\widetilde {\mathrm {L}} _ {\Phi_ {\log}}} \left(h, x, x ^ {\prime}\right) \\ = \eta (x) \left(1 - \eta \left(x ^ {\prime}\right)\right) \Phi_ {\log} \left(h (x) - h \left(x ^ {\prime}\right)\right) + \eta \left(x ^ {\prime}\right) \left(1 - \eta (x)\right) \Phi_ {\log} \left(h \left(x ^ {\prime}\right) - h (x)\right) \\ = \eta (x) (1 - \eta (x ^ {\prime})) \log_ {2} \left(1 + e ^ {- h (x) + h (x ^ {\prime})}\right) + \eta (x ^ {\prime}) (1 - \eta (x)) \log_ {2} \left(1 + e ^ {h (x) - h (x ^ {\prime})}\right). \\ \end{array}
$$

Then, writing $s = \eta(x)(1 - \eta(x')) + \eta(x')(1 - \eta(x))$ for conciseness,

$$
\begin{array}{l} \mathcal {C} _ {\widetilde {\mathsf {L}} _ {\Phi_ {\log}}, \mathcal {H} _ {\mathrm {all}}} ^ {*} (x, x ^ {\prime}) \\ = \inf _ {h \in \mathcal {H} _ {\mathrm {all}}} \mathcal {C} _ {\widetilde {\mathrm {L}} _ {\Phi_ {\log}}} \left(h, x, x ^ {\prime}\right) \\ = - \eta (x) \left(1 - \eta \left(x ^ {\prime}\right)\right) \log_ {2} \frac {\eta (x) \left(1 - \eta \left(x ^ {\prime}\right)\right)}{s} - \eta \left(x ^ {\prime}\right) \left(1 - \eta (x)\right) \log_ {2} \frac {\eta \left(x ^ {\prime}\right) \left(1 - \eta (x)\right)}{s} \\ \leq 2 \sqrt {\eta (x) \eta (x ^ {\prime}) (1 - \eta (x)) (1 - \eta (x ^ {\prime}))} \quad \left(a \log_ {2} \frac {a + b}{a} + b \log_ {2} \frac {a + b}{b} \leq 2 \sqrt {a b}, \ a, b \in [ 0, 1 ]\right) \\ \end{array}
$$

Note that for $h_{\widetilde{\mathsf{L}}_{\Phi_{\log}}, \mathcal{H}_{\mathrm{all}}}^{\mathrm{Bayes}}(x) \coloneqq \log \frac{\eta(x)(1 - \eta(x^{*}))}{\eta(x^{*})(1 - \eta(x))}$ with fixed $x^{*} \in \mathcal{X}$,

$$
\mathcal {R} _ {\widetilde {\mathsf {L}} _ {\Phi_ {\log}}} \left(h _ {\widetilde {\mathsf {L}} _ {\Phi_ {\log}}, \mathcal {H} _ {\mathrm {all}}} ^ {\mathrm {Bayes}}\right) = \mathbb {E} _ {(X, X ^ {\prime})} \left[ \mathcal {C}
_ {\widetilde {\mathsf {L}} _ {\Phi_ {\log}}, \mathcal {H} _ {\mathrm {a l l}}} ^ {*} (x, x ^ {\prime}) \right]. +$$ + +Therefore, the $\left(\widetilde{L}_{\Phi_{\log}},\mathcal{H}_{\mathrm{all}}\right)$ -minimizability gap is + +$$ +\mathcal {M} _ {\widetilde {\mathcal {L}} _ {\Phi_ {\log}}} \left(\mathcal {H} _ {\text {a l l}}\right) = \mathcal {R} _ {\widetilde {\mathcal {L}} _ {\Phi_ {\log}}} ^ {*} \left(\mathcal {H} _ {\text {a l l}}\right) - \mathbb {E} _ {(X, X ^ {\prime})} \left[ \mathcal {C} _ {\widetilde {\mathcal {L}} _ {\Phi_ {\log}}, \mathcal {H} _ {\text {a l l}}} ^ {*} (x, x ^ {\prime}) \right] = 0. \tag {31} +$$ + +Furthermore, $\forall h\in \widetilde{\mathcal{H}}_{\mathrm{all}}(x,x')$ + +$$ +\begin{array}{l} \Delta \mathcal {C} _ {\widetilde {\mathsf {L}} _ {\Phi_ {\log}}, \mathcal {H} _ {\mathrm {a l l}}} (h, x, x ^ {\prime}) \\ \geq \inf _ {h \in \widetilde {\mathcal {H}} _ {\text {a l l}} (x, x ^ {\prime})} \mathcal {C} _ {\widetilde {\mathsf {L}} _ {\Phi_ {\log}}} \left(h, x, x ^ {\prime}\right) - \mathcal {C} _ {\widetilde {\mathsf {L}} _ {\Phi_ {\log}}, \mathcal {H} _ {\text {a l l}}} ^ {*} \left(x, x ^ {\prime}\right) \\ = \eta (x) (1 - \eta (x ^ {\prime})) \log_ {2} (1 + e ^ {- 0}) + \eta (x ^ {\prime}) (1 - \eta (x)) \log_ {2} (1 + e ^ {0}) - \mathcal {C} _ {\widehat {\mathsf {L}} _ {\Phi_ {\log}}, \mathcal {H} _ {\mathrm {a l l}}} ^ {*} (x, x ^ {\prime}) \\ \geq \eta (x) \left(1 - \eta \left(x ^ {\prime}\right)\right) + \eta \left(x ^ {\prime}\right) \left(1 - \eta (x)\right) - 2 \sqrt {\eta (x) \eta \left(x ^ {\prime}\right) \left(1 - \eta (x)\right) \left(1 - \eta \left(x ^ {\prime}\right)\right)} \\ = \left(\frac {\eta (x) (1 - \eta \left(x ^ {\prime}\right)) - \eta \left(x ^ {\prime}\right) (1 - \eta (x))}{\sqrt {\eta (x) (1 - \eta \left(x ^ {\prime}\right))} + \sqrt {\eta \left(x ^ {\prime}\right) (1 - \eta (x))}}\right) ^ {2} \\ \geq \left(\eta (x) \left(1 - \eta \left(x ^ {\prime}\right)\right) - \eta \left(x ^ {\prime}\right) 
\left(1 - \eta (x)\right)\right) ^ {2} \\ \end{array} +$$ + +Similarly, $\forall h\in \mathring{\mathcal{H}}_{\mathrm{all}}(x,x')$ + +$$ +\begin{array}{l} \Delta \mathcal {C} _ {\widetilde {\mathsf {L}} _ {\Phi_ {\log}}, \mathcal {H} _ {\mathrm {a l l}}} (h, x, x ^ {\prime}) \\ \geq \inf _ {h \in \mathring {\mathcal {H}} _ {\mathrm {a l l}} (x, x ^ {\prime})} \mathcal {C} _ {\widetilde {\mathsf {L}} _ {\Phi_ {\log}}} \left(h, x, x ^ {\prime}\right) - \mathcal {C} _ {\widetilde {\mathsf {L}} _ {\Phi_ {\log}}, \mathcal {H} _ {\mathrm {a l l}}} ^ {*} \left(x, x ^ {\prime}\right) \\ = \eta (x) (1 - \eta (x ^ {\prime})) \log_ {2} \left(1 + e ^ {- 0}\right) + \eta (x ^ {\prime}) (1 - \eta (x)) \log_ {2} \left(1 + e ^ {0}\right) - \mathcal {C} _ {\widetilde {\mathsf {L}} _ {\Phi_ {\log}}, \mathcal {H} _ {\mathrm {a l l}}} ^ {*} (x, x ^ {\prime}) \\ \geq \eta (x) \left(1 - \eta \left(x ^ {\prime}\right)\right) + \eta \left(x ^ {\prime}\right) \left(1 - \eta (x)\right) - 2 \sqrt {\eta (x) \eta \left(x ^ {\prime}\right) \left(1 - \eta (x)\right) \left(1 - \eta \left(x ^ {\prime}\right)\right)} \\ \geq \left(\eta (x) \left(1 - \eta \left(x ^ {\prime}\right)\right) - \eta \left(x ^ {\prime}\right) \left(1 - \eta (x)\right)\right) ^ {2}, \\ \end{array} +$$ + +which implies that for any $h \in \mathcal{H}_{\mathrm{all}}$ , $x \in \mathcal{X}$ and $x' \in \mathcal{X}$ , + +$$ +\Delta \mathcal {C} _ {\widetilde {\mathsf {L}} _ {\Phi_ {\log}}, \mathcal {H} _ {\mathrm {a l l}}} (h, x, x ^ {\prime}) \geq \left(\Delta \mathcal {C} _ {\widetilde {\mathsf {L}} _ {0 - 1}, \mathcal {H}} (h, x, x ^ {\prime})\right) ^ {2}.
+$$ + +Thus, by Theorem C.1 or Theorem C.2, setting $\epsilon = 0$ yields the $\mathcal{H}_{\mathrm{all}}$ -consistency bound for $\widetilde{\mathsf{L}}_{\Phi_{\log}}$ , valid for all $h \in \mathcal{H}_{\mathrm{all}}$ : + +$$ +\mathcal {R} _ {\widetilde {\mathsf {L}} _ {0 - 1}} (h) - \mathcal {R} _ {\widetilde {\mathsf {L}} _ {0 - 1}} ^ {*} (\mathcal {H} _ {\mathrm {a l l}}) \leq \left(\mathcal {R} _ {\widetilde {\mathsf {L}} _ {\Phi_ {\log}}} (h) - \mathcal {R} _ {\widetilde {\mathsf {L}} _ {\Phi_ {\log}}} ^ {*} (\mathcal {H} _ {\mathrm {a l l}})\right) ^ {\frac {1}{2}}. \tag {32} +$$ + +# J.5. Derivation for $\widetilde{\mathsf{L}}_{\Phi_{\mathrm{sq}}}$ + +For the squared hinge loss function $\Phi_{\mathrm{sq}}(u)\coloneqq (1 - u)^2\mathbb{1}_{u\leq 1}$ , for all $h\in \mathcal{H}_{\mathrm{all}}$ , $x\in \mathcal{X}$ , $x^{\prime}\in \mathcal{X}$ and $x\neq x^{\prime}$ : + +$$ +\begin{array}{l} \mathcal {C} _ {\widetilde {L} _ {\Phi_ {\mathrm {s q}}}} \left(h, x, x ^ {\prime}\right) \\ = \eta (x) \left(1 - \eta \left(x ^ {\prime}\right)\right) \Phi_ {\mathrm {s q}} \left(h (x) - h \left(x ^ {\prime}\right)\right) + \eta \left(x ^ {\prime}\right) \left(1 - \eta (x)\right) \Phi_ {\mathrm {s q}} \left(h \left(x ^ {\prime}\right) - h (x)\right) \\ = \eta (x) (1 - \eta (x ^ {\prime})) (1 - h (x) + h (x ^ {\prime})) ^ {2} \mathbb {1} _ {h (x) - h (x ^ {\prime}) \leq 1} + \eta (x ^ {\prime}) (1 - \eta (x)) (1 + h (x) - h (x ^ {\prime})) ^ {2} \mathbb {1} _ {h (x) - h (x ^ {\prime}) \geq - 1}. 
\\ \end{array} +$$ + +Then, + +$$ +\mathcal {C} _ {\widetilde {\mathsf {L}} _ {\Phi_ {\mathrm {s q}}}, \mathcal {H} _ {\mathrm {a l l}}} ^ {*} (x, x ^ {\prime}) = \inf _ {h \in \mathcal {H} _ {\mathrm {a l l}}} \mathcal {C} _ {\widetilde {\mathsf {L}} _ {\Phi_ {\mathrm {s q}}}} (h, x, x ^ {\prime}) = 4 \frac {\eta (x) \eta \left(x ^ {\prime}\right) \left(1 - \eta (x)\right) \left(1 - \eta \left(x ^ {\prime}\right)\right)}{\eta (x) \left(1 - \eta \left(x ^ {\prime}\right)\right) + \eta \left(x ^ {\prime}\right) \left(1 - \eta (x)\right)}. +$$ + +The $(\widetilde{\mathsf{L}}_{\Phi_{\mathrm{sq}}},\mathcal{H}_{\mathrm{all}})$ -minimizability gap is + +$$ +\mathcal {M} _ {\widetilde {L} _ {\Phi_ {\mathrm {s q}}}} \left(\mathcal {H} _ {\text {a l l}}\right) = \mathcal {R} _ {\widetilde {L} _ {\Phi_ {\mathrm {s q}}}} ^ {*} \left(\mathcal {H} _ {\text {a l l}}\right) - \mathbb {E} _ {(X, X ^ {\prime})} \left[ \mathcal {C} _ {\widetilde {L} _ {\Phi_ {\mathrm {s q}}}, \mathcal {H} _ {\text {a l l}}} ^ {*} (x, x ^ {\prime}) \right]. 
\tag {33} +$$ + +Therefore, $\forall h\in \widetilde{\mathcal{H}}_{\mathrm{all}}(x,x')$ + +$$ +\begin{array}{l} \Delta \mathcal {C} _ {\widetilde {\mathsf {L}} _ {\Phi_ {\mathrm {s q}}}, \mathcal {H} _ {\mathrm {a l l}}} (h, x, x ^ {\prime}) \\ \geq \inf _ {h \in \widetilde {\mathcal {H}} _ {\mathrm {a l l}} (x, x ^ {\prime})} \mathcal {C} _ {\widetilde {\mathsf {L}} _ {\Phi_ {\mathrm {s q}}}} \left(h, x, x ^ {\prime}\right) - \mathcal {C} _ {\widetilde {\mathsf {L}} _ {\Phi_ {\mathrm {s q}}}, \mathcal {H} _ {\mathrm {a l l}}} ^ {*} \left(x, x ^ {\prime}\right) \\ = \eta (x) \left(1 - \eta \left(x ^ {\prime}\right)\right) + \eta \left(x ^ {\prime}\right) \left(1 - \eta (x)\right) - \mathcal {C} _ {\widetilde {\mathsf {L}} _ {\Phi_ {\mathrm {s q}}}, \mathcal {H} _ {\mathrm {a l l}}} ^ {*} \left(x, x ^ {\prime}\right) \\ = \eta (x) \left(1 - \eta \left(x ^ {\prime}\right)\right) + \eta \left(x ^ {\prime}\right) \left(1 - \eta (x)\right) - 4 \frac {\eta (x) \eta \left(x ^ {\prime}\right) \left(1 - \eta (x)\right) \left(1 - \eta \left(x ^ {\prime}\right)\right)}{\eta (x) \left(1 - \eta \left(x ^ {\prime}\right)\right) + \eta \left(x ^ {\prime}\right) \left(1 - \eta (x)\right)} \\ = \frac {\left(\eta (x) \left(1 - \eta \left(x ^ {\prime}\right)\right) - \eta \left(x ^ {\prime}\right) \left(1 - \eta (x)\right)\right) ^ {2}}{\eta (x) \left(1 - \eta \left(x ^ {\prime}\right)\right) + \eta \left(x ^ {\prime}\right) \left(1 - \eta (x)\right)} \\ \geq \left(\eta (x) \left(1 - \eta \left(x ^ {\prime}\right)\right) - \eta \left(x ^ {\prime}\right) \left(1 - \eta (x)\right)\right) ^ {2} \quad (a + b - 2 a b \leq 1, a, b \in [ 0, 1 ]) \\ \end{array} +$$ + +Similarly, $\forall h\in \mathring{\mathcal{H}}_{\mathrm{all}}(x,x')$ + +$$ +\begin{array}{l} \Delta \mathcal {C} _ {\widetilde {\mathsf {L}} _ {\Phi_ {\mathrm {s q}}}, \mathcal {H} _ {\mathrm {a l l}}} (h, x, x ^ {\prime}) \\ \geq \inf _ {h \in \mathring {\mathcal {H}} _ {\mathrm {a l l}} (x, x ^ {\prime})} \mathcal {C} _ {\widetilde {\mathsf {L}} _ {\Phi_ {\mathrm {s q}}}} \left(h, x, x ^ {\prime}\right) - \mathcal {C} _ {\widetilde {\mathsf {L}} _ {\Phi_ {\mathrm {s q}}}, \mathcal {H} _ {\mathrm {a l l}}} ^ {*} \left(x, x ^ {\prime}\right) \\ = \eta (x) (1 - \eta (x ^ {\prime})) + \eta (x ^ {\prime}) (1 - \eta (x)) - \mathcal {C} _ {\widetilde {\mathsf {L}} _ {\Phi_ {\mathrm {s q}}}, \mathcal {H} _ {\mathrm {a l l}}} ^ {*} (x, x ^ {\prime}) \\ = \eta (x) \left(1 - \eta \left(x ^ {\prime}\right)\right) + \eta \left(x ^ {\prime}\right) \left(1 - \eta (x)\right) - 4 \frac {\eta (x) \eta \left(x ^ {\prime}\right) \left(1 - \eta (x)\right) \left(1 - \eta \left(x ^ {\prime}\right)\right)}{\eta (x) \left(1 - \eta \left(x ^ {\prime}\right)\right) + \eta \left(x ^ {\prime}\right) \left(1 - \eta (x)\right)} \\ \geq \left(\eta (x) \left(1 - \eta \left(x ^ {\prime}\right)\right) - \eta \left(x ^ {\prime}\right) \left(1 - \eta (x)\right)\right) ^ {2}, \\ \end{array} +$$ + +which implies that for any $h \in \mathcal{H}_{\mathrm{all}}$ , $x \in \mathcal{X}$ and $x' \in \mathcal{X}$ , + +$$ +\Delta \mathcal {C} _ {\widetilde {\mathsf {L}} _ {\Phi_ {\mathrm {s q}}}, \mathcal {H} _ {\mathrm {a l l}}} (h, x, x ^ {\prime}) \geq \left(\Delta \mathcal {C} _ {\widetilde {\mathsf {L}} _ {0 - 1}, \mathcal {H}} (h, x, x ^ {\prime})\right) ^ {2}.
+$$ + +Thus, by Theorem C.1 or Theorem C.2, setting $\epsilon = 0$ yields the $\mathcal{H}_{\mathrm{all}}$ -consistency bound for $\widetilde{\mathsf{L}}_{\Phi_{\mathrm{sq}}}$ , valid for all $h \in \mathcal{H}_{\mathrm{all}}$ : + +$$ +\mathcal {R} _ {\widetilde {\mathsf {L}} _ {0 - 1}} (h) - \mathcal {R} _ {\widetilde {\mathsf {L}} _ {0 - 1}} ^ {*} (\mathcal {H} _ {\mathrm {a l l}}) \leq \left(\mathcal {R} _ {\widetilde {\mathsf {L}} _ {\Phi_ {\mathrm {s q}}}} (h) - \mathcal {R} _ {\widetilde {\mathsf {L}} _ {\Phi_ {\mathrm {s q}}}} ^ {*} (\mathcal {H} _ {\mathrm {a l l}}) + \mathcal {M} _ {\widetilde {\mathsf {L}} _ {\Phi_ {\mathrm {s q}}}} (\mathcal {H} _ {\mathrm {a l l}})\right) ^ {\frac {1}{2}}. \tag {34} +$$ + +# J.6. Derivation for $\widetilde{\mathsf{L}}_{\Phi_{\mathrm{sig}}}$ + +For the sigmoid loss function $\Phi_{\mathrm{sig}}(u)\coloneqq 1 - \tanh (ku),k > 0,$ for all $h\in \mathcal{H}_{\mathrm{all}},x\in \mathcal{X},x^{\prime}\in \mathcal{X}$ and $x\neq x^{\prime}$ : + +$$ +\begin{array}{l} \mathcal {C} _ {\widetilde {\mathsf {L}} _ {\Phi_ {\mathrm {s i g}}}} (h, x, x ^ {\prime}) \\ = \eta (x) \left(1 - \eta \left(x ^ {\prime}\right)\right) \Phi_ {\mathrm {s i g}} \left(h (x) - h \left(x ^ {\prime}\right)\right) + \eta \left(x ^ {\prime}\right) \left(1 - \eta (x)\right) \Phi_ {\mathrm {s i g}} \left(h \left(x ^ {\prime}\right) - h (x)\right) \\ = \eta (x) \left(1 - \eta \left(x ^ {\prime}\right)\right) \left(1 - \tanh \left(k \left[ h (x) - h \left(x ^ {\prime}\right) \right]\right)\right) + \eta \left(x ^ {\prime}\right) \left(1 - \eta (x)\right) \left(1 + \tanh \left(k \left[ h (x) - h \left(x ^ {\prime}\right) \right]\right)\right). \\ \end{array} +$$ + +Then, + +$$ +\mathcal {C} _ {\widetilde {\mathsf {L}} _ {\Phi_ {\mathrm {s i g}}}, \mathcal {H} _ {\mathrm {a l l}}} ^ {*} (x, x ^ {\prime}) = \inf _ {h \in \mathcal {H} _ {\mathrm {a l l}}} \mathcal {C} _ {\widetilde {\mathsf {L}} _ {\Phi_ {\mathrm {s i g}}}} (h, x, x ^ {\prime}) = 2 \min \{\eta (x) (1 - \eta (x ^
{\prime})), \eta (x ^ {\prime}) (1 - \eta (x)) \}. +$$ + +Note that for $h_{\widetilde{\mathsf{L}}_{\Phi_{\mathrm{sig}}}, \mathcal{H}_{\mathrm{all}}}^{\alpha}(x) := \alpha \eta(x), \alpha > 0$ , by Lebesgue's dominated convergence theorem, + +$$ +\liminf _ {\alpha \to + \infty} \mathcal {R} _ {\widetilde {\mathsf {L}} _ {\Phi_ {\mathrm {s i g}}}} \left(h _ {\widetilde {\mathsf {L}} _ {\Phi_ {\mathrm {s i g}}}, \mathcal {H} _ {\mathrm {a l l}}} ^ {\alpha}\right) = \mathbb {E} _ {(X, X ^ {\prime})} \left[ \mathcal {C} _ {\widetilde {\mathsf {L}} _ {\Phi_ {\mathrm {s i g}}}, \mathcal {H} _ {\mathrm {a l l}}} ^ {*} (x, x ^ {\prime}) \right] \geq \mathcal {R} _ {\widetilde {\mathsf {L}} _ {\Phi_ {\mathrm {s i g}}}} ^ {*} (\mathcal {H} _ {\mathrm {a l l}}) \geq \mathbb {E} _ {(X, X ^ {\prime})} \left[ \mathcal {C} _ {\widetilde {\mathsf {L}} _ {\Phi_ {\mathrm {s i g}}}, \mathcal {H} _ {\mathrm {a l l}}} ^ {*} (x, x ^ {\prime}) \right]. +$$ + +Therefore, the $(\widetilde{\mathsf{L}}_{\Phi_{\mathrm{sig}}},\mathcal{H}_{\mathrm{all}})$ -minimizability gap is + +$$ +\mathcal {M} _ {\widetilde {\mathsf {L}} _ {\Phi_ {\mathrm {s i g}}}} \left(\mathcal {H} _ {\mathrm {a l l}}\right) = \mathcal {R} _ {\widetilde {\mathsf {L}} _ {\Phi_ {\mathrm {s i g}}}} ^ {*} \left(\mathcal {H} _ {\mathrm {a l l}}\right) - \mathbb {E} _ {(X, X ^ {\prime})} \left[ \mathcal {C} _ {\widetilde {\mathsf {L}} _ {\Phi_ {\mathrm {s i g}}}, \mathcal {H} _ {\mathrm {a l l}}} ^ {*} (x, x ^ {\prime}) \right] = 0.
\tag {35} +$$ + +Furthermore, $\forall h\in \widetilde{\mathcal{H}}_{\mathrm{all}}(x,x')$ + +$$ +\begin{array}{l} \Delta \mathcal {C} _ {\widetilde {\mathsf {L}} _ {\Phi_ {\mathrm {s i g}}}, \mathcal {H} _ {\mathrm {a l l}}} (h, x, x ^ {\prime}) \\ \geq \inf _ {h \in \mathcal {\widetilde {H}} _ {\mathrm {a l l}} (x, x ^ {\prime})} \mathcal {C} _ {\mathcal {\widetilde {L}} _ {\Phi_ {\mathrm {s i g}}}} (h, x, x ^ {\prime}) - \mathcal {C} _ {\mathcal {\widetilde {L}} _ {\Phi_ {\mathrm {s i g}}}, \mathcal {H} _ {\mathrm {a l l}}} ^ {*} (x, x ^ {\prime}) \\ = \eta (x) (1 - \eta (x ^ {\prime})) + \eta (x ^ {\prime}) (1 - \eta (x)) - \mathcal {C} _ {\widetilde {\mathsf {L}} _ {\Phi_ {\mathrm {s i g}}, \mathcal {H} _ {\mathrm {a l l}}}} ^ {*} (x, x ^ {\prime}) \\ = \eta (x) \left(1 - \eta \left(x ^ {\prime}\right)\right) + \eta \left(x ^ {\prime}\right) \left(1 - \eta (x)\right) - 2 \min \left\{\eta (x) \left(1 - \eta \left(x ^ {\prime}\right)\right), \eta \left(x ^ {\prime}\right) \left(1 - \eta (x)\right) \right\} \\ = \left| \eta (x) \left(1 - \eta \left(x ^ {\prime}\right)\right) - \eta \left(x ^ {\prime}\right) \left(1 - \eta (x)\right) \right|. 
\\ \end{array} +$$ + +Similarly, $\forall h\in \mathring{\mathcal{H}}_{\mathrm{all}}(x,x^{\prime})$ + +$$ +\begin{array}{l} \Delta \mathcal {C} _ {\widetilde {\mathsf {L}} _ {\Phi_ {\mathrm {s i g}}}, \mathcal {H} _ {\mathrm {a l l}}} (h, x, x ^ {\prime}) \\ \geq \inf _ {h \in \mathring {\mathcal {H}} _ {\mathrm {a l l}} (x, x ^ {\prime})} \mathcal {C} _ {\widetilde {\mathsf {L}} _ {\Phi_ {\mathrm {s i g}}}} \left(h, x, x ^ {\prime}\right) - \mathcal {C} _ {\widetilde {\mathsf {L}} _ {\Phi_ {\mathrm {s i g}}}, \mathcal {H} _ {\mathrm {a l l}}} ^ {*} \left(x, x ^ {\prime}\right) \\ = \eta (x) \big (1 - \eta (x ^ {\prime}) \big) + \eta (x ^ {\prime}) \big (1 - \eta (x) \big) - \mathcal {C} _ {\widetilde {\mathsf {L}} _ {\Phi_ {\mathrm {s i g}}}, \mathcal {H} _ {\mathrm {a l l}}} ^ {*} (x, x ^ {\prime}) \\ = \eta (x) \left(1 - \eta \left(x ^ {\prime}\right)\right) + \eta \left(x ^ {\prime}\right) \left(1 - \eta (x)\right) - 2 \min \left\{\eta (x) \left(1 - \eta \left(x ^ {\prime}\right)\right), \eta \left(x ^ {\prime}\right) \left(1 - \eta (x)\right) \right\} \\ = \left| \eta (x) \left(1 - \eta \left(x ^ {\prime}\right)\right) - \eta \left(x ^ {\prime}\right) \left(1 - \eta (x)\right) \right|, \\ \end{array} +$$ + +which implies that for any $h \in \mathcal{H}_{\mathrm{all}}$ , $x \in \mathcal{X}$ and $x' \in \mathcal{X}$ , + +$$ +\Delta \mathcal {C} _ {\widetilde {\mathsf {L}} _ {\Phi_ {\mathrm {s i g}}}, \mathcal {H} _ {\mathrm {a l l}}} (h, x, x ^ {\prime}) \geq \Delta \mathcal {C} _ {\widetilde {\mathsf {L}} _ {0 - 1}, \mathcal {H}} (h, x, x ^ {\prime}).
+$$ + +Thus, by Theorem C.1 or Theorem C.2, setting $\epsilon = 0$ yields the $\mathcal{H}_{\mathrm{all}}$ -consistency bound for $\widetilde{\mathsf{L}}_{\Phi_{\mathrm{sig}}}$ , valid for all $h \in \mathcal{H}_{\mathrm{all}}$ : + +$$ +\mathcal {R} _ {\widetilde {\mathsf {L}} _ {0 - 1}} (h) - \mathcal {R} _ {\widetilde {\mathsf {L}} _ {0 - 1}} ^ {*} (\mathcal {H} _ {\text {a l l}}) \leq \mathcal {R} _ {\widetilde {\mathsf {L}} _ {\Phi_ {\text {s i g}}}} (h) - \mathcal {R} _ {\widetilde {\mathsf {L}} _ {\Phi_ {\text {s i g}}}} ^ {*} (\mathcal {H} _ {\text {a l l}}). \tag {36} +$$ + +# K. $\mathcal{H}$ -consistency bounds for pairwise abstention loss + +We first characterize the minimal conditional $\mathsf{L}_{0 - 1}^{\mathrm{abs}}$ -risk and the calibration gap of $\mathsf{L}_{0 - 1}^{\mathrm{abs}}$ for a broad class of hypothesis sets. We let $\overline{\mathcal{H}} (x,x^{\prime}) = \{h\in \mathcal{H}\colon \mathrm{sign}(h(x^{\prime}) - h(x))(2\eta (x,x^{\prime}) - 1)\leq 0\}$ for convenience. + +Lemma K.1. Assume that $\mathcal{H}$ is regular for general pairwise ranking. Then, the minimal conditional $\mathsf{L}_{0 - 1}^{\mathrm{abs}}$ -risk is + +$$ +\mathcal{C}^{*}_{\mathrm{L}^{\mathrm{abs}}_{0 - 1}}(\mathcal{H},x,x^{\prime}) = \min \{\eta (x,x^{\prime}),1 - \eta (x,x^{\prime})\} \mathbb{1}_{\| x - x^{\prime}\| >\gamma} + c\mathbb{1}_{\| x - x^{\prime}\| \leq \gamma}. +$$ + +The calibration gap of $\mathsf{L}_{0 - 1}^{\mathrm{abs}}$ can be characterized as + +$$ +\Delta \mathcal {C} _ {\mathrm {L} _ {0 - 1} ^ {\mathrm {a b s}}, \mathcal {H}} (h, x, x ^ {\prime}) = | 2 \eta (x, x ^ {\prime}) - 1 | \mathbb {1} _ {h \in \overline {{\mathcal {H}}} (x, x ^ {\prime})} \mathbb {1} _ {\| x - x ^ {\prime} \| > \gamma}. +$$ + +Proof.
By the definition, the conditional $\mathsf{L}_{0 - 1}^{\mathrm{abs}}$ -risk is + +$$ +\mathcal {C} _ {\mathrm {L} _ {0 - 1} ^ {\mathrm {a b s}}} (h, x, x ^ {\prime}) = \big (\eta (x, x ^ {\prime}) \mathbb {1} _ {h (x ^ {\prime}) < h (x)} + (1 - \eta (x, x ^ {\prime})) \mathbb {1} _ {h (x ^ {\prime}) \geq h (x)} \big) \mathbb {1} _ {\| x - x ^ {\prime} \| > \gamma} + c \mathbb {1} _ {\| x - x ^ {\prime} \| \leq \gamma}. +$$ + +For any $(x,x^{\prime})$ such that $\| x - x^{\prime}\| \leq \gamma$ and $h\in \mathcal{H}$ , $\mathcal{C}_{\mathsf{L}_{0 - 1}^{\mathrm{abs}}}(h,x,x^{\prime}) = \mathcal{C}_{\mathsf{L}_{0 - 1}^{\mathrm{abs}}}^{*}(\mathcal{H},x,x^{\prime}) = c$ . For any $(x,x^{\prime})$ such that $\| x - x^{\prime}\| >\gamma$ , by the assumption there exists $h^*\in \mathcal{H}$ such that $\mathrm{sign}(h^{*}(x^{\prime}) - h^{*}(x)) = \mathrm{sign}(2\eta (x,x^{\prime}) - 1)$ . Therefore, the optimal conditional $\mathsf{L}_{0 - 1}^{\mathrm{abs}}$ -risk can be characterized, for any $x,x^{\prime}\in \mathcal{X}$ , as + +$$ +\mathcal {C} _ {\mathsf {L} _ {0 - 1} ^ {\mathrm {a b s}}} ^ {*} \big (\mathcal {H}, x, x ^ {\prime} \big) = \mathcal {C} _ {\mathsf {L} _ {0 - 1} ^ {\mathrm {a b s}}} \big (h ^ {*}, x, x ^ {\prime} \big) = \min \{\eta (x, x ^ {\prime}), 1 - \eta (x, x ^ {\prime}) \} \mathbb {1} _ {\| x - x ^ {\prime} \| > \gamma} + c \mathbb {1} _ {\| x - x ^ {\prime} \| \leq \gamma}, +$$ + +which proves the first part of the lemma. By the definition, for any $(x,x^{\prime})$ such that $\| x - x^{\prime}\| \leq \gamma$ and $h\in \mathcal{H}$ , $\Delta \mathcal{C}_{\mathsf{L}_{0 - 1}^{\mathrm{abs}},\mathcal{H}}(h,x,x^{\prime}) = \mathcal{C}_{\mathsf{L}_{0 - 1}^{\mathrm{abs}}}(h,x,x^{\prime}) - \mathcal{C}_{\mathsf{L}_{0 - 1}^{\mathrm{abs}}}^{*}(\mathcal{H},x,x^{\prime}) = 0$ .
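The two conditional-risk formulas above can be checked numerically. Below is a minimal sketch, not part of the proof; the helper `cond_risk` and the toy values for $\eta(x,x')$, the abstention cost $c = 0.3$, the threshold $\gamma = 1$, and the candidate score gaps are all illustrative assumptions:

```python
def cond_risk(h_x, h_xp, eta, dist, gamma=1.0, c=0.3):
    """Conditional L^abs_{0-1}-risk of a pair: the pairwise 0-1 loss when the
    pair is ranked (||x - x'|| > gamma), the abstention cost c otherwise."""
    if dist <= gamma:
        return c
    return eta * (h_xp < h_x) + (1 - eta) * (h_xp >= h_x)

for eta in [0.1, 0.35, 0.5, 0.8]:
    # Case ||x - x'|| > gamma: the best risk over score gaps h(x') - h(x)
    # in {-1, 0, 1} is min{eta, 1 - eta}, attained by a correctly ordered h.
    risks = [cond_risk(0.0, t, eta, dist=2.0) for t in (-1.0, 0.0, 1.0)]
    assert min(risks) == min(eta, 1 - eta)
    # Calibration gap of a misordering h equals |2*eta - 1|.
    h_bad = -1.0 if eta > 0.5 else 1.0
    gap = cond_risk(0.0, h_bad, eta, dist=2.0) - min(eta, 1 - eta)
    assert abs(gap - abs(2 * eta - 1)) < 1e-12

# Case ||x - x'|| <= gamma: every h pays the abstention cost c.
assert cond_risk(0.0, 5.0, 0.9, dist=0.5) == 0.3
```

The brute-force minimum over the three score gaps matches $\min\{\eta(x,x'), 1 - \eta(x,x')\}$, and the excess risk of a mis-ranking hypothesis matches the stated gap $|2\eta(x,x') - 1|$.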
For any $(x,x^{\prime})$ such that $\| x - x^{\prime}\| >\gamma$ and $h\in \mathcal{H}$ , + +$$ +\begin{array}{l} \Delta \mathcal {C} _ {\mathsf {L} _ {0 - 1} ^ {\mathrm {a b s}}, \mathcal {H}} (h, x, x ^ {\prime}) = \mathcal {C} _ {\mathsf {L} _ {0 - 1} ^ {\mathrm {a b s}}} (h, x, x ^ {\prime}) - \mathcal {C} _ {\mathsf {L} _ {0 - 1} ^ {\mathrm {a b s}}} ^ {*} (\mathcal {H}, x, x ^ {\prime}) \\ = \eta (x, x ^ {\prime}) \mathbb {1} _ {h (x ^ {\prime}) < h (x)} + (1 - \eta (x, x ^ {\prime})) \mathbb {1} _ {h (x ^ {\prime}) \geq h (x)} - \min \{\eta (x, x ^ {\prime}), 1 - \eta (x, x ^ {\prime}) \} \\ = \left\{ \begin{array}{l l} | 2 \eta (x, x ^ {\prime}) - 1 |, & h \in \overline {{\mathcal {H}}} (x, x ^ {\prime}), \\ 0, & \text {otherwise}. \end{array} \right. \\ \end{array} +$$ + +This leads to + +$$ +\Delta \mathcal {C} _ {\mathrm {L} _ {0 - 1} ^ {\mathrm {a b s}}, \mathcal {H}} (h, x, x ^ {\prime}) = | 2 \eta (x, x ^ {\prime}) - 1 | \mathbb {1} _ {h \in \overline {{\mathcal {H}}} (x, x ^ {\prime})} \mathbb {1} _ {\| x - x ^ {\prime} \| > \gamma}. +$$ + +# K.1. Linear Hypotheses + +Since $\mathcal{H}_{\mathrm{lin}}$ satisfies the condition of Lemma K.1, the $(\mathsf{L}_{0 - 1}^{\mathrm{abs}},\mathcal{H}_{\mathrm{lin}})$ -minimizability gap can be expressed as follows: + +$$ +\mathcal {M} _ {\mathrm {L} _ {0 - 1} ^ {\mathrm {a b s}}} \left(\mathcal {H} _ {\mathrm {l i n}}\right) = \mathcal {R} _ {\mathrm {L} _ {0 - 1} ^ {\mathrm {a b s}}} ^ {*} \left(\mathcal {H} _ {\mathrm {l i n}}\right) - \mathbb {E} _ {\left(X, X ^ {\prime}\right)} \left[ \min \left\{\eta \left(x, x ^ {\prime}\right), 1 - \eta \left(x, x ^ {\prime}\right) \right\} \mathbb {1} _ {\| x - x ^ {\prime} \| > \gamma} + c \mathbb {1} _ {\| x - x ^ {\prime} \| \leq \gamma} \right].
\tag {37} +$$ + +By the definition of $\mathcal{H}_{\mathrm{lin}}$ , for any $(x,x^{\prime})\in \mathcal{X}\times \mathcal{X}$ , $\left\{h(x^{\prime}) - h(x)\mid h\in \mathcal{H}_{\mathrm{lin}}\right\} = \left[-W\| x - x^{\prime}\|_{p},W\| x - x^{\prime}\|_{p}\right]$ . + +# K.1.1. DERIVATION FOR $\mathsf{L}_{\Phi_{\mathrm{hinge}}}$ + +For the hinge loss function $\Phi_{\mathrm{hinge}}(u)\coloneqq \max \{0,1 - u\}$ , for all $h\in \mathcal{H}_{\mathrm{lin}}$ and $(x,x^{\prime})$ such that $\| x - x^{\prime}\| _p > \gamma$ + +$$ +\begin{array}{l} \mathcal {C} _ {\mathrm {L} _ {\Phi_ {\mathrm {h i n g e}}}} \left(h, x, x ^ {\prime}\right) \\ = \eta (x, x ^ {\prime}) \mathrm {L} _ {\Phi_ {\text {h i n g e}}} \left(h \left(x ^ {\prime}\right) - h (x)\right) + \left(1 - \eta (x, x ^ {\prime})\right) \mathrm {L} _ {\Phi_ {\text {h i n g e}}} \left(h (x) - h \left(x ^ {\prime}\right)\right) \\ = \eta \left(x, x ^ {\prime}\right) \max \left\{0, 1 - h \left(x ^ {\prime}\right) + h (x) \right\} + \left(1 - \eta \left(x, x ^ {\prime}\right)\right) \max \left\{0, 1 + h \left(x ^ {\prime}\right) - h (x) \right\}. \\ \end{array} +$$ + +Then, + +$$ +\mathcal {C} _ {\mathsf {L} _ {\Phi_ {\mathrm {h i n g e}}}, \mathcal {H} _ {\mathrm {l i n}}} ^ {*} (x, x ^ {\prime}) = \inf _ {h \in \mathcal {H} _ {\mathrm {l i n}}} \mathcal {C} _ {\mathsf {L} _ {\Phi_ {\mathrm {h i n g e}}}} (h, x, x ^ {\prime}) = 1 - | 2 \eta (x, x ^ {\prime}) - 1 | \min \big \{W \| x - x ^ {\prime} \| _ {p}, 1 \big \}. 
+$$ + +The $\left(L_{\Phi_{\mathrm{hinge}}},\mathcal{H}_{\mathrm{lin}}\right)$ -minimizability gap is + +$$ +\begin{array}{l} \mathcal {M} _ {\mathrm {L} _ {\Phi_ {\text {h i n g e}}}} \left(\mathcal {H} _ {\text {l i n}}\right) = \mathcal {R} _ {\mathrm {L} _ {\Phi_ {\text {h i n g e}}}} ^ {*} \left(\mathcal {H} _ {\text {l i n}}\right) - \mathbb {E} _ {(X, X ^ {\prime})} \left[ \mathcal {C} _ {\mathrm {L} _ {\Phi_ {\text {h i n g e}}}, \mathcal {H} _ {\text {l i n}}} ^ {*} (x, x ^ {\prime}) \right] \tag {38} \\ = \mathcal {R} _ {\mathsf {L} _ {\Phi_ {\mathrm {h i n g e}}}} ^ {*} \left(\mathcal {H} _ {\mathrm {l i n}}\right) - \mathbb {E} _ {(X, X ^ {\prime})} \Big [ 1 - | 2 \eta (x, x ^ {\prime}) - 1 | \min \big \{W \| x - x ^ {\prime} \| _ {p}, 1 \big \} \Big ]. \\ \end{array} +$$ + +Therefore, $\forall h\in \overline{\mathcal{H}}_{\mathrm{lin}}(x,x')$ + +$$ +\begin{array}{l} \Delta \mathcal {C} _ {\mathrm {L} _ {\Phi_ {\text {h i n g e}}}, \mathcal {H} _ {\text {l i n}}} (h, x, x ^ {\prime}) \\ \geq \inf _ {h \in \overline {{\mathcal {H}}} _ {\mathrm {l i n}} (x, x ^ {\prime})} \mathcal {C} _ {\mathsf {L} _ {\Phi_ {\mathrm {h i n g e}}}} \left(h, x, x ^ {\prime}\right) - \mathcal {C} _ {\mathsf {L} _ {\Phi_ {\mathrm {h i n g e}}}, \mathcal {H} _ {\mathrm {l i n}}} ^ {*} \left(x, x ^ {\prime}\right) \\ = \eta (x, x ^ {\prime}) \max \{0, 1 - 0 \} + (1 - \eta (x, x ^ {\prime})) \max \{0, 1 + 0 \} - \mathcal {C} _ {\mathsf {L} _ {\Phi_ {\mathrm {h i n g e}}}, \mathcal {H} _ {\mathrm {l i n}}} ^ {*} (x, x ^ {\prime}) \\ = 1 - \left[ 1 - \left| 2 \eta \left(x, x ^ {\prime}\right) - 1 \right| \min \left\{W \| x - x ^ {\prime} \| _ {p}, 1 \right\} \right] \\ = \left| 2 \eta \left(x, x ^ {\prime}\right) - 1 \right| \min \left\{W \| x - x ^ {\prime} \| _ {p}, 1 \right\} \\ \geq \left| 2 \eta \left(x, x ^ {\prime}\right) - 1 \right| \min \left\{W \gamma , 1 \right\} \\ \end{array} +$$ + +which implies that for any $h \in \mathcal{H}_{\mathrm{lin}}$ and $(x, x')$ such that 
$\| x - x' \|_p > \gamma$ , + +$$ +\Delta \mathcal {C} _ {\mathrm {L} _ {\Phi_ {\mathrm {h i n g e}}}, \mathcal {H} _ {\mathrm {l i n}}} (h, x, x ^ {\prime}) \geq \min \{W \gamma , 1 \} \langle | 2 \eta (x, x ^ {\prime}) - 1 | \rangle_ {0} \mathbb {1} _ {h \in \overline {{\mathcal {H}}} _ {\mathrm {l i n}} (x, x ^ {\prime})} = \min \{W \gamma , 1 \} \Delta \mathcal {C} _ {\mathrm {L} _ {0 - 1} ^ {\mathrm {a b s}}, \mathcal {H} _ {\mathrm {l i n}}} (h, x, x ^ {\prime}). +$$ + +Thus, by Theorem C.1 or Theorem C.2, setting $\epsilon = 0$ yields the $\mathcal{H}_{\mathrm{lin}}$ -consistency bound for $\mathsf{L}_{\Phi_{\mathrm{hinge}}}$ , valid for all $h \in \mathcal{H}_{\mathrm{lin}}$ : + +$$ +\mathcal {R} _ {\mathrm {L} _ {0 - 1} ^ {\mathrm {a b s}}} (h) - \mathcal {R} _ {\mathrm {L} _ {0 - 1} ^ {\mathrm {a b s}}} ^ {*} (\mathcal {H} _ {\mathrm {l i n}}) \leq \frac {\mathcal {R} _ {\mathrm {L} _ {\Phi_ {\mathrm {h i n g e}}}} (h) - \mathcal {R} _ {\mathrm {L} _ {\Phi_ {\mathrm {h i n g e}}}} ^ {*} (\mathcal {H} _ {\mathrm {l i n}}) + \mathcal {M} _ {\mathrm {L} _ {\Phi_ {\mathrm {h i n g e}}}} (\mathcal {H} _ {\mathrm {l i n}})}{\min \{W \gamma , 1 \}} - \mathcal {M} _ {\mathrm {L} _ {0 - 1} ^ {\mathrm {a b s}}} (\mathcal {H} _ {\mathrm {l i n}}). \tag {39} +$$ + +# K.1.2.
DERIVATION FOR $\mathsf{L}_{\Phi_{\rho}}$ + +For the $\rho$ -margin loss function $\Phi_{\rho}(u) := \min \left\{1, \max \left\{0, 1 - \frac{u}{\rho}\right\} \right\}$ , $\rho > 0$ , for all $h \in \mathcal{H}_{\mathrm{lin}}$ and $(x, x')$ such that $\|x - x'\|_p > \gamma$ , + +$$ +\begin{array}{l} \mathcal {C} _ {\mathrm {L} _ {\Phi_ {\rho}}} (h, x, x ^ {\prime}) \\ = \eta \left(x, x ^ {\prime}\right) \mathrm {L} _ {\Phi_ {\rho}} \left(h \left(x ^ {\prime}\right) - h (x)\right) + \left(1 - \eta \left(x, x ^ {\prime}\right)\right) \mathrm {L} _ {\Phi_ {\rho}} \left(h (x) - h \left(x ^ {\prime}\right)\right) \\ = \eta (x, x ^ {\prime}) \min \left\{1, \max \left\{0, 1 - \frac {h \left(x ^ {\prime}\right) - h (x)}{\rho} \right\} \right\} + (1 - \eta (x, x ^ {\prime})) \min \left\{1, \max \left\{0, 1 + \frac {h \left(x ^ {\prime}\right) - h (x)}{\rho} \right\} \right\}. \\ \end{array} +$$ + +Then, + +$$ +\begin{array}{l} \mathcal {C} _ {\mathsf {L} _ {\Phi_ {\rho}}, \mathcal {H} _ {\mathrm {l i n}}} ^ {*} (x, x ^ {\prime}) = \inf _ {h \in \mathcal {H} _ {\mathrm {l i n}}} \mathcal {C} _ {\mathsf {L} _ {\Phi_ {\rho}}} (h, x, x ^ {\prime}) \\ = \min \left\{\eta \left(x, x ^ {\prime}\right), 1 - \eta \left(x, x ^ {\prime}\right) \right\} + \max \left\{\eta \left(x, x ^ {\prime}\right), 1 - \eta \left(x, x ^ {\prime}\right) \right\} \left(1 - \frac {\min \left\{W \| x - x ^ {\prime} \| _ {p} , \rho \right\}}{\rho}\right) \\ \end{array} +$$ + +The $(\mathsf{L}_{\Phi_{\rho}},\mathcal{H}_{\mathrm{lin}})$ -minimizability gap is + +$$ +\begin{array}{l} \mathcal {M} _ {\mathrm {L} _ {\Phi \rho}} (\mathcal {H} _ {\mathrm {l i n}}) \\ = \mathcal {R} _ {\mathsf {L} _ {\Phi_ {\rho}}} ^ {*} \left(\mathcal {H} _ {\mathrm {l i n}}\right) - \mathbb {E} _ {(X, X ^ {\prime})} \left[ \mathcal {C} _ {\mathsf {L} _ {\Phi_ {\rho}}, \mathcal {H} _ {\mathrm {l i n}}} ^ {*} (x, x ^ {\prime}) \right]. 
\\ = \mathcal {R} _ {\mathrm {L} _ {\Phi_ {\rho}}} ^ {*} \left(\mathcal {H} _ {\mathrm {l i n}}\right) - \mathbb {E} _ {\left(X, X ^ {\prime}\right)} \left[ \min \left\{\eta \left(x, x ^ {\prime}\right), 1 - \eta \left(x, x ^ {\prime}\right) \right\} + \max \left\{\eta \left(x, x ^ {\prime}\right), 1 - \eta \left(x, x ^ {\prime}\right) \right\} \left(1 - \frac {\min \left\{W \| x - x ^ {\prime} \| _ {p} , \rho \right\}}{\rho}\right) \right]. \tag {40} \\ \end{array} +$$ + +Therefore, $\forall h\in \overline{\mathcal{H}}_{\mathrm{lin}}(x,x')$ + +$$ +\begin{array}{l} \Delta \mathcal {C} _ {\mathrm {L} _ {\Phi_ {\rho}}, \mathcal {H} _ {\mathrm {l i n}}} (h, x, x ^ {\prime}) \\ \geq \inf _ {h \in \overline {{\mathcal {H}}} _ {\mathrm {l i n}} (x, x ^ {\prime})} \mathcal {C} _ {\mathsf {L} _ {\Phi_ {\rho}}} \left(h, x, x ^ {\prime}\right) - \mathcal {C} _ {\mathsf {L} _ {\Phi_ {\rho}}, \mathcal {H} _ {\mathrm {l i n}}} ^ {*} \left(x, x ^ {\prime}\right) \\ = \max \left\{\eta \left(x, x ^ {\prime}\right), 1 - \eta \left(x, x ^ {\prime}\right) \right\} + \min \left\{\eta \left(x, x ^ {\prime}\right), 1 - \eta \left(x, x ^ {\prime}\right) \right\} \left(1 - \frac {\min \left\{W \| x - x ^ {\prime} \| _ {p} , \rho \right\}}{\rho}\right) - \mathcal {C} _ {\mathrm {L} _ {\Phi_ {\rho}}, \mathcal {H} _ {\mathrm {l i n}}} ^ {*} (x, x ^ {\prime}) \\ = \left| 2 \eta \left(x, x ^ {\prime}\right) - 1 \right| \frac {\min \left\{W \| x - x ^ {\prime} \| _ {p} , \rho \right\}}{\rho} \\ \geq \left| 2 \eta \left(x, x ^ {\prime}\right) - 1 \right| \frac {\min \left\{W \gamma , \rho \right\}}{\rho} \\ \end{array} +$$ + +which implies that for any $h \in \mathcal{H}_{\mathrm{lin}}$ and $(x, x')$ such that $\| x - x' \|_p > \gamma$ , + +$$ +\Delta \mathcal {C} _ {\mathrm {L} _ {\Phi_ {\rho}}, \mathcal {H} _ {\mathrm {l i n}}} (h, x, x ^ {\prime}) \geq \frac {\min \{W \gamma , \rho \}}{\rho} \langle | 2 \eta (x, x ^ {\prime}) - 1 | \rangle_ {0} \mathbb {1} _ {h \in
\overline {{\mathcal {H}}} _ {\mathrm {l i n}} (x, x ^ {\prime})} = \frac {\min \{W \gamma , \rho \}}{\rho} \Delta \mathcal {C} _ {\mathrm {L} _ {0 - 1} ^ {\mathrm {a b s}}, \mathcal {H} _ {\mathrm {l i n}}} (h, x, x ^ {\prime}). +$$ + +Thus, by Theorem C.1 or Theorem C.2, setting $\epsilon = 0$ yields the $\mathcal{H}_{\mathrm{lin}}$ -consistency bound for $\mathsf{L}_{\Phi_{\rho}}$ , valid for all $h \in \mathcal{H}_{\mathrm{lin}}$ : + +$$ +\mathcal {R} _ {\mathrm {L} _ {0 - 1} ^ {\mathrm {a b s}}} (h) - \mathcal {R} _ {\mathrm {L} _ {0 - 1} ^ {\mathrm {a b s}}} ^ {*} (\mathcal {H} _ {\mathrm {l i n}}) \leq \frac {\rho \left(\mathcal {R} _ {\mathrm {L} _ {\Phi_ {\rho}}} (h) - \mathcal {R} _ {\mathrm {L} _ {\Phi_ {\rho}}} ^ {*} (\mathcal {H} _ {\mathrm {l i n}}) + \mathcal {M} _ {\mathrm {L} _ {\Phi_ {\rho}}} (\mathcal {H} _ {\mathrm {l i n}})\right)}{\min \{W \gamma , \rho \}} - \mathcal {M} _ {\mathrm {L} _ {0 - 1} ^ {\mathrm {a b s}}} (\mathcal {H} _ {\mathrm {l i n}}). \tag {41} +$$ + +# K.1.3. DERIVATION FOR $\mathsf{L}_{\Phi_{\mathrm{exp}}}$ + +For the exponential loss function $\Phi_{\mathrm{exp}}(u)\coloneqq e^{-u}$ , for all $h\in \mathcal{H}_{\mathrm{lin}}$ and $(x,x^{\prime})$ such that $\| x - x^{\prime}\| _p > \gamma$ , + +$$ +\begin{array}{l} \mathcal {C} _ {\mathrm {L} _ {\Phi_ {\exp}}} (h, x, x ^ {\prime}) = \eta (x, x ^ {\prime}) \mathrm {L} _ {\Phi_ {\exp}} (h (x ^ {\prime}) - h (x)) + (1 - \eta (x, x ^ {\prime})) \mathrm {L} _ {\Phi_ {\exp}} (h (x) - h (x ^ {\prime})) \\ = \eta (x, x ^ {\prime}) e ^ {- h \left(x ^ {\prime}\right) + h (x)} + (1 - \eta (x, x ^ {\prime})) e ^ {h \left(x ^ {\prime}\right) - h (x)}.
\\ \end{array} +$$ + +Then, + +$$ +\begin{array}{l} \mathcal {C} _ {\mathsf {L} _ {\Phi_ {\mathrm {e x p}}}, \mathcal {H} _ {\mathrm {l i n}}} ^ {*} (x, x ^ {\prime}) \\ = \inf _ {h \in \mathcal {H} _ {\mathrm {l i n}}} \mathcal {C} _ {\mathrm {L} _ {\Phi_ {\exp}}} \left(h, x, x ^ {\prime}\right) \\ = \left\{ \begin{array}{l l} 2 \sqrt {\eta (x , x ^ {\prime}) (1 - \eta (x , x ^ {\prime}))} & \frac {1}{2} \Big | \log \frac {\eta (x , x ^ {\prime})}{1 - \eta (x , x ^ {\prime})} \Big | \leq W \| x - x ^ {\prime} \| _ {p} \\ \max \{\eta (x, x ^ {\prime}), 1 - \eta (x, x ^ {\prime}) \} e ^ {- W \| x - x ^ {\prime} \| _ {p}} + \min \{\eta (x, x ^ {\prime}), 1 - \eta (x, x ^ {\prime}) \} e ^ {W \| x - x ^ {\prime} \| _ {p}} & \frac {1}{2} \Big | \log \frac {\eta (x , x ^ {\prime})}{1 - \eta (x , x ^ {\prime})} \Big | > W \| x - x ^ {\prime} \| _ {p}. \end{array} \right. \\ \end{array} +$$ + +The $(\mathsf{L}_{\Phi_{\mathrm{exp}}},\mathcal{H}_{\mathrm{lin}})$ -minimizability gap is: + +$$ +\begin{array}{l} \mathcal {M} _ {\mathrm {L} _ {\Phi_ {\mathrm {e x p}}}} \left(\mathcal {H} _ {\mathrm {l i n}}\right) = \mathcal {R} _ {\mathrm {L} _ {\Phi_ {\mathrm {e x p}}}} ^ {*} \left(\mathcal {H} _ {\mathrm {l i n}}\right) - \mathbb {E} _ {(X, X ^ {\prime})} \left[ \mathcal {C} _ {\mathrm {L} _ {\Phi_ {\mathrm {e x p}}}, \mathcal {H} _ {\mathrm {l i n}}} ^ {*} \left(x, x ^ {\prime}\right) \right] \\ = \mathcal {R} _ {\mathsf {L} _ {\Phi_ {\exp}}} ^ {*} \left(\mathcal {H} _ {\mathrm {l i n}}\right) - \mathbb {E} _ {(X, X ^ {\prime})} \left[ 2 \sqrt {\eta (x , x ^ {\prime}) (1 - \eta (x , x ^ {\prime}))} \mathbb {1} _ {\frac {1}{2} \left| \log \frac {\eta (x , x ^ {\prime})}{1 - \eta (x , x ^ {\prime})} \right| \leq W \| x - x ^ {\prime} \| _ {p}} \right] \\ - \mathbb {E} _ {(X, X ^ {\prime})} \left[ \max \left\{\eta \left(x, x ^ {\prime}\right), 1 - \eta \left(x, x ^ {\prime}\right) \right\} e ^ {- W \left\| x - x ^ {\prime} \right\| _ {p}} \mathbb {1} _ {\frac {1}{2} \left| \log \frac {\eta (x , x ^ {\prime})}{1 - \eta (x , x ^ {\prime})} \right| > W \| x - x ^ {\prime} \| _ {p}} \right] \tag {42} \\ - \mathbb {E} _ {(X, X ^ {\prime})} \left[ \min \{\eta (x, x ^ {\prime}), 1 - \eta (x, x ^ {\prime}) \} e ^ {W \| x - x ^ {\prime} \| _ {p}} \mathbb {1} _ {\frac {1}{2} \left| \log \frac {\eta (x , x ^ {\prime})}{1 - \eta (x , x ^ {\prime})} \right| > W \| x - x ^ {\prime} \| _ {p}} \right]. \\ \end{array} +$$ + +Therefore, $\forall h\in \overline{\mathcal{H}}_{\mathrm{lin}}(x,x')$ + +$$ +\begin{array}{l} \Delta \mathcal {C} _ {\mathrm {L} _ {\Phi_ {\exp}}, \mathcal {H} _ {\mathrm {l i n}}} (h, x, x ^ {\prime}) \\ \geq \inf _ {h \in \overline {{\mathcal {H}}} _ {\mathrm {l i n}} (x, x ^ {\prime})} \mathcal {C} _ {\mathsf {L} _ {\Phi_ {\exp}}} (h, x, x ^ {\prime}) - \mathcal {C} _ {\mathsf {L} _ {\Phi_ {\exp}}, \mathcal {H} _ {\mathrm {l i n}}} ^ {*} (x, x ^ {\prime}) \\ = \eta (x, x ^ {\prime}) e ^ {- 0} + (1 - \eta (x, x ^ {\prime})) e ^ {0} - \mathcal {C} _ {\mathsf {L} _ {\Phi_ {\exp}}, \mathcal {H} _ {\mathrm {l i n}}} ^ {*} (x, x ^ {\prime}) \\ = \left\{ \begin{array}{l l} 1 - 2 \sqrt {\eta (x , x ^ {\prime}) (1 - \eta (x , x ^ {\prime}))} & \frac {1}{2} \left| \log \frac {\eta (x , x ^ {\prime})}{1 - \eta (x , x ^ {\prime})} \right| \leq W \| x - x ^ {\prime} \| _ {p} \\ 1 - \max \{\eta (x, x ^ {\prime}), 1 - \eta (x, x ^ {\prime}) \} e ^ {- W \| x - x ^ {\prime} \| _ {p}} - \min \{\eta (x, x ^ {\prime}), 1 - \eta (x, x ^ {\prime}) \} e ^ {W \| x - x ^ {\prime} \| _ {p}} & \frac {1}{2} \left| \log \frac {\eta (x , x ^ {\prime})}{1 - \eta (x , x ^ {\prime})} \right| > W \| x - x ^ {\prime} \| _ {p} \end{array} \right.
\\ \geq \left\{ \begin{array}{l l} 1 - 2 \sqrt {\eta (x , x ^ {\prime}) (1 - \eta (x , x ^ {\prime}))} & \frac {1}{2} \Big | \log \frac {\eta (x , x ^ {\prime})}{1 - \eta (x , x ^ {\prime})} \Big | \leq W \gamma \\ 1 - \max \{\eta (x, x ^ {\prime}), 1 - \eta (x, x ^ {\prime}) \} e ^ {- W \gamma} - \min \{\eta (x, x ^ {\prime}), 1 - \eta (x, x ^ {\prime}) \} e ^ {W \gamma} & \frac {1}{2} \Big | \log \frac {\eta (x , x ^ {\prime})}{1 - \eta (x , x ^ {\prime})} \Big | > W \gamma \end{array} \right. \\ = \Psi_ {\exp} \left(\left| 2 \eta (x, x ^ {\prime}) - 1 \right|\right), \\ \end{array} +$$ + +where $\Psi_{\mathrm{exp}}$ is the increasing and convex function on $[0,1]$ defined by + +$$ +\forall t \in [ 0, 1 ], \quad \Psi_ {\exp} (t) = \left\{ \begin{array}{l l} 1 - \sqrt {1 - t ^ {2}}, & t \leq \frac {e ^ {2 W \gamma} - 1}{e ^ {2 W \gamma} + 1} \\ 1 - \frac {t + 1}{2} e ^ {- W \gamma} - \frac {1 - t}{2} e ^ {W \gamma}, & t > \frac {e ^ {2 W \gamma} - 1}{e ^ {2 W \gamma} + 1} \end{array} \right. +$$ + +which implies that for any $h \in \mathcal{H}_{\mathrm{lin}}$ and $(x, x')$ such that $\| x - x' \|_p > \gamma$ , + +$$ +\Delta \mathcal {C} _ {\mathsf {L} _ {\Phi_ {\mathrm {e x p}}}, \mathcal {H} _ {\mathrm {l i n}}} (h, x, x ^ {\prime}) \geq \Psi_ {\mathrm {e x p}} \Big (\Delta \mathcal {C} _ {\mathsf {L} _ {0 - 1} ^ {\mathrm {a b s}}, \mathcal {H} _ {\mathrm {l i n}}} (h, x, x ^ {\prime}) \Big). 
+$$ + +To simplify the expression, we use the facts that + +$$ +1 - \sqrt {1 - t ^ {2}} \geq \frac {t ^ {2}}{2}, +$$ + +$$ +1 - \frac {t + 1}{2} e ^ {- W \gamma} - \frac {1 - t}{2} e ^ {W \gamma} = 1 - \frac {e ^ {W \gamma}}{2} - \frac {e ^ {- W \gamma}}{2} + \frac {e ^ {W \gamma} - e ^ {- W \gamma}}{2} t, +$$ + +so $\Psi_{\mathrm{exp}}$ can be lower bounded by + +$$ +\widetilde {\Psi} _ {\exp} (t) = \left\{ \begin{array}{l l} \frac {t ^ {2}}{2}, & t \leq \frac {e ^ {2 W \gamma} - 1}{e ^ {2 W \gamma} + 1} \\ \frac {1}{2} \Big (\frac {e ^ {2 W \gamma} - 1}{e ^ {2 W \gamma} + 1} \Big) t, & t > \frac {e ^ {2 W \gamma} - 1}{e ^ {2 W \gamma} + 1}. \end{array} \right. +$$ + +Thus, we adopt an upper bound of $\Psi_{\mathrm{exp}}^{-1}$ as follows: + +$$ +\begin{array}{l} \Gamma_ {\exp} (t) = \widetilde {\Psi} _ {\exp} ^ {- 1} (t) = \left\{ \begin{array}{l l} \sqrt {2 t}, & t \leq \frac {1}{2} \left(\frac {e ^ {2 W \gamma} - 1}{e ^ {2 W \gamma} + 1}\right) ^ {2} \\ 2 \left(\frac {e ^ {2 W \gamma} + 1}{e ^ {2 W \gamma} - 1}\right) t, & t > \frac {1}{2} \left(\frac {e ^ {2 W \gamma} - 1}{e ^ {2 W \gamma} + 1}\right) ^ {2} \end{array} \right. \\ = \max \left\{\sqrt {2 t}, 2 \left(\frac {e ^ {2 W \gamma} + 1}{e ^ {2 W \gamma} - 1}\right) t \right\}. \\ \end{array} +$$ + +Thus, by Theorem C.1 or Theorem C.2, setting $\epsilon = 0$ yields the $\mathcal{H}_{\mathrm{lin}}$-consistency bound for $\mathsf{L}_{\Phi_{\mathrm{exp}}}$, valid for all $h \in \mathcal{H}_{\mathrm{lin}}$: + +$$ +\mathcal {R} _ {\mathsf {L} _ {0 - 1} ^ {\mathrm {abs}}} (h) - \mathcal {R} _ {\mathsf {L} _ {0 - 1} ^ {\mathrm {abs}}} ^ {*} (\mathcal {H} _ {\mathrm {lin}}) \leq \Gamma_ {\exp} \left(\mathcal {R} _ {\mathsf {L} _ {\Phi_ {\exp}}} (h) - \mathcal {R} _ {\mathsf {L} _ {\Phi_ {\exp}}} ^ {*} (\mathcal {H} _ {\mathrm {lin}}) + \mathcal {M} _ {\mathsf {L} _ {\Phi_ {\exp}}} (\mathcal {H} _ {\mathrm {lin}})\right) - \mathcal {M} _ {\mathsf {L} _ {0 - 1} ^ {\mathrm {abs}}} (\mathcal {H} _ {\mathrm {lin}}).
\tag {43} +$$ + +where $\Gamma_{\exp}(t) = \max \left\{\sqrt{2t}, 2\left(\frac{e^{2W\gamma} + 1}{e^{2W\gamma} - 1}\right)t\right\}$. + +# K.1.4. DERIVATION FOR $\mathsf{L}_{\Phi_{\log}}$ + +For the logistic loss function $\Phi_{\log}(u) := \log_2(1 + e^{-u})$, for all $h \in \mathcal{H}_{\mathrm{lin}}$ and $(x, x')$ such that $\|x - x'\|_p > \gamma$, + +$$ +\begin{array}{l} \mathcal {C} _ {\mathsf {L} _ {\Phi_ {\log}}} (h, x, x ^ {\prime}) = \eta (x, x ^ {\prime}) \mathsf {L} _ {\Phi_ {\log}} (h (x ^ {\prime}) - h (x)) + (1 - \eta (x, x ^ {\prime})) \mathsf {L} _ {\Phi_ {\log}} (h (x) - h (x ^ {\prime})) \\ = \eta (x, x ^ {\prime}) \log_ {2} \Big (1 + e ^ {- h (x ^ {\prime}) + h (x)} \Big) + (1 - \eta (x, x ^ {\prime})) \log_ {2} \Big (1 + e ^ {h (x ^ {\prime}) - h (x)} \Big). \\ \end{array} +$$ + +Then, + +$$ +\begin{array}{l} \mathcal {C} _ {\mathsf {L} _ {\Phi_ {\log}}, \mathcal {H} _ {\mathrm {lin}}} ^ {*} (x, x ^ {\prime}) \\ = \inf _ {h \in \mathcal {H} _ {\mathrm {lin}}} \mathcal {C} _ {\mathsf {L} _ {\Phi_ {\log}}} (h, x, x ^ {\prime}) \\ = \left\{ \begin{array}{l} - \eta (x, x ^ {\prime}) \log_ {2} (\eta (x, x ^ {\prime})) - (1 - \eta (x, x ^ {\prime})) \log_ {2} (1 - \eta (x, x ^ {\prime})) \\ \quad \text {if} \left| \log \frac {\eta (x, x ^ {\prime})}{1 - \eta (x, x ^ {\prime})} \right| \leq W \| x - x ^ {\prime} \| _ {p} \\ \max \{\eta (x, x ^ {\prime}), 1 - \eta (x, x ^ {\prime}) \} \log_ {2} \Big (1 + e ^ {- W \| x - x ^ {\prime} \| _ {p}} \Big) + \min \{\eta (x, x ^ {\prime}), 1 - \eta (x, x ^ {\prime}) \} \log_ {2} \Big (1 + e ^ {W \| x - x ^ {\prime} \| _ {p}} \Big) \\ \quad \text {if} \left| \log \frac {\eta (x, x ^ {\prime})}{1 - \eta (x, x ^ {\prime})} \right| > W \| x - x ^ {\prime} \| _ {p}. \end{array} \right.
\\ \end{array} +$$ + +The $(\mathsf{L}_{\Phi_{\log}},\mathcal{H}_{\mathrm{lin}})$ -minimizability gap is: + +$$ +\begin{array}{l} \mathcal {M} _ {\mathrm {L} _ {\Phi_ {\log}}} \left(\mathcal {H} _ {\mathrm {l i n}}\right) \\ = \mathcal {R} _ {\mathsf {L} _ {\Phi_ {\log}}} ^ {*} \left(\mathcal {H} _ {\mathrm {l i n}}\right) - \mathbb {E} _ {(X, X ^ {\prime})} \left[ \mathcal {C} _ {\mathsf {L} _ {\Phi_ {\log}}, \mathcal {H} _ {\mathrm {l i n}}} ^ {*} (x, x ^ {\prime}) \right] \\ = \mathcal {R} _ {\mathrm {L} _ {\Phi_ {\log}}} ^ {*} (\mathcal {H} _ {\operatorname {l i n}}) - \mathbb {E} _ {(X, X ^ {\prime})} \left[ - \eta (x, x ^ {\prime}) \log_ {2} (\eta (x, x ^ {\prime})) - (1 - \eta (x, x ^ {\prime})) \log_ {2} (1 - \eta (x, x ^ {\prime})) \mathbb {1} _ {\left| \log \frac {\eta (x , x ^ {\prime})}{1 - \eta (x , x ^ {\prime})} \right| \leq W \| x - x ^ {\prime} \| _ {p}} \right] \tag {44} \\ - \mathbb {E} _ {(X, X ^ {\prime})} \left[ \max \left\{\eta (x, x ^ {\prime}), 1 - \eta (x, x ^ {\prime}) \right\} \log_ {2} \left(1 + e ^ {- W \| x - x ^ {\prime} \| _ {p}}\right) \mathbb {1} _ {\left| \log \frac {\eta (x , x ^ {\prime})}{1 - \eta (x , x ^ {\prime})} \right| > W \| x - x ^ {\prime} \| _ {p}} \right] \\ - \mathbb {E} _ {(X, X ^ {\prime})} \Big [ \min \{\eta (x, x ^ {\prime}), 1 - \eta (x, x ^ {\prime}) \} \log_ {2} \Big (1 + e ^ {W \| x - x ^ {\prime} \| _ {p}} \Big) \mathbb {1} _ {\left| \log \frac {\eta (x , x ^ {\prime})}{1 - \eta (x , x ^ {\prime})} \right| > W \| x - x ^ {\prime} \| _ {p}} \Big ]. 
\\ \end{array} +$$ + +Therefore, $\forall h\in \overline{\mathcal{H}}_{\mathrm{lin}}(x,x')$, + +$$ +\begin{array}{l} \Delta \mathcal {C} _ {\mathsf {L} _ {\Phi_ {\log}}, \mathcal {H} _ {\mathrm {lin}}} (h, x, x ^ {\prime}) \\ \geq \inf _ {h \in \overline {{\mathcal {H}}} _ {\mathrm {lin}} (x, x ^ {\prime})} \mathcal {C} _ {\mathsf {L} _ {\Phi_ {\log}}} (h, x, x ^ {\prime}) - \mathcal {C} _ {\mathsf {L} _ {\Phi_ {\log}}, \mathcal {H} _ {\mathrm {lin}}} ^ {*} (x, x ^ {\prime}) \\ = \eta (x, x ^ {\prime}) \log_ {2} (1 + e ^ {- 0}) + (1 - \eta (x, x ^ {\prime})) \log_ {2} (1 + e ^ {0}) - \mathcal {C} _ {\mathsf {L} _ {\Phi_ {\log}}, \mathcal {H} _ {\mathrm {lin}}} ^ {*} (x, x ^ {\prime}) \\ = \left\{ \begin{array}{l} 1 + \eta (x, x ^ {\prime}) \log_ {2} (\eta (x, x ^ {\prime})) + (1 - \eta (x, x ^ {\prime})) \log_ {2} (1 - \eta (x, x ^ {\prime})) \\ \quad \text {if} \left| \log \frac {\eta (x, x ^ {\prime})}{1 - \eta (x, x ^ {\prime})} \right| \leq W \| x - x ^ {\prime} \| _ {p} \\ 1 - \max \{\eta (x, x ^ {\prime}), 1 - \eta (x, x ^ {\prime}) \} \log_ {2} \left(1 + e ^ {- W \| x - x ^ {\prime} \| _ {p}}\right) - \min \{\eta (x, x ^ {\prime}), 1 - \eta (x, x ^ {\prime}) \} \log_ {2} \left(1 + e ^ {W \| x - x ^ {\prime} \| _ {p}}\right) \\ \quad \text {if} \left| \log \frac {\eta (x, x ^ {\prime})}{1 - \eta (x, x ^ {\prime})} \right| > W \| x - x ^ {\prime} \| _ {p} \end{array} \right.
\\ \geq \left\{ \begin{array}{l} 1 + \eta (x, x ^ {\prime}) \log_ {2} (\eta (x, x ^ {\prime})) + (1 - \eta (x, x ^ {\prime})) \log_ {2} (1 - \eta (x, x ^ {\prime})) \\ \quad \text {if} \left| \log \frac {\eta (x, x ^ {\prime})}{1 - \eta (x, x ^ {\prime})} \right| \leq W \gamma \\ 1 - \max \{\eta (x, x ^ {\prime}), 1 - \eta (x, x ^ {\prime}) \} \log_ {2} \big (1 + e ^ {- W \gamma} \big) - \min \{\eta (x, x ^ {\prime}), 1 - \eta (x, x ^ {\prime}) \} \log_ {2} \big (1 + e ^ {W \gamma} \big) \\ \quad \text {if} \left| \log \frac {\eta (x, x ^ {\prime})}{1 - \eta (x, x ^ {\prime})} \right| > W \gamma \end{array} \right. \\ = \Psi_ {\log} \left(| 2 \eta (x, x ^ {\prime}) - 1 |\right) \\ \end{array} +$$ + +where $\Psi_{\log}$ is the increasing and convex function on $[0,1]$ defined by + +$$ +\forall t \in [ 0, 1 ], \quad \Psi_ {\log} (t) = \left\{ \begin{array}{l l} \frac {t + 1}{2} \log_ {2} (t + 1) + \frac {1 - t}{2} \log_ {2} (1 - t), & t \leq \frac {e ^ {W \gamma} - 1}{e ^ {W \gamma} + 1} \\ 1 - \frac {t + 1}{2} \log_ {2} (1 + e ^ {- W \gamma}) - \frac {1 - t}{2} \log_ {2} (1 + e ^ {W \gamma}), & t > \frac {e ^ {W \gamma} - 1}{e ^ {W \gamma} + 1} \end{array} \right. +$$ + +which implies that for any $h \in \mathcal{H}_{\mathrm{lin}}$ and $(x, x')$ such that $\| x - x' \|_p > \gamma$, + +$$ +\Delta \mathcal {C} _ {\mathsf {L} _ {\Phi_ {\log}}, \mathcal {H} _ {\mathrm {lin}}} (h, x, x ^ {\prime}) \geq \Psi_ {\log} \Big (\Delta \mathcal {C} _ {\mathsf {L} _ {0 - 1} ^ {\mathrm {abs}}, \mathcal {H} _ {\mathrm {lin}}} (h, x, x ^ {\prime}) \Big).
+$$ + +To simplify the expression, using the fact that + +$$ +\begin{array}{l} \frac {t + 1}{2} \log_ {2} (t + 1) + \frac {1 - t}{2} \log_ {2} (1 - t) = 1 - \left(- \frac {t + 1}{2} \log_ {2} \left(\frac {t + 1}{2}\right) - \frac {1 - t}{2} \log_ {2} \left(\frac {1 - t}{2}\right)\right) \\ \geq 1 - \sqrt {4 \frac {1 - t}{2} \frac {t + 1}{2}} \\ = 1 - \sqrt {1 - t ^ {2}} \\ \geq \frac {t ^ {2}}{2}, \\ \end{array} +$$ + +$$ +1 - \frac {t + 1}{2} \log_ {2} (1 + e ^ {- W \gamma}) - \frac {1 - t}{2} \log_ {2} (1 + e ^ {W \gamma}) = \frac {1}{2} \log_ {2} \left(\frac {4}{2 + e ^ {- W \gamma} + e ^ {W \gamma}}\right) + \frac {1}{2} \log_ {2} \left(\frac {1 + e ^ {W \gamma}}{1 + e ^ {- W \gamma}}\right) t, +$$ + +$\Psi_{\mathrm{log}}$ can be lower bounded by + +$$ +\widetilde {\Psi} _ {\log} (t) = \left\{ \begin{array}{l l} \frac {t ^ {2}}{2}, & t \leq \frac {e ^ {W \gamma} - 1}{e ^ {W \gamma} + 1} \\ \frac {1}{2} \Big (\frac {e ^ {W \gamma} - 1}{e ^ {W \gamma} + 1} \Big) t, & t > \frac {e ^ {W \gamma} - 1}{e ^ {W \gamma} + 1} \end{array} \right. +$$ + +Thus, we adopt an upper bound of $\Psi_{\log}^{-1}$ as follows: + +$$ +\begin{array}{l} \Gamma_ {\log} (t) = \widetilde {\Psi} _ {\log} ^ {- 1} (t) = \left\{ \begin{array}{l l} \sqrt {2 t}, & t \leq \frac {1}{2} \Big (\frac {e ^ {W \gamma} - 1}{e ^ {W \gamma} + 1} \Big) ^ {2} \\ 2 \Big (\frac {e ^ {W \gamma} + 1}{e ^ {W \gamma} - 1} \Big) t, & t > \frac {1}{2} \Big (\frac {e ^ {W \gamma} - 1}{e ^ {W \gamma} + 1} \Big) ^ {2} \end{array} \right. \\ = \max \left\{\sqrt {2 t}, 2 \left(\frac {e ^ {W \gamma} + 1}{e ^ {W \gamma} - 1}\right) t \right\}. 
\\ \end{array} +$$ + +Thus, by Theorem C.1 or Theorem C.2, setting $\epsilon = 0$ yields the $\mathcal{H}_{\mathrm{lin}}$ -consistency bound for $\mathsf{L}_{\Phi_{\log}}$ , valid for all $h \in \mathcal{H}_{\mathrm{lin}}$ : + +$$ +\mathcal {R} _ {\mathsf {L} _ {0 - 1} ^ {\mathrm {a b s}}} (h) - \mathcal {R} _ {\mathsf {L} _ {0 - 1} ^ {\mathrm {a b s}}} ^ {*} (\mathcal {H} _ {\mathrm {l i n}}) \leq \Gamma_ {\log} \left(\mathcal {R} _ {\mathsf {L} _ {\Phi_ {\log}}} (h) - \mathcal {R} _ {\mathsf {L} _ {\Phi_ {\log}}} ^ {*} (\mathcal {H} _ {\mathrm {l i n}}) + \mathcal {M} _ {\mathsf {L} _ {\Phi_ {\log}}} (\mathcal {H} _ {\mathrm {l i n}})\right) - \mathcal {M} _ {\mathsf {L} _ {0 - 1} ^ {\mathrm {a b s}}} (\mathcal {H} _ {\mathrm {l i n}}). \tag {45} +$$ + +where $\Gamma_{\log}(t) = \max \left\{\sqrt{2t}, 2\left(\frac{e^{W\gamma} + 1}{e^{W\gamma} - 1}\right)t\right\}$ . + +# K.1.5. DERIVATION FOR $\mathsf{L}_{\Phi_{\mathrm{sq}}}$ . + +For the squared hinge loss function $\Phi_{\mathrm{sq}}(u)\coloneqq (1 - u)^2\mathbb{1}_{u\leq 1}$ , for all $h\in \mathcal{H}_{\mathrm{lin}}$ and $(x,x^{\prime})$ such that $\| x - x^{\prime}\| _p > \gamma$ + +$$ +\begin{array}{l} \mathcal {C} _ {\mathrm {L} _ {\Phi \mathrm {s q}}} (h, x, x ^ {\prime}) \\ = \eta (x, x ^ {\prime}) \mathsf {L} _ {\Phi_ {\mathrm {s q}}} \big (h (x ^ {\prime}) - h (x) \big) + (1 - \eta (x, x ^ {\prime})) \mathsf {L} _ {\Phi_ {\mathrm {s q}}} \big (h (x) - h (x ^ {\prime}) \big) \\ = \eta (x, x ^ {\prime}) (1 - h (x ^ {\prime}) + h (x)) ^ {2} \mathbb {1} _ {h (x ^ {\prime}) - h (x) \leq 1} + (1 - \eta (x, x ^ {\prime})) (1 + h (x ^ {\prime}) - h (x)) ^ {2} \mathbb {1} _ {h (x ^ {\prime}) - h (x) \geq - 1}. 
\\ \end{array} +$$ + +Then, + +$$ +\begin{array}{l} \mathcal {C} _ {\mathsf {L} _ {\Phi_ {\mathrm {sq}}}, \mathcal {H} _ {\mathrm {lin}}} ^ {*} (x, x ^ {\prime}) \\ = \inf _ {h \in \mathcal {H} _ {\mathrm {lin}}} \mathcal {C} _ {\mathsf {L} _ {\Phi_ {\mathrm {sq}}}} (h, x, x ^ {\prime}) \\ = \left\{ \begin{array}{l} 4 \eta (x, x ^ {\prime}) (1 - \eta (x, x ^ {\prime})) \\ \quad \text {if} | 2 \eta (x, x ^ {\prime}) - 1 | \leq W \| x - x ^ {\prime} \| _ {p} \\ \max \{\eta (x, x ^ {\prime}), 1 - \eta (x, x ^ {\prime}) \} \big (1 - W \| x - x ^ {\prime} \| _ {p} \big) ^ {2} + \min \{\eta (x, x ^ {\prime}), 1 - \eta (x, x ^ {\prime}) \} \big (1 + W \| x - x ^ {\prime} \| _ {p} \big) ^ {2} \\ \quad \text {if} | 2 \eta (x, x ^ {\prime}) - 1 | > W \| x - x ^ {\prime} \| _ {p}. \end{array} \right. \\ \end{array} +$$ + +The $(\mathsf{L}_{\Phi_{\mathrm{sq}}},\mathcal{H}_{\mathrm{lin}})$-minimizability gap is: + +$$ +\begin{array}{l} \mathcal {M} _ {\mathsf {L} _ {\Phi_ {\mathrm {sq}}}} (\mathcal {H} _ {\mathrm {lin}}) = \mathcal {R} _ {\mathsf {L} _ {\Phi_ {\mathrm {sq}}}} ^ {*} (\mathcal {H} _ {\mathrm {lin}}) - \mathbb {E} _ {(X, X ^ {\prime})} \left[ \mathcal {C} _ {\mathsf {L} _ {\Phi_ {\mathrm {sq}}}, \mathcal {H} _ {\mathrm {lin}}} ^ {*} (x, x ^ {\prime}) \right] \\ = \mathcal {R} _ {\mathsf {L} _ {\Phi_ {\mathrm {sq}}}} ^ {*} (\mathcal {H} _ {\mathrm {lin}}) - \mathbb {E} _ {(X, X ^ {\prime})} \left[ 4 \eta (x, x ^ {\prime}) (1 - \eta (x, x ^ {\prime})) \mathbb {1} _ {| 2 \eta (x, x ^ {\prime}) - 1 | \leq W \| x - x ^ {\prime} \| _ {p}} \right] \tag {46} \\ - \mathbb {E} _ {(X, X ^ {\prime})} \left[ \max \{\eta (x, x ^ {\prime}), 1 - \eta (x, x ^ {\prime}) \} \big (1 - W \| x - x ^ {\prime} \| _ {p} \big) ^ {2} \mathbb {1} _ {| 2 \eta (x, x ^ {\prime}) - 1 | > W \| x - x ^ {\prime} \| _ {p}} \right] \\ - \mathbb {E} _ {(X, X ^ {\prime})} \Big [ \min \{\eta (x, x ^ {\prime}), 1 - \eta (x, x ^ {\prime}) \} \big (1 + W \| x - x ^ {\prime} \| _ {p} \big) ^ {2} \mathbb {1} _ {| 2 \eta (x, x ^ {\prime}) - 1 | > W \| x - x ^ {\prime} \| _ {p}} \Big ]. \\ \end{array} +$$ + +Therefore, $\forall h\in \overline{\mathcal{H}}_{\mathrm{lin}}(x,x')$, + +$$ +\begin{array}{l} \Delta \mathcal {C} _ {\mathsf {L} _ {\Phi_ {\mathrm {sq}}}, \mathcal {H} _ {\mathrm {lin}}} (h, x, x ^ {\prime}) \\ \geq \inf _ {h \in \overline {{\mathcal {H}}} _ {\mathrm {lin}} (x, x ^ {\prime})} \mathcal {C} _ {\mathsf {L} _ {\Phi_ {\mathrm {sq}}}} (h, x, x ^ {\prime}) - \mathcal {C} _ {\mathsf {L} _ {\Phi_ {\mathrm {sq}}}, \mathcal {H} _ {\mathrm {lin}}} ^ {*} (x, x ^ {\prime}) \\ = \eta (x, x ^ {\prime}) + (1 - \eta (x, x ^ {\prime})) - \mathcal {C} _ {\mathsf {L} _ {\Phi_ {\mathrm {sq}}}, \mathcal {H} _ {\mathrm {lin}}} ^ {*} (x, x ^ {\prime}) \\ = \left\{ \begin{array}{l} 1 - 4 \eta (x, x ^ {\prime}) (1 - \eta (x, x ^ {\prime})) \\ \quad \text {if} | 2 \eta (x, x ^ {\prime}) - 1 | \leq W \| x - x ^ {\prime} \| _ {p} \\ 1 - \max \{\eta (x, x ^ {\prime}), 1 - \eta (x, x ^ {\prime}) \} \big (1 - W \| x - x ^ {\prime} \| _ {p} \big) ^ {2} - \min \{\eta (x, x ^ {\prime}), 1 - \eta (x, x ^ {\prime}) \} \big (1 + W \| x - x ^ {\prime} \| _ {p} \big) ^ {2} \\ \quad \text {if} | 2 \eta (x, x ^ {\prime}) - 1 | > W \| x - x ^ {\prime} \| _ {p} \end{array} \right. \\ \geq \left\{ \begin{array}{l l} 1 - 4 \eta (x, x ^ {\prime}) (1 - \eta (x, x ^ {\prime})) & | 2 \eta (x, x ^ {\prime}) - 1 | \leq W \gamma \\ 1 - \max \{\eta (x, x ^ {\prime}), 1 - \eta (x, x ^ {\prime}) \} (1 - W \gamma) ^ {2} - \min \{\eta (x, x ^ {\prime}), 1 - \eta (x, x ^ {\prime}) \} (1 + W \gamma) ^ {2} & | 2 \eta (x, x ^ {\prime}) - 1 | > W \gamma \end{array} \right.
\\ = \Psi_ {\mathrm {sq}} \left(\left| 2 \eta (x, x ^ {\prime}) - 1 \right|\right), \\ \end{array} +$$ + +where $\Psi_{\mathrm{sq}}$ is the increasing and convex function on $[0,1]$ defined by + +$$ +\forall t \in [ 0, 1 ], \quad \Psi_ {\mathrm {sq}} (t) = \left\{ \begin{array}{l l} t ^ {2} & t \leq W \gamma \\ 2 W \gamma t - (W \gamma) ^ {2} & t > W \gamma \end{array} \right. +$$ + +which implies that for any $h \in \mathcal{H}_{\mathrm{lin}}$ and $(x, x')$ such that $\| x - x' \|_p > \gamma$, + +$$ +\Delta \mathcal {C} _ {\mathsf {L} _ {\Phi_ {\mathrm {sq}}}, \mathcal {H} _ {\mathrm {lin}}} (h, x, x ^ {\prime}) \geq \Psi_ {\mathrm {sq}} \Big (\Delta \mathcal {C} _ {\mathsf {L} _ {0 - 1} ^ {\mathrm {abs}}, \mathcal {H} _ {\mathrm {lin}}} (h, x, x ^ {\prime}) \Big). +$$ + +Thus, by Theorem C.1 or Theorem C.2, setting $\epsilon = 0$ yields the $\mathcal{H}_{\mathrm{lin}}$-consistency bound for $\mathsf{L}_{\Phi_{\mathrm{sq}}}$, valid for all $h \in \mathcal{H}_{\mathrm{lin}}$: + +$$ +\mathcal {R} _ {\mathsf {L} _ {0 - 1} ^ {\mathrm {abs}}} (h) - \mathcal {R} _ {\mathsf {L} _ {0 - 1} ^ {\mathrm {abs}}} ^ {*} (\mathcal {H} _ {\mathrm {lin}}) \leq \Gamma_ {\mathrm {sq}} \left(\mathcal {R} _ {\mathsf {L} _ {\Phi_ {\mathrm {sq}}}} (h) - \mathcal {R} _ {\mathsf {L} _ {\Phi_ {\mathrm {sq}}}} ^ {*} (\mathcal {H} _ {\mathrm {lin}}) + \mathcal {M} _ {\mathsf {L} _ {\Phi_ {\mathrm {sq}}}} (\mathcal {H} _ {\mathrm {lin}})\right) - \mathcal {M} _ {\mathsf {L} _ {0 - 1} ^ {\mathrm {abs}}} (\mathcal {H} _ {\mathrm {lin}}) \tag {47} +$$ + +where $\Gamma_{\mathrm{sq}}(t) = \Psi_{\mathrm{sq}}^{-1}(t) = \left\{ \begin{array}{ll}\sqrt{t}, & t\leq (W\gamma)^2\\ \frac{t}{2W\gamma} +\frac{W\gamma}{2}, & t > (W\gamma)^2 \end{array} \right. = \max \Bigl \{\sqrt{t},\frac{t}{2W\gamma} +\frac{W\gamma}{2}\Bigr \}.$ + +# K.1.6.
DERIVATION FOR $\mathsf{L}_{\Phi_{\mathrm{sig}}}$ + +For the sigmoid loss function $\Phi_{\mathrm{sig}}(u)\coloneqq 1 - \tanh (ku)$, $k > 0$, for all $h\in \mathcal{H}_{\mathrm{lin}}$ and $(x,x^{\prime})$ such that $\| x - x^{\prime}\| _p > \gamma$, + +$$ +\begin{array}{l} \mathcal {C} _ {\mathsf {L} _ {\Phi_ {\mathrm {sig}}}} (h, x, x ^ {\prime}) \\ = \eta (x, x ^ {\prime}) \mathsf {L} _ {\Phi_ {\mathrm {sig}}} (h (x ^ {\prime}) - h (x)) + (1 - \eta (x, x ^ {\prime})) \mathsf {L} _ {\Phi_ {\mathrm {sig}}} (h (x) - h (x ^ {\prime})) \\ = \eta (x, x ^ {\prime}) (1 - \tanh (k [ h (x ^ {\prime}) - h (x) ])) + (1 - \eta (x, x ^ {\prime})) (1 + \tanh (k [ h (x ^ {\prime}) - h (x) ])). \\ \end{array} +$$ + +Then, + +$$ +\mathcal {C} _ {\mathsf {L} _ {\Phi_ {\mathrm {sig}}}, \mathcal {H} _ {\mathrm {lin}}} ^ {*} (x, x ^ {\prime}) = \inf _ {h \in \mathcal {H} _ {\mathrm {lin}}} \mathcal {C} _ {\mathsf {L} _ {\Phi_ {\mathrm {sig}}}} (h, x, x ^ {\prime}) = 1 - \left| 1 - 2 \eta (x, x ^ {\prime}) \right| \tanh \left(k W \| x - x ^ {\prime} \| _ {p}\right). +$$ + +The $(\mathsf{L}_{\Phi_{\mathrm{sig}}},\mathcal{H}_{\mathrm{lin}})$-minimizability gap is: + +$$ +\begin{array}{l} \mathcal {M} _ {\mathsf {L} _ {\Phi_ {\mathrm {sig}}}} (\mathcal {H} _ {\mathrm {lin}}) = \mathcal {R} _ {\mathsf {L} _ {\Phi_ {\mathrm {sig}}}} ^ {*} (\mathcal {H} _ {\mathrm {lin}}) - \mathbb {E} _ {(X, X ^ {\prime})} \left[ \mathcal {C} _ {\mathsf {L} _ {\Phi_ {\mathrm {sig}}}, \mathcal {H} _ {\mathrm {lin}}} ^ {*} (x, x ^ {\prime}) \right] \tag {48} \\ = \mathcal {R} _ {\mathsf {L} _ {\Phi_ {\mathrm {sig}}}} ^ {*} (\mathcal {H} _ {\mathrm {lin}}) - \mathbb {E} _ {(X, X ^ {\prime})} \Big [ 1 - | 1 - 2 \eta (x, x ^ {\prime}) | \tanh \big (k W \| x - x ^ {\prime} \| _ {p} \big) \Big ].
\\ \end{array} +$$ + +Therefore, $\forall h\in \overline{\mathcal{H}}_{\mathrm{lin}}(x,x')$ + +$$ +\begin{array}{l} \Delta \mathcal {C} _ {\mathrm {L} _ {\Phi_ {\mathrm {s i g}}}, \mathcal {H} _ {\mathrm {l i n}}} (h, x, x ^ {\prime}) \geq \inf _ {h \in \overline {{\mathcal {H}}} _ {\mathrm {l i n}} (x, x ^ {\prime})} \mathcal {C} _ {\mathrm {L} _ {\Phi_ {\mathrm {s i g}}}} (h, x, x ^ {\prime}) - \mathcal {C} _ {\mathrm {L} _ {\Phi_ {\mathrm {s i g}}}, \mathcal {H} _ {\mathrm {l i n}}} ^ {*} (x, x ^ {\prime}) \\ = 1 - \left| 1 - 2 \eta (x, x ^ {\prime}) \right| \tanh (0) - \mathcal {C} _ {\mathsf {L} _ {\Phi_ {\mathrm {s i g}}}, \mathcal {H} _ {\mathrm {l i n}}} ^ {*} (x, x ^ {\prime}) \\ = \left| 1 - 2 \eta (x, x ^ {\prime}) \right| \tanh \left(k W \| x - x ^ {\prime} \| _ {p}\right) \\ \geq \left| 1 - 2 \eta \left(x, x ^ {\prime}\right) \right| \tanh (k W \gamma) \\ \end{array} +$$ + +which implies that for any $h \in \mathcal{H}_{\mathrm{lin}}$ and $(x, x')$ such that $\| x - x' \|_p > \gamma$ , + +$$ +\Delta \mathcal {C} _ {\mathrm {L} _ {\Phi_ {\mathrm {s i g}}}, \mathcal {H} _ {\mathrm {l i n}}} (h, x, x ^ {\prime}) \geq \tanh (k W \gamma) \Delta \mathcal {C} _ {\mathrm {L} _ {0 - 1} ^ {\mathrm {a b s}}, \mathcal {H} _ {\mathrm {l i n}}} (h, x, x ^ {\prime}). 
+$$ + +Thus, by Theorem C.1 or Theorem C.2, setting $\epsilon = 0$ yields the $\mathcal{H}_{\mathrm{lin}}$-consistency bound for $\mathsf{L}_{\Phi_{\mathrm{sig}}}$, valid for all $h \in \mathcal{H}_{\mathrm{lin}}$: + +$$ +\mathcal {R} _ {\mathsf {L} _ {0 - 1} ^ {\mathrm {abs}}} (h) - \mathcal {R} _ {\mathsf {L} _ {0 - 1} ^ {\mathrm {abs}}} ^ {*} (\mathcal {H} _ {\mathrm {lin}}) \leq \frac {\mathcal {R} _ {\mathsf {L} _ {\Phi_ {\mathrm {sig}}}} (h) - \mathcal {R} _ {\mathsf {L} _ {\Phi_ {\mathrm {sig}}}} ^ {*} (\mathcal {H} _ {\mathrm {lin}}) + \mathcal {M} _ {\mathsf {L} _ {\Phi_ {\mathrm {sig}}}} (\mathcal {H} _ {\mathrm {lin}})}{\tanh (k W \gamma)} - \mathcal {M} _ {\mathsf {L} _ {0 - 1} ^ {\mathrm {abs}}} (\mathcal {H} _ {\mathrm {lin}}). \tag {49} +$$ + +# K.2. One-Hidden-Layer ReLU Neural Networks + +Since $\mathcal{H}_{\mathrm{NN}}$ satisfies the condition of Lemma K.1, the $(\mathsf{L}_{0 - 1}^{\mathrm{abs}},\mathcal{H}_{\mathrm{NN}})$-minimizability gap can be expressed as follows: + +$$ +\mathcal {M} _ {\mathsf {L} _ {0 - 1} ^ {\mathrm {abs}}} (\mathcal {H} _ {\mathrm {NN}}) = \mathcal {R} _ {\mathsf {L} _ {0 - 1} ^ {\mathrm {abs}}} ^ {*} (\mathcal {H} _ {\mathrm {NN}}) - \mathbb {E} _ {(X, X ^ {\prime})} \left[ \min \{\eta (x, x ^ {\prime}), 1 - \eta (x, x ^ {\prime}) \} \mathbb {1} _ {\| x - x ^ {\prime} \| _ {p} > \gamma} + c \mathbb {1} _ {\| x - x ^ {\prime} \| _ {p} \leq \gamma} \right]. \tag {50} +$$ + +By the definition of $\mathcal{H}_{\mathrm{NN}}$, for any $(x,x^{\prime})\in \mathcal{X}\times \mathcal{X}$, $\left\{h(x^{\prime}) - h(x)\mid h\in \mathcal{H}_{\mathrm{NN}}\right\} = \left[-\Lambda W\| x - x^{\prime}\|_{p},\Lambda W\| x - x^{\prime}\|_{p}\right]$. + +# K.2.1.
DERIVATION FOR $\mathsf{L}_{\Phi_{\mathrm{hinge}}}$ + +For the hinge loss function $\Phi_{\mathrm{hinge}}(u)\coloneqq \max \{0,1 - u\}$ , for all $h\in \mathcal{H}_{\mathrm{NN}}$ and $(x,x^{\prime})$ such that $\| x - x^{\prime}\| _p > \gamma$ + +$$ +\begin{array}{l} \mathcal {C} _ {\mathsf {L} _ {\Phi_ {\mathrm {h i n g e}}}} \left(h, x, x ^ {\prime}\right) \\ = \eta (x, x ^ {\prime}) \mathrm {L} _ {\Phi_ {\text {h i n g e}}} \left(h \left(x ^ {\prime}\right) - h (x)\right) + \left(1 - \eta (x, x ^ {\prime})\right) \mathrm {L} _ {\Phi_ {\text {h i n g e}}} \left(h (x) - h \left(x ^ {\prime}\right)\right) \\ = \eta \left(x, x ^ {\prime}\right) \max \left\{0, 1 - h \left(x ^ {\prime}\right) + h (x) \right\} + \left(1 - \eta \left(x, x ^ {\prime}\right)\right) \max \left\{0, 1 + h \left(x ^ {\prime}\right) - h (x) \right\}. \\ \end{array} +$$ + +Then, + +$$ +\mathcal {C} _ {\mathsf {L} _ {\Phi_ {\mathrm {h i n g e}}}, \mathcal {H} _ {\mathrm {N N}}} ^ {*} (x, x ^ {\prime}) = \inf _ {h \in \mathcal {H} _ {\mathrm {N N}}} \mathcal {C} _ {\mathsf {L} _ {\Phi_ {\mathrm {h i n g e}}}} (h, x, x ^ {\prime}) = 1 - | 2 \eta (x, x ^ {\prime}) - 1 | \min \bigl \{\Lambda W \| x - x ^ {\prime} \| _ {p}, 1 \bigr \}. 
+$$ + +The $\left(\mathsf{L}_{\Phi_{\mathrm{hinge}}},\mathcal{H}_{\mathrm{NN}}\right)$ -minimizability gap is + +$$ +\begin{array}{l} \mathcal {M} _ {\mathrm {L} _ {\Phi_ {\text {h i n g e}}}} \left(\mathcal {H} _ {\mathrm {N N}}\right) = \mathcal {R} _ {\mathrm {L} _ {\Phi_ {\text {h i n g e}}}} ^ {*} \left(\mathcal {H} _ {\mathrm {N N}}\right) - \mathbb {E} _ {(X, X ^ {\prime})} \left[ \mathcal {C} _ {\mathrm {L} _ {\Phi_ {\text {h i n g e}}}, \mathcal {H} _ {\mathrm {N N}}} ^ {*} (x, x ^ {\prime}) \right] \tag {51} \\ = \mathcal {R} _ {\mathsf {L} _ {\Phi_ {\mathrm {h i n g e}}}} ^ {*} \left(\mathcal {H} _ {\mathrm {N N}}\right) - \mathbb {E} _ {(X, X ^ {\prime})} \big [ 1 - | 2 \eta (x, x ^ {\prime}) - 1 | \min \big \{\Lambda W \| x - x ^ {\prime} \| _ {p}, 1 \big \} \big ]. \\ \end{array} +$$ + +Therefore, $\forall h\in \overline{\mathcal{H}}_{\mathrm{NN}}(x,x')$ + +$$ +\begin{array}{l} \Delta \mathcal {C} _ {\mathrm {L} _ {\Phi_ {\mathrm {h i n g e}}}, \mathcal {H} _ {\mathrm {N N}}} (h, x, x ^ {\prime}) \\ \geq \inf _ {h \in \overline {{\mathcal {H}}} _ {\mathrm {N N}} (x, x ^ {\prime})} \mathcal {C} _ {\mathsf {L} _ {\Phi_ {\mathrm {h i n g e}}}} \left(h, x, x ^ {\prime}\right) - \mathcal {C} _ {\mathsf {L} _ {\Phi_ {\mathrm {h i n g e}}}, \mathcal {H} _ {\mathrm {N N}}} ^ {*} \left(x, x ^ {\prime}\right) \\ = \eta (x, x ^ {\prime}) \max \{0, 1 - 0 \} + (1 - \eta (x, x ^ {\prime})) \max \{0, 1 + 0 \} - \mathcal {C} _ {\mathsf {L} _ {\Phi_ {\mathrm {h i n g e}}}, \mathcal {H} _ {\mathrm {N N}}} ^ {*} (x, x ^ {\prime}) \\ = 1 - \left[ 1 - \left| 2 \eta \left(x, x ^ {\prime}\right) - 1 \right| \min \left\{\Lambda W \| x - x ^ {\prime} \| _ {p}, 1 \right\} \right] \\ = \left| 2 \eta \left(x, x ^ {\prime}\right) - 1 \right| \min \left\{\Lambda W \| x - x ^ {\prime} \| _ {p}, 1 \right\} \\ \geq \left| 2 \eta \left(x, x ^ {\prime}\right) - 1 \right| \min \left\{\Lambda W \gamma , 1 \right\} \\ \end{array} +$$ + +which implies that for any $h \in 
\mathcal{H}_{\mathrm{NN}}$ and $(x, x')$ such that $\| x - x' \|_p > \gamma$, + +$$ +\Delta \mathcal {C} _ {\mathsf {L} _ {\Phi_ {\mathrm {hinge}}}, \mathcal {H} _ {\mathrm {NN}}} (h, x, x ^ {\prime}) \geq \min \{\Lambda W \gamma, 1 \} \langle | 2 \eta (x, x ^ {\prime}) - 1 | \rangle_ {0} \mathbb {1} _ {h \in \overline {{\mathcal {H}}} _ {\mathrm {NN}} (x, x ^ {\prime})} = \min \{\Lambda W \gamma, 1 \} \Delta \mathcal {C} _ {\mathsf {L} _ {0 - 1} ^ {\mathrm {abs}}, \mathcal {H} _ {\mathrm {NN}}} (h, x, x ^ {\prime}). +$$ + +Thus, by Theorem C.1 or Theorem C.2, setting $\epsilon = 0$ yields the $\mathcal{H}_{\mathrm{NN}}$-consistency bound for $\mathsf{L}_{\Phi_{\mathrm{hinge}}}$, valid for all $h \in \mathcal{H}_{\mathrm{NN}}$: + +$$ +\mathcal {R} _ {\mathsf {L} _ {0 - 1} ^ {\mathrm {abs}}} (h) - \mathcal {R} _ {\mathsf {L} _ {0 - 1} ^ {\mathrm {abs}}} ^ {*} (\mathcal {H} _ {\mathrm {NN}}) \leq \frac {\mathcal {R} _ {\mathsf {L} _ {\Phi_ {\mathrm {hinge}}}} (h) - \mathcal {R} _ {\mathsf {L} _ {\Phi_ {\mathrm {hinge}}}} ^ {*} (\mathcal {H} _ {\mathrm {NN}}) + \mathcal {M} _ {\mathsf {L} _ {\Phi_ {\mathrm {hinge}}}} (\mathcal {H} _ {\mathrm {NN}})}{\min \left\{\Lambda W \gamma, 1 \right\}} - \mathcal {M} _ {\mathsf {L} _ {0 - 1} ^ {\mathrm {abs}}} (\mathcal {H} _ {\mathrm {NN}}). \tag {52} +$$ + +# K.2.2.
DERIVATION FOR $\mathsf{L}_{\Phi_{\rho}}$ + +For the $\rho$ -margin loss function $\Phi_{\rho}(u) := \min \left\{1, \max \left\{0, 1 - \frac{u}{\rho}\right\} \right\}$ , $\rho > 0$ , for all $h \in \mathcal{H}_{\mathrm{NN}}$ and $(x, x')$ such that $\| x - x' \|_p > \gamma$ , + +$$ +\begin{array}{l} \mathcal {C} _ {\mathrm {L} _ {\Phi_ {\rho}}} (h, x, x ^ {\prime}) \\ = \eta (x, x ^ {\prime}) \mathrm {L} _ {\Phi_ {\rho}} \left(h \left(x ^ {\prime}\right) - h (x)\right) + \left(1 - \eta \left(x, x ^ {\prime}\right)\right) \mathrm {L} _ {\Phi_ {\rho}} \left(h (x) - h \left(x ^ {\prime}\right)\right) \\ = \eta (x, x ^ {\prime}) \min \left\{1, \max \left\{0, 1 - \frac {h (x ^ {\prime}) - h (x)}{\rho} \right\} \right\} + (1 - \eta (x, x ^ {\prime})) \min \left\{1, \max \left\{0, 1 + \frac {h (x ^ {\prime}) - h (x)}{\rho} \right\} \right\}. \\ \end{array} +$$ + +Then, + +$$ +\begin{array}{l} \mathcal {C} _ {\mathrm {L} _ {\Phi_ {\rho}}, \mathcal {H} _ {\mathrm {N N}}} ^ {*} (x, x ^ {\prime}) = \inf _ {h \in \mathcal {H} _ {\mathrm {N N}}} \mathcal {C} _ {\mathrm {L} _ {\Phi_ {\rho}}} (h, x, x ^ {\prime}) \\ = \min \left\{\eta \left(x, x ^ {\prime}\right), 1 - \eta \left(x, x ^ {\prime}\right) \right\} + \max \left\{\eta \left(x, x ^ {\prime}\right), 1 - \eta \left(x, x ^ {\prime}\right) \right\} \left(1 - \frac {\min \left\{\Lambda W \| x - x ^ {\prime} \| _ {p} , \rho \right\}}{\rho}\right) \\ \end{array} +$$ + +The $(\mathsf{L}_{\Phi_{\rho}},\mathcal{H}_{\mathrm{NN}})$ -minimizability gap is + +$$ +\begin{array}{l} \mathcal {M} _ {\mathrm {L} _ {\Phi_ {\rho}}} (\mathcal {H} _ {\mathrm {N N}}) \\ = \mathcal {R} _ {\mathrm {L} _ {\Phi_ {\rho}}} ^ {*} \left(\mathcal {H} _ {\mathrm {N N}}\right) - \mathbb {E} _ {(X, X ^ {\prime})} \left[ \mathcal {C} _ {\mathrm {L} _ {\Phi_ {\rho}}, \mathcal {H} _ {\mathrm {N N}}} ^ {*} (x, x ^ {\prime}) \right]. 
\\ = \mathcal {R} _ {\mathsf {L} _ {\Phi_ {\rho}}} ^ {*} (\mathcal {H} _ {\mathrm {NN}}) - \mathbb {E} _ {(X, X ^ {\prime})} \left[ \min \{\eta (x, x ^ {\prime}), 1 - \eta (x, x ^ {\prime}) \} + \max \{\eta (x, x ^ {\prime}), 1 - \eta (x, x ^ {\prime}) \} \left(1 - \frac {\min \{\Lambda W \| x - x ^ {\prime} \| _ {p}, \rho \}}{\rho}\right) \right]. \tag {53} \\ \end{array} +$$ + +Therefore, $\forall h\in \overline{\mathcal{H}}_{\mathrm{NN}}(x,x')$, + +$$ +\begin{array}{l} \Delta \mathcal {C} _ {\mathsf {L} _ {\Phi_ {\rho}}, \mathcal {H} _ {\mathrm {NN}}} (h, x, x ^ {\prime}) \\ \geq \inf _ {h \in \overline {{\mathcal {H}}} _ {\mathrm {NN}} (x, x ^ {\prime})} \mathcal {C} _ {\mathsf {L} _ {\Phi_ {\rho}}} (h, x, x ^ {\prime}) - \mathcal {C} _ {\mathsf {L} _ {\Phi_ {\rho}}, \mathcal {H} _ {\mathrm {NN}}} ^ {*} (x, x ^ {\prime}) \\ = \max \{\eta (x, x ^ {\prime}), 1 - \eta (x, x ^ {\prime}) \} + \min \{\eta (x, x ^ {\prime}), 1 - \eta (x, x ^ {\prime}) \} \left(1 - \frac {\min \{\Lambda W \| x - x ^ {\prime} \| _ {p}, \rho \}}{\rho}\right) - \mathcal {C} _ {\mathsf {L} _ {\Phi_ {\rho}}, \mathcal {H} _ {\mathrm {NN}}} ^ {*} (x, x ^ {\prime}) \\ = \left| 2 \eta (x, x ^ {\prime}) - 1 \right| \frac {\min \{\Lambda W \| x - x ^ {\prime} \| _ {p}, \rho \}}{\rho} \\ \geq | 2 \eta (x, x ^ {\prime}) - 1 | \frac {\min \{\Lambda W \gamma, \rho \}}{\rho} \\ \end{array} +$$ + +which implies that for any $h \in \mathcal{H}_{\mathrm{NN}}$ and $(x, x')$ such that $\| x - x' \|_p > \gamma$, + +$$ +\Delta \mathcal {C} _ {\mathsf {L} _ {\Phi_ {\rho}}, \mathcal {H} _ {\mathrm {NN}}} (h, x, x ^ {\prime}) \geq \frac {\min \{\Lambda W \gamma, \rho \}}{\rho} \langle | 2 \eta (x, x ^ {\prime}) - 1 | \rangle_ {0} \mathbb {1} _ {h \in \overline {{\mathcal {H}}} _ {\mathrm {NN}} (x, 
x ^ {\prime})} = \Delta \mathcal {C} _ {\mathrm {L} _ {0 - 1} ^ {\mathrm {a b s}}, \mathcal {H} _ {\mathrm {N N}}} (h, x, x ^ {\prime}). +$$ + +Thus, by Theorem C.1 or Theorem C.2, setting $\epsilon = 0$ yields the $\mathcal{H}_{\mathrm{NN}}$ -consistency bound for $\mathsf{L}_{\Phi_{\rho}}$ , valid for all $h \in \mathcal{H}_{\mathrm{NN}}$ : + +$$ +\mathcal {R} _ {\mathrm {L} _ {0 - 1} ^ {\mathrm {a b s}}} (h) - \mathcal {R} _ {\mathrm {L} _ {0 - 1} ^ {\mathrm {a b s}}} ^ {*} (\mathcal {H} _ {\mathrm {N N}}) \leq \frac {\rho \left(\mathcal {R} _ {\mathrm {L} _ {\Phi_ {\rho}}} (h) - \mathcal {R} _ {\mathrm {L} _ {\Phi_ {\rho}}} ^ {*} \left(\mathcal {H} _ {\mathrm {N N}}\right) + \mathcal {M} _ {\mathrm {L} _ {\Phi_ {\rho}}} \left(\mathcal {H} _ {\mathrm {N N}}\right)\right)}{\min \left\{\Lambda W \gamma , \rho \right\}} - \mathcal {M} _ {\mathrm {L} _ {0 - 1} ^ {\mathrm {a b s}}} \left(\mathcal {H} _ {\mathrm {N N}}\right). \tag {54} +$$ + +# K.2.3. DERIVATION FOR $\mathsf{L}_{\Phi_{\mathrm{exp}}}$ + +For the exponential loss function $\Phi_{\exp}(u) := e^{-u}$ , for all $h \in \mathcal{H}_{\mathrm{NN}}$ and $(x, x')$ such that $\|x - x'\|_p > \gamma$ , + +$$ +\begin{array}{l} \mathcal {C} _ {\mathrm {L} _ {\Phi_ {\exp}}} (h, x, x ^ {\prime}) = \eta (x, x ^ {\prime}) \mathrm {L} _ {\Phi_ {\exp}} (h (x ^ {\prime}) - h (x)) + (1 - \eta (x, x ^ {\prime})) \mathrm {L} _ {\Phi_ {\exp}} (h (x) - h (x ^ {\prime})) \\ = \eta (x, x ^ {\prime}) e ^ {- h \left(x ^ {\prime}\right) + h (x)} + \left(1 - \eta (x, x ^ {\prime})\right) e ^ {h \left(x ^ {\prime}\right) - h (x)}. 
\\ \end{array}
$$

Then,

$$
\begin{array}{l} \mathcal{C}_{\mathrm{L}_{\Phi_{\exp}}, \mathcal{H}_{\mathrm{NN}}}^{*}(x, x') \\ = \inf_{h \in \mathcal{H}_{\mathrm{NN}}} \mathcal{C}_{\mathrm{L}_{\Phi_{\exp}}}(h, x, x') \\ = \left\{ \begin{array}{ll} 2\sqrt{\eta(x, x')(1 - \eta(x, x'))} & \text{if } \frac{1}{2}\left|\log\frac{\eta(x, x')}{1 - \eta(x, x')}\right| \leq \Lambda W \|x - x'\|_{p} \\ \max\{\eta(x, x'), 1 - \eta(x, x')\}\, e^{-\Lambda W \|x - x'\|_{p}} + \min\{\eta(x, x'), 1 - \eta(x, x')\}\, e^{\Lambda W \|x - x'\|_{p}} & \text{if } \frac{1}{2}\left|\log\frac{\eta(x, x')}{1 - \eta(x, x')}\right| > \Lambda W \|x - x'\|_{p}. \end{array} \right. \\ \end{array}
$$

The $(\Phi_{\mathrm{exp}},\mathcal{H}_{\mathrm{NN}})$-minimizability gap is:

$$
\begin{array}{l} \mathcal{M}_{\mathrm{L}_{\Phi_{\exp}}}(\mathcal{H}_{\mathrm{NN}}) = \mathcal{R}_{\mathrm{L}_{\Phi_{\exp}}}^{*}(\mathcal{H}_{\mathrm{NN}}) - \mathbb{E}_{(X, X')}\left[\mathcal{C}_{\mathrm{L}_{\Phi_{\exp}}, \mathcal{H}_{\mathrm{NN}}}^{*}(x, x')\right] \\ = \mathcal{R}_{\mathrm{L}_{\Phi_{\exp}}}^{*}(\mathcal{H}_{\mathrm{NN}}) - \mathbb{E}_{(X, X')}\left[2\sqrt{\eta(x, x')(1 - \eta(x, x'))}\, \mathbb{1}_{\frac{1}{2}\left|\log\frac{\eta(x, x')}{1 - \eta(x, x')}\right| \leq \Lambda W \|x - x'\|_{p}}\right] \\ - \mathbb{E}_{(X, X')}\left[\max\left\{\eta(x, x'), 1 - \eta(x, x')\right\} e^{-\Lambda W \|x - x'\|_{p}}\, \mathbb{1}_{\frac{1}{2}\left|\log\frac{\eta(x, x')}{1 - \eta(x, x')}\right| > \Lambda W \|x - x'\|_{p}}\right] \tag{55} \\ - \mathbb{E}_{(X, X')}\left[\min\left\{\eta(x, x'), 1 - \eta(x, x')\right\} e^{\Lambda W \|x - x'\|_{p}}\, \mathbb{1}_{\frac{1}{2}\left|\log\frac{\eta(x, x')}{1 - \eta(x, x')}\right| > \Lambda W \|x - x'\|_{p}}\right]. \\ \end{array}
$$

Therefore, $\forall h\in \overline{\mathcal{H}}_{\mathrm{NN}}(x,x')$,

$$
\begin{array}{l} \Delta\mathcal{C}_{\mathrm{L}_{\Phi_{\exp}}, \mathcal{H}_{\mathrm{NN}}}(h, x, x') \\ \geq \inf_{h \in \overline{\mathcal{H}}_{\mathrm{NN}}(x, x')} \mathcal{C}_{\mathrm{L}_{\Phi_{\exp}}}(h, x, x') - \mathcal{C}_{\mathrm{L}_{\Phi_{\exp}}, \mathcal{H}_{\mathrm{NN}}}^{*}(x, x') \\ = \eta(x, x') e^{-0} + (1 - \eta(x, x')) e^{0} - \mathcal{C}_{\mathrm{L}_{\Phi_{\exp}}, \mathcal{H}_{\mathrm{NN}}}^{*}(x, x') \\ = \left\{ \begin{array}{ll} 1 - 2\sqrt{\eta(x, x')(1 - \eta(x, x'))} & \text{if } \frac{1}{2}\left|\log\frac{\eta(x, x')}{1 - \eta(x, x')}\right| \leq \Lambda W \|x - x'\|_{p} \\ 1 - \max\{\eta(x, x'), 1 - \eta(x, x')\}\, e^{-\Lambda W \|x - x'\|_{p}} - \min\{\eta(x, x'), 1 - \eta(x, x')\}\, e^{\Lambda W \|x - x'\|_{p}} & \text{if } \frac{1}{2}\left|\log\frac{\eta(x, x')}{1 - \eta(x, x')}\right| > \Lambda W \|x - x'\|_{p} \end{array} \right. \\ \geq \left\{ \begin{array}{ll} 1 - 2\sqrt{\eta(x, x')(1 - \eta(x, x'))} & \text{if } \frac{1}{2}\left|\log\frac{\eta(x, x')}{1 - \eta(x, x')}\right| \leq \Lambda W \gamma \\ 1 - \max\{\eta(x, x'), 1 - \eta(x, x')\}\, e^{-\Lambda W \gamma} - \min\{\eta(x, x'), 1 - \eta(x, x')\}\, e^{\Lambda W \gamma} & \text{if } \frac{1}{2}\left|\log\frac{\eta(x, x')}{1 - \eta(x, x')}\right| > \Lambda W \gamma \end{array} \right. \\ = \Psi_{\exp}\left(\left|2\eta(x, x') - 1\right|\right), \\ \end{array}
$$

where $\Psi_{\mathrm{exp}}$ is the increasing and convex function on $[0,1]$ defined by

$$
\forall t \in [0, 1], \quad \Psi_{\exp}(t) = \left\{ \begin{array}{ll} 1 - \sqrt{1 - t^{2}}, & t \leq \frac{e^{2\Lambda W \gamma} - 1}{e^{2\Lambda W \gamma} + 1} \\ 1 - \frac{t + 1}{2} e^{-\Lambda W \gamma} - \frac{1 - t}{2} e^{\Lambda W \gamma}, & t > \frac{e^{2\Lambda W \gamma} - 1}{e^{2\Lambda W \gamma} + 1} \end{array} \right.
$$

which implies that for any $h \in \mathcal{H}_{\mathrm{NN}}$ and $(x, x')$ such that $\| x - x' \|_p > \gamma$,

$$
\Delta\mathcal{C}_{\mathrm{L}_{\Phi_{\exp}}, \mathcal{H}_{\mathrm{NN}}}(h, x, x') \geq \Psi_{\exp}\Big(\Delta\mathcal{C}_{\mathrm{L}_{0-1}^{\mathrm{abs}}, \mathcal{H}_{\mathrm{NN}}}(h, x, x')\Big).
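$$

As a numerical sanity check on this step (a minimal sketch; the helper names are ours and `B` stands for the product $\Lambda W\gamma$), one can brute-force the infimum of the conditional exponential risk over admissible margins and compare the resulting gap against $\Psi_{\exp}$:

```python
import math

def cond_exp_risk(eta, m):
    # conditional exponential risk at margin m = h(x') - h(x)
    return eta * math.exp(-m) + (1 - eta) * math.exp(m)

def opt_cond_risk(eta, B, steps=20001):
    # brute-force infimum of the conditional risk over margins in [-B, B]
    return min(cond_exp_risk(eta, -B + 2 * B * j / (steps - 1)) for j in range(steps))

def psi_exp(t, B):
    # the piecewise function Psi_exp; its threshold equals tanh(B)
    thr = (math.exp(2 * B) - 1) / (math.exp(2 * B) + 1)
    if t <= thr:
        return 1 - math.sqrt(1 - t * t)
    return 1 - (t + 1) / 2 * math.exp(-B) - (1 - t) / 2 * math.exp(B)

B = 0.8
for eta in [0.55, 0.7, 0.9, 0.99]:
    # a hypothesis with h(x) = h(x') has conditional risk exactly 1
    gap = 1.0 - opt_cond_risk(eta, B)
    assert abs(gap - psi_exp(abs(2 * eta - 1), B)) < 1e-6
```

The grid check confirms the identity

$$
\left. \inf_{h \in \overline{\mathcal{H}}_{\mathrm{NN}}(x, x')} \mathcal{C}_{\mathrm{L}_{\Phi_{\exp}}}(h, x, x') - \mathcal{C}_{\mathrm{L}_{\Phi_{\exp}}, \mathcal{H}_{\mathrm{NN}}}^{*}(x, x') \right|_{\|x - x'\|_{p} = \gamma} = \Psi_{\exp}\left(\left|2\eta(x, x') - 1\right|\right)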
$$

To simplify the expression, we use the facts that

$$
1 - \sqrt{1 - t^{2}} \geq \frac{t^{2}}{2},
$$

$$
1 - \frac{t + 1}{2} e^{-\Lambda W \gamma} - \frac{1 - t}{2} e^{\Lambda W \gamma} = 1 - \frac{e^{\Lambda W \gamma}}{2} - \frac{e^{-\Lambda W \gamma}}{2} + \frac{e^{\Lambda W \gamma} - e^{-\Lambda W \gamma}}{2} t,
$$

so $\Psi_{\mathrm{exp}}$ can be lower bounded by

$$
\widetilde{\Psi}_{\mathrm{exp}}(t) = \left\{ \begin{array}{ll} \frac{t^{2}}{2}, & t \leq \frac{e^{2\Lambda W \gamma} - 1}{e^{2\Lambda W \gamma} + 1} \\ \frac{1}{2}\left(\frac{e^{2\Lambda W \gamma} - 1}{e^{2\Lambda W \gamma} + 1}\right) t, & t > \frac{e^{2\Lambda W \gamma} - 1}{e^{2\Lambda W \gamma} + 1}. \end{array} \right.
$$

Thus, we adopt an upper bound of $\Psi_{\exp}^{-1}$ as follows:

$$
\begin{array}{l} \Gamma_{\exp}(t) = \widetilde{\Psi}_{\exp}^{-1}(t) = \left\{ \begin{array}{ll} \sqrt{2t}, & t \leq \frac{1}{2}\Big(\frac{e^{2\Lambda W \gamma} - 1}{e^{2\Lambda W \gamma} + 1}\Big)^{2} \\ 2\Big(\frac{e^{2\Lambda W \gamma} + 1}{e^{2\Lambda W \gamma} - 1}\Big) t, & t > \frac{1}{2}\Big(\frac{e^{2\Lambda W \gamma} - 1}{e^{2\Lambda W \gamma} + 1}\Big)^{2} \end{array} \right. \\ = \max\left\{\sqrt{2t}, 2\left(\frac{e^{2\Lambda W \gamma} + 1}{e^{2\Lambda W \gamma} - 1}\right) t\right\}.
\\ \end{array}
$$

Thus, by Theorem C.1 or Theorem C.2, setting $\epsilon = 0$ yields the $\mathcal{H}_{\mathrm{NN}}$-consistency bound for $\mathsf{L}_{\Phi_{\mathrm{exp}}}$, valid for all $h \in \mathcal{H}_{\mathrm{NN}}$:

$$
\mathcal{R}_{\mathrm{L}_{0-1}^{\mathrm{abs}}}(h) - \mathcal{R}_{\mathrm{L}_{0-1}^{\mathrm{abs}}}^{*}(\mathcal{H}_{\mathrm{NN}}) \leq \Gamma_{\exp}\left(\mathcal{R}_{\mathrm{L}_{\Phi_{\exp}}}(h) - \mathcal{R}_{\mathrm{L}_{\Phi_{\exp}}}^{*}(\mathcal{H}_{\mathrm{NN}}) + \mathcal{M}_{\mathrm{L}_{\Phi_{\exp}}}(\mathcal{H}_{\mathrm{NN}})\right) - \mathcal{M}_{\mathrm{L}_{0-1}^{\mathrm{abs}}}(\mathcal{H}_{\mathrm{NN}}), \tag{56}
$$

where $\Gamma_{\exp}(t) = \max\left\{\sqrt{2t}, 2\left(\frac{e^{2\Lambda W\gamma} + 1}{e^{2\Lambda W\gamma} - 1}\right)t\right\}$.

# K.2.4. DERIVATION FOR $\mathsf{L}_{\Phi_{\log}}$

For the logistic loss function $\Phi_{\log}(u) \coloneqq \log_2(1 + e^{-u})$, for all $h \in \mathcal{H}_{\mathrm{NN}}$ and $(x, x')$ such that $\|x - x'\|_p > \gamma$,

$$
\begin{array}{l} \mathcal{C}_{\mathrm{L}_{\Phi_{\log}}}(h, x, x') = \eta(x, x') \mathrm{L}_{\Phi_{\log}}(h(x') - h(x)) + (1 - \eta(x, x')) \mathrm{L}_{\Phi_{\log}}(h(x) - h(x')) \\ = \eta(x, x') \log_{2}\left(1 + e^{-h(x') + h(x)}\right) + (1 - \eta(x, x')) \log_{2}\left(1 + e^{h(x') - h(x)}\right).
\\ \end{array}
$$

Then,

$$
\begin{array}{l} \mathcal{C}_{\mathrm{L}_{\Phi_{\log}}, \mathcal{H}_{\mathrm{NN}}}^{*}(x, x') \\ = \inf_{h \in \mathcal{H}_{\mathrm{NN}}} \mathcal{C}_{\mathrm{L}_{\Phi_{\log}}}(h, x, x') \\ = \left\{ \begin{array}{ll} -\eta(x, x') \log_{2}(\eta(x, x')) - (1 - \eta(x, x')) \log_{2}(1 - \eta(x, x')) & \text{if } \left|\log\frac{\eta(x, x')}{1 - \eta(x, x')}\right| \leq \Lambda W \|x - x'\|_{p} \\ \max\{\eta(x, x'), 1 - \eta(x, x')\} \log_{2}\big(1 + e^{-\Lambda W \|x - x'\|_{p}}\big) + \min\{\eta(x, x'), 1 - \eta(x, x')\} \log_{2}\big(1 + e^{\Lambda W \|x - x'\|_{p}}\big) & \text{if } \left|\log\frac{\eta(x, x')}{1 - \eta(x, x')}\right| > \Lambda W \|x - x'\|_{p} \end{array} \right.
\\ \end{array}
$$

The $(\Phi_{\log},\mathcal{H}_{\mathrm{NN}})$-minimizability gap is:

$$
\begin{array}{l} \mathcal{M}_{\mathrm{L}_{\Phi_{\log}}}(\mathcal{H}_{\mathrm{NN}}) \\ = \mathcal{R}_{\mathrm{L}_{\Phi_{\log}}}^{*}(\mathcal{H}_{\mathrm{NN}}) - \mathbb{E}_{(X, X')}\left[\mathcal{C}_{\mathrm{L}_{\Phi_{\log}}, \mathcal{H}_{\mathrm{NN}}}^{*}(x, x')\right] \\ = \mathcal{R}_{\mathrm{L}_{\Phi_{\log}}}^{*}(\mathcal{H}_{\mathrm{NN}}) - \mathbb{E}_{(X, X')}\left[\left(-\eta(x, x') \log_{2}(\eta(x, x')) - (1 - \eta(x, x')) \log_{2}(1 - \eta(x, x'))\right) \mathbb{1}_{\left|\log\frac{\eta(x, x')}{1 - \eta(x, x')}\right| \leq \Lambda W \|x - x'\|_{p}}\right] \tag{57} \\ - \mathbb{E}_{(X, X')}\left[\max\left\{\eta(x, x'), 1 - \eta(x, x')\right\} \log_{2}\left(1 + e^{-\Lambda W \|x - x'\|_{p}}\right) \mathbb{1}_{\left|\log\frac{\eta(x, x')}{1 - \eta(x, x')}\right| > \Lambda W \|x - x'\|_{p}}\right] \\ - \mathbb{E}_{(X, X')}\left[\min\left\{\eta(x, x'), 1 - \eta(x, x')\right\} \log_{2}\left(1 + e^{\Lambda W \|x - x'\|_{p}}\right) \mathbb{1}_{\left|\log\frac{\eta(x, x')}{1 - \eta(x, x')}\right| > \Lambda W \|x - x'\|_{p}}\right].
\\ \end{array}
$$

Therefore, $\forall h\in \overline{\mathcal{H}}_{\mathrm{NN}}(x,x')$,

$$
\begin{array}{l} \Delta\mathcal{C}_{\mathrm{L}_{\Phi_{\log}}, \mathcal{H}_{\mathrm{NN}}}(h, x, x') \\ \geq \inf_{h \in \overline{\mathcal{H}}_{\mathrm{NN}}(x, x')} \mathcal{C}_{\mathrm{L}_{\Phi_{\log}}}(h, x, x') - \mathcal{C}_{\mathrm{L}_{\Phi_{\log}}, \mathcal{H}_{\mathrm{NN}}}^{*}(x, x') \\ = \eta(x, x') \log_{2}\left(1 + e^{-0}\right) + (1 - \eta(x, x')) \log_{2}\left(1 + e^{0}\right) - \mathcal{C}_{\mathrm{L}_{\Phi_{\log}}, \mathcal{H}_{\mathrm{NN}}}^{*}(x, x') \\ = \left\{ \begin{array}{ll} 1 + \eta(x, x') \log_{2}(\eta(x, x')) + (1 - \eta(x, x')) \log_{2}(1 - \eta(x, x')) & \text{if } \left|\log\frac{\eta(x, x')}{1 - \eta(x, x')}\right| \leq \Lambda W \|x - x'\|_{p} \\ 1 - \max\{\eta(x, x'), 1 - \eta(x, x')\} \log_{2}\big(1 + e^{-\Lambda W \|x - x'\|_{p}}\big) - \min\{\eta(x, x'), 1 - \eta(x, x')\} \log_{2}\big(1 + e^{\Lambda W \|x - x'\|_{p}}\big) & \text{if } \left|\log\frac{\eta(x, x')}{1 - \eta(x, x')}\right| > \Lambda W \|x - x'\|_{p} \end{array} \right. \\ \geq \left\{ \begin{array}{ll} 1 + \eta(x, x') \log_{2}(\eta(x, x')) + (1 - \eta(x, x')) \log_{2}(1 - \eta(x, x')) & \text{if } \left|\log\frac{\eta(x, x')}{1 - \eta(x, x')}\right| \leq \Lambda W \gamma \\ 1 - \max\{\eta(x, x'), 1 - \eta(x, x')\} \log_{2}\big(1 + e^{-\Lambda W \gamma}\big) - \min\{\eta(x, x'), 1 - \eta(x, x')\} \log_{2}\big(1 + e^{\Lambda W \gamma}\big) & \text{if } \left|\log\frac{\eta(x, x')}{1 - \eta(x, x')}\right| > \Lambda W \gamma \end{array} \right. \\ = \Psi_{\log}\left(|2\eta(x, x') - 1|\right), \\ \end{array}
$$

where $\Psi_{\mathrm{log}}$ is the increasing and convex function on $[0,1]$ defined by

$$
\forall t \in [0, 1], \quad \Psi_{\log}(t) = \left\{ \begin{array}{ll} \frac{t + 1}{2} \log_{2}(t + 1) + \frac{1 - t}{2} \log_{2}(1 - t), & t \leq \frac{e^{\Lambda W \gamma} - 1}{e^{\Lambda W \gamma} + 1} \\ 1 - \frac{t + 1}{2} \log_{2}(1 + e^{-\Lambda W \gamma}) - \frac{1 - t}{2} \log_{2}(1 + e^{\Lambda W \gamma}), & t > \frac{e^{\Lambda W \gamma} - 1}{e^{\Lambda W \gamma} + 1} \end{array} \right.
$$

which implies that for any $h \in \mathcal{H}_{\mathrm{NN}}$ and $(x, x')$ such that $\| x - x' \|_p > \gamma$,

$$
\Delta\mathcal{C}_{\mathrm{L}_{\Phi_{\log}}, \mathcal{H}_{\mathrm{NN}}}(h, x, x') \geq \Psi_{\log}\Big(\Delta\mathcal{C}_{\mathrm{L}_{0-1}^{\mathrm{abs}}, \mathcal{H}_{\mathrm{NN}}}(h, x, x')\Big).
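$$

In the same spirit as for the exponential loss, a small brute-force check (a sketch with our own helper names; `B` stands for $\Lambda W\gamma$) confirms that $\Psi_{\log}$ matches the calibration gap obtained by directly minimizing the conditional logistic risk over admissible margins:

```python
import math

def cond_log_risk(eta, m):
    # conditional logistic risk at margin m = h(x') - h(x)
    return eta * math.log2(1 + math.exp(-m)) + (1 - eta) * math.log2(1 + math.exp(m))

def opt_cond_risk(eta, B, steps=20001):
    # brute-force infimum of the conditional risk over margins in [-B, B]
    return min(cond_log_risk(eta, -B + 2 * B * j / (steps - 1)) for j in range(steps))

def psi_log(t, B):
    # the piecewise function Psi_log; its threshold equals tanh(B / 2)
    thr = (math.exp(B) - 1) / (math.exp(B) + 1)
    if t <= thr:
        return (t + 1) / 2 * math.log2(t + 1) + (1 - t) / 2 * math.log2(1 - t)
    return 1 - (t + 1) / 2 * math.log2(1 + math.exp(-B)) - (1 - t) / 2 * math.log2(1 + math.exp(B))

B = 0.8
for eta in [0.55, 0.6, 0.7, 0.95]:
    # a hypothesis with h(x) = h(x') has conditional risk log2(2) = 1
    gap = 1.0 - opt_cond_risk(eta, B)
    assert abs(gap - psi_log(abs(2 * eta - 1), B)) < 1e-6
```

The grid check confirms the identity

$$
\left. \inf_{h \in \overline{\mathcal{H}}_{\mathrm{NN}}(x, x')} \mathcal{C}_{\mathrm{L}_{\Phi_{\log}}}(h, x, x') - \mathcal{C}_{\mathrm{L}_{\Phi_{\log}}, \mathcal{H}_{\mathrm{NN}}}^{*}(x, x') \right|_{\|x - x'\|_{p} = \gamma} = \Psi_{\log}\left(|2\eta(x, x') - 1|\right)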
+$$ + +To simplify the expression, using the fact that + +$$ +\begin{array}{l} \frac {t + 1}{2} \log_ {2} (t + 1) + \frac {1 - t}{2} \log_ {2} (1 - t) = 1 - \left(- \frac {t + 1}{2} \log_ {2} \left(\frac {t + 1}{2}\right) - \frac {1 - t}{2} \log_ {2} \left(\frac {1 - t}{2}\right)\right) \\ \geq 1 - \sqrt {4 \frac {1 - t}{2} \frac {t + 1}{2}} \\ = 1 - \sqrt {1 - t ^ {2}} \\ \geq \frac {t ^ {2}}{2}, \\ \end{array} +$$ + +$$ +1 - \frac {t + 1}{2} \log_ {2} (1 + e ^ {- \Lambda W \gamma}) - \frac {1 - t}{2} \log_ {2} (1 + e ^ {\Lambda W \gamma}) = \frac {1}{2} \log_ {2} \left(\frac {4}{2 + e ^ {- \Lambda W \gamma} + e ^ {\Lambda W \gamma}}\right) + \frac {1}{2} \log_ {2} \left(\frac {1 + e ^ {\Lambda W \gamma}}{1 + e ^ {- \Lambda W \gamma}}\right) t, +$$ + +$\Psi_{\log}$ can be lower bounded by + +$$ +\widetilde {\Psi} _ {\log} (t) = \left\{ \begin{array}{l l} \frac {t ^ {2}}{2}, & t \leq \frac {e ^ {\Lambda W \gamma} - 1}{e ^ {\Lambda W \gamma} + 1} \\ \frac {1}{2} \left(\frac {e ^ {\Lambda W \gamma} - 1}{e ^ {\Lambda W \gamma} + 1}\right) t, & t > \frac {e ^ {\Lambda W \gamma} - 1}{e ^ {\Lambda W \gamma} + 1} \end{array} \right. +$$ + +Thus, we adopt an upper bound of $\Psi_{\log}^{-1}$ as follows: + +$$ +\begin{array}{l} \Gamma_ {\log} (t) = \widetilde {\Psi} _ {\log} ^ {- 1} (t) = \left\{ \begin{array}{l l} \sqrt {2 t}, & t \leq \frac {1}{2} \Big (\frac {e ^ {\Lambda W \gamma} - 1}{e ^ {\Lambda W \gamma} + 1} \Big) ^ {2} \\ 2 \Big (\frac {e ^ {\Lambda W \gamma} + 1}{e ^ {\Lambda W \gamma} - 1} \Big) t, & t > \frac {1}{2} \Big (\frac {e ^ {\Lambda W \gamma} - 1}{e ^ {\Lambda W \gamma} + 1} \Big) ^ {2} \end{array} \right. \\ = \max \left\{\sqrt {2 t}, 2 \left(\frac {e ^ {\Lambda W \gamma} + 1}{e ^ {\Lambda W \gamma} - 1}\right) t \right\}. 
\\ \end{array} +$$ + +Thus, by Theorem C.1 or Theorem C.2, setting $\epsilon = 0$ yields the $\mathcal{H}_{\mathrm{NN}}$ -consistency bound for $\mathsf{L}_{\Phi_{\log}}$ , valid for all $h \in \mathcal{H}_{\mathrm{NN}}$ : + +$$ +\mathcal {R} _ {\mathrm {L} _ {0 - 1} ^ {\mathrm {a b s}}} (h) - \mathcal {R} _ {\mathrm {L} _ {0 - 1} ^ {\mathrm {a b s}}} ^ {*} (\mathcal {H} _ {\mathrm {N N}}) \leq \Gamma_ {\log} \left(\mathcal {R} _ {\mathrm {L} _ {\Phi_ {\log}}} (h) - \mathcal {R} _ {\mathrm {L} _ {\Phi_ {\log}}} ^ {*} (\mathcal {H} _ {\mathrm {N N}}) + \mathcal {M} _ {\mathrm {L} _ {\Phi_ {\log}}} (\mathcal {H} _ {\mathrm {N N}})\right) - \mathcal {M} _ {\mathrm {L} _ {0 - 1} ^ {\mathrm {a b s}}} (\mathcal {H} _ {\mathrm {N N}}). \tag {58} +$$ + +where $\Gamma_{\log}(t) = \max \left\{\sqrt{2t}, 2\left(\frac{e^{\Lambda W\gamma} + 1}{e^{\Lambda W\gamma} - 1}\right)t\right\}$ . + +# K.2.5. DERIVATION FOR $\mathsf{L}_{\Phi_{\mathrm{sq}}}$ + +For the squared hinge loss function $\Phi_{\mathrm{sq}}(u)\coloneqq (1 - u)^2\mathbb{1}_{u\leq 1}$ , for all $h\in \mathcal{H}_{\mathrm{NN}}$ and $(x,x^{\prime})$ such that $\| x - x^{\prime}\| _p > \gamma$ + +$$ +\begin{array}{l} \mathcal {C} _ {\mathrm {L} _ {\Phi_ {\mathrm {s q}}}} (h, x, x ^ {\prime}) \\ = \eta (x, x ^ {\prime}) \mathrm {L} _ {\Phi_ {\mathrm {s q}}} \left(h \left(x ^ {\prime}\right) - h (x)\right) + \left(1 - \eta (x, x ^ {\prime})\right) \mathrm {L} _ {\Phi_ {\mathrm {s q}}} \left(h (x) - h \left(x ^ {\prime}\right)\right) \\ = \eta (x, x ^ {\prime}) (1 - h (x ^ {\prime}) + h (x)) ^ {2} \mathbb {1} _ {h (x ^ {\prime}) - h (x) \leq 1} + (1 - \eta (x, x ^ {\prime})) (1 + h (x ^ {\prime}) - h (x)) ^ {2} \mathbb {1} _ {h (x ^ {\prime}) - h (x) \geq - 1}. 
\\ \end{array}
$$

Then,

$$
\begin{array}{l} \mathcal{C}_{\mathrm{L}_{\Phi_{\mathrm{sq}}}, \mathcal{H}_{\mathrm{NN}}}^{*}(x, x') \\ = \inf_{h \in \mathcal{H}_{\mathrm{NN}}} \mathcal{C}_{\mathrm{L}_{\Phi_{\mathrm{sq}}}}(h, x, x') \\ = \left\{ \begin{array}{ll} 4\eta(x, x')(1 - \eta(x, x')) & \text{if } |2\eta(x, x') - 1| \leq \Lambda W \|x - x'\|_{p} \\ \max\{\eta(x, x'), 1 - \eta(x, x')\}\big(1 - \Lambda W \|x - x'\|_{p}\big)^{2} + \min\{\eta(x, x'), 1 - \eta(x, x')\}\big(1 + \Lambda W \|x - x'\|_{p}\big)^{2} & \text{if } |2\eta(x, x') - 1| > \Lambda W \|x - x'\|_{p}. \end{array} \right. \\ \end{array}
$$

The $(\Phi_{\mathrm{sq}},\mathcal{H}_{\mathrm{NN}})$-minimizability gap is:

$$
\begin{array}{l} \mathcal{M}_{\mathrm{L}_{\Phi_{\mathrm{sq}}}}(\mathcal{H}_{\mathrm{NN}}) = \mathcal{R}_{\mathrm{L}_{\Phi_{\mathrm{sq}}}}^{*}(\mathcal{H}_{\mathrm{NN}}) - \mathbb{E}_{(X, X')}\left[\mathcal{C}_{\mathrm{L}_{\Phi_{\mathrm{sq}}}, \mathcal{H}_{\mathrm{NN}}}^{*}(x, x')\right] \\ = \mathcal{R}_{\mathrm{L}_{\Phi_{\mathrm{sq}}}}^{*}(\mathcal{H}_{\mathrm{NN}}) - \mathbb{E}_{(X, X')}\left[4\eta(x, x')(1 - \eta(x, x'))\, \mathbb{1}_{|2\eta(x, x') - 1| \leq \Lambda W \|x - x'\|_{p}}\right] \tag{59} \\ - \mathbb{E}_{(X, X')}\left[\max\left\{\eta(x, x'), 1 - \eta(x, x')\right\}\left(1 - \Lambda W \|x - x'\|_{p}\right)^{2} \mathbb{1}_{|2\eta(x, x') - 1| > \Lambda W \|x - x'\|_{p}}\right] \\ - \mathbb{E}_{(X, X')}\left[\min\left\{\eta(x, x'), 1 - \eta(x, x')\right\}\left(1 + \Lambda W \|x - x'\|_{p}\right)^{2} \mathbb{1}_{|2\eta(x, x') - 1| > \Lambda W \|x - x'\|_{p}}\right]. \\ \end{array}
$$

Therefore, $\forall h\in \overline{\mathcal{H}}_{\mathrm{NN}}(x,x')$,

$$
\begin{array}{l} \Delta\mathcal{C}_{\mathrm{L}_{\Phi_{\mathrm{sq}}}, \mathcal{H}_{\mathrm{NN}}}(h, x, x') \\ \geq \inf_{h \in \overline{\mathcal{H}}_{\mathrm{NN}}(x, x')} \mathcal{C}_{\mathrm{L}_{\Phi_{\mathrm{sq}}}}(h, x, x') - \mathcal{C}_{\mathrm{L}_{\Phi_{\mathrm{sq}}}, \mathcal{H}_{\mathrm{NN}}}^{*}(x, x') \\ = \eta(x, x') + (1 - \eta(x, x')) - \mathcal{C}_{\mathrm{L}_{\Phi_{\mathrm{sq}}}, \mathcal{H}_{\mathrm{NN}}}^{*}(x, x') \\ = \left\{ \begin{array}{ll} 1 - 4\eta(x, x')(1 - \eta(x, x')) & \text{if } |2\eta(x, x') - 1| \leq \Lambda W \|x - x'\|_{p} \\ 1 - \max\{\eta(x, x'), 1 - \eta(x, x')\}\big(1 - \Lambda W \|x - x'\|_{p}\big)^{2} - \min\{\eta(x, x'), 1 - \eta(x, x')\}\big(1 + \Lambda W \|x - x'\|_{p}\big)^{2} & \text{if } |2\eta(x, x') - 1| > \Lambda W \|x - x'\|_{p} \end{array} \right. \\ \geq \left\{ \begin{array}{ll} 1 - 4\eta(x, x')(1 - \eta(x, x')) & \text{if } |2\eta(x, x') - 1| \leq \Lambda W \gamma \\ 1 - \max\{\eta(x, x'), 1 - \eta(x, x')\}(1 - \Lambda W \gamma)^{2} - \min\{\eta(x, x'), 1 - \eta(x, x')\}(1 + \Lambda W \gamma)^{2} & \text{if } |2\eta(x, x') - 1| > \Lambda W \gamma \end{array} \right. \\ = \Psi_{\mathrm{sq}}\left(|2\eta(x, x') - 1|\right), \\ \end{array}
$$

where $\Psi_{\mathrm{sq}}$ is the increasing and convex function on $[0,1]$ defined by

$$
\forall t \in [0, 1], \quad \Psi_{\mathrm{sq}}(t) = \left\{ \begin{array}{ll} t^{2}, & t \leq \Lambda W \gamma \\ 2\Lambda W \gamma t - (\Lambda W \gamma)^{2}, & t > \Lambda W \gamma \end{array} \right.
$$

which implies that for any $h \in \mathcal{H}_{\mathrm{NN}}$ and $(x, x')$ such that $\| x - x' \|_p > \gamma$,

$$
\Delta\mathcal{C}_{\mathrm{L}_{\Phi_{\mathrm{sq}}}, \mathcal{H}_{\mathrm{NN}}}(h, x, x') \geq \Psi_{\mathrm{sq}}\Big(\Delta\mathcal{C}_{\mathrm{L}_{0-1}^{\mathrm{abs}}, \mathcal{H}_{\mathrm{NN}}}(h, x, x')\Big).
$$

Thus, by Theorem C.1 or Theorem C.2, setting $\epsilon = 0$ yields the $\mathcal{H}_{\mathrm{NN}}$-consistency bound for $\mathsf{L}_{\Phi_{\mathrm{sq}}}$, valid for all $h \in \mathcal{H}_{\mathrm{NN}}$:

$$
\mathcal{R}_{\mathrm{L}_{0-1}^{\mathrm{abs}}}(h) - \mathcal{R}_{\mathrm{L}_{0-1}^{\mathrm{abs}}}^{*}(\mathcal{H}_{\mathrm{NN}}) \leq \Gamma_{\mathrm{sq}}\left(\mathcal{R}_{\mathrm{L}_{\Phi_{\mathrm{sq}}}}(h) - \mathcal{R}_{\mathrm{L}_{\Phi_{\mathrm{sq}}}}^{*}(\mathcal{H}_{\mathrm{NN}}) + \mathcal{M}_{\mathrm{L}_{\Phi_{\mathrm{sq}}}}(\mathcal{H}_{\mathrm{NN}})\right) - \mathcal{M}_{\mathrm{L}_{0-1}^{\mathrm{abs}}}(\mathcal{H}_{\mathrm{NN}}), \tag{60}
$$

where $\Gamma_{\mathrm{sq}}(t) = \Psi_{\mathrm{sq}}^{-1}(t) = \left\{ \begin{array}{ll}\sqrt{t}, & t\leq (\Lambda W\gamma)^2\\ \frac{t}{2\Lambda W\gamma} +\frac{\Lambda W\gamma}{2}, & t > (\Lambda W\gamma)^2 \end{array} \right. = \max \Bigl \{\sqrt{t},\frac{t}{2\Lambda W\gamma} +\frac{\Lambda W\gamma}{2}\Bigr \}.$

# K.2.6.
DERIVATION FOR $\mathsf{L}_{\Phi_{\mathrm{sig}}}$ + +For the sigmoid loss function $\Phi_{\mathrm{sig}}(u)\coloneqq 1 - \tanh (ku),k > 0$ , for all $h\in \mathcal{H}_{\mathrm{NN}}$ and $(x,x^{\prime})$ such that $\| x - x^{\prime}\| _p > \gamma$ + +$$ +\begin{array}{l} \mathcal {C} _ {\mathrm {L} _ {\Phi_ {\mathrm {s i g}}}} \left(h, x, x ^ {\prime}\right) \\ = \eta (x, x ^ {\prime}) \mathrm {L} _ {\Phi_ {\mathrm {s i g}}} \left(h \left(x ^ {\prime}\right) - h (x)\right) + \left(1 - \eta (x, x ^ {\prime})\right) \mathrm {L} _ {\Phi_ {\mathrm {s i g}}} \left(h (x) - h \left(x ^ {\prime}\right)\right) \\ = \eta \left(x, x ^ {\prime}\right) \left(1 - \tanh \left(k \left[ h \left(x ^ {\prime}\right) - h (x) \right]\right)\right) + \left(1 - \eta \left(x, x ^ {\prime}\right)\right) \left(1 + \tanh \left(k \left[ h \left(x ^ {\prime}\right) - h (x) \right]\right)\right). \\ \end{array} +$$ + +Then, + +$$ +\mathcal {C} _ {\mathsf {L} _ {\Phi_ {\mathrm {s i g}}}, \mathcal {H} _ {\mathrm {N N}}} ^ {*} (x, x ^ {\prime}) = \inf _ {h \in \mathcal {H} _ {\mathrm {N N}}} \mathcal {C} _ {\Phi_ {\mathrm {s i g}}} (h, x, x ^ {\prime}) = 1 - | 1 - 2 \eta (x, x ^ {\prime}) | \tanh \big (k \Lambda W \| x - x ^ {\prime} \| _ {p} \big). +$$ + +The $(\Phi_{\mathrm{sig}},\mathcal{H}_{\mathrm{NN}})$ -minimizability gap is: + +$$ +\begin{array}{l} \mathcal {M} _ {\mathrm {L} _ {\Phi_ {\mathrm {s i g}}}} \left(\mathcal {H} _ {\mathrm {N N}}\right) = \mathcal {R} _ {\mathrm {L} _ {\Phi_ {\mathrm {s i g}}}} ^ {*} \left(\mathcal {H} _ {\mathrm {N N}}\right) - \mathbb {E} _ {(X, X ^ {\prime})} \left[ \mathcal {C} _ {\mathrm {L} _ {\Phi_ {\mathrm {s i g}}}, \mathcal {H} _ {\mathrm {N N}}} ^ {*} \left(x, x ^ {\prime}\right) \right] \tag {61} \\ = \mathcal {R} _ {\mathsf {L} _ {\Phi_ {\mathrm {s i g}}}} ^ {*} \left(\mathcal {H} _ {\mathrm {N N}}\right) - \mathbb {E} _ {(X, X ^ {\prime})} \Big [ 1 - | 1 - 2 \eta (x, x ^ {\prime}) | \tanh \big (k \Lambda W \| x - x ^ {\prime} \| _ {p} \big) \Big ]. 
\\ \end{array}
$$

Therefore, $\forall h\in \overline{\mathcal{H}}_{\mathrm{NN}}(x,x')$,

$$
\begin{array}{l} \Delta\mathcal{C}_{\mathrm{L}_{\Phi_{\mathrm{sig}}}, \mathcal{H}_{\mathrm{NN}}}(h, x, x') \geq \inf_{h \in \overline{\mathcal{H}}_{\mathrm{NN}}(x, x')} \mathcal{C}_{\mathrm{L}_{\Phi_{\mathrm{sig}}}}(h, x, x') - \mathcal{C}_{\mathrm{L}_{\Phi_{\mathrm{sig}}}, \mathcal{H}_{\mathrm{NN}}}^{*}(x, x') \\ = 1 - \left|1 - 2\eta(x, x')\right| \tanh(0) - \mathcal{C}_{\mathrm{L}_{\Phi_{\mathrm{sig}}}, \mathcal{H}_{\mathrm{NN}}}^{*}(x, x') \\ = \left|1 - 2\eta(x, x')\right| \tanh\left(k\Lambda W \|x - x'\|_{p}\right) \\ \geq \left|1 - 2\eta(x, x')\right| \tanh(k\Lambda W \gamma), \\ \end{array}
$$

which implies that for any $h \in \mathcal{H}_{\mathrm{NN}}$ and $(x, x')$ such that $\| x - x' \|_p > \gamma$,

$$
\Delta\mathcal{C}_{\mathrm{L}_{\Phi_{\mathrm{sig}}}, \mathcal{H}_{\mathrm{NN}}}(h, x, x') \geq \tanh(k\Lambda W \gamma)\, \Delta\mathcal{C}_{\mathrm{L}_{0-1}^{\mathrm{abs}}, \mathcal{H}_{\mathrm{NN}}}(h, x, x').
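$$

The closed form above is easy to verify numerically; the sketch below (our own helper names; `B` stands for $\Lambda W\|x - x'\|_p$) brute-forces the infimum of the conditional sigmoid risk over admissible margins and checks it against $1 - |1 - 2\eta|\tanh(kB)$:

```python
import math

def cond_sig_risk(eta, m, k):
    # conditional sigmoid risk at margin m = h(x') - h(x)
    return eta * (1 - math.tanh(k * m)) + (1 - eta) * (1 + math.tanh(k * m))

def opt_cond_risk(eta, B, k, steps=20001):
    # brute-force infimum of the conditional risk over margins in [-B, B]
    return min(cond_sig_risk(eta, -B + 2 * B * j / (steps - 1), k) for j in range(steps))

k, B = 2.0, 0.5
for eta in [0.1, 0.45, 0.5, 0.8]:
    # a hypothesis with h(x) = h(x') has conditional risk exactly 1
    gap = 1.0 - opt_cond_risk(eta, B, k)
    assert abs(gap - abs(1 - 2 * eta) * math.tanh(k * B)) < 1e-6
```

The grid check confirms the identity

$$
\inf_{h \in \overline{\mathcal{H}}_{\mathrm{NN}}(x, x')} \mathcal{C}_{\mathrm{L}_{\Phi_{\mathrm{sig}}}}(h, x, x') - \mathcal{C}_{\mathrm{L}_{\Phi_{\mathrm{sig}}}, \mathcal{H}_{\mathrm{NN}}}^{*}(x, x') = |1 - 2\eta(x, x')| \tanh\left(k\Lambda W \|x - x'\|_{p}\right)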
$$

Thus, by Theorem C.1 or Theorem C.2, setting $\epsilon = 0$ yields the $\mathcal{H}_{\mathrm{NN}}$-consistency bound for $\mathsf{L}_{\Phi_{\mathrm{sig}}}$, valid for all $h \in \mathcal{H}_{\mathrm{NN}}$:

$$
\mathcal{R}_{\mathrm{L}_{0-1}^{\mathrm{abs}}}(h) - \mathcal{R}_{\mathrm{L}_{0-1}^{\mathrm{abs}}}^{*}(\mathcal{H}_{\mathrm{NN}}) \leq \frac{\mathcal{R}_{\mathrm{L}_{\Phi_{\mathrm{sig}}}}(h) - \mathcal{R}_{\mathrm{L}_{\Phi_{\mathrm{sig}}}}^{*}(\mathcal{H}_{\mathrm{NN}}) + \mathcal{M}_{\mathrm{L}_{\Phi_{\mathrm{sig}}}}(\mathcal{H}_{\mathrm{NN}})}{\tanh(k\Lambda W \gamma)} - \mathcal{M}_{\mathrm{L}_{0-1}^{\mathrm{abs}}}(\mathcal{H}_{\mathrm{NN}}). \tag{62}
$$

# L. $\mathcal{H}$-consistency bounds for bipartite abstention losses

We first characterize the minimal conditional $\widetilde{\mathsf{L}}_{0 - 1}^{\mathrm{abs}}$-risk and the calibration gap of $\widetilde{\mathsf{L}}_{0 - 1}^{\mathrm{abs}}$ for a broad class of hypothesis sets. We let $\widetilde{\mathcal{H}}(x, x') = \{h \in \mathcal{H} : (h(x) - h(x'))(\eta(x) - \eta(x')) < 0\}$ and $\mathring{\mathcal{H}}(x, x') = \{h \in \mathcal{H} : h(x) = h(x')\}$ for convenience.

Lemma L.1. Assume that $\mathcal{H}$ is regular for bipartite ranking. Then, the minimal conditional $\widetilde{\mathsf{L}}_{0 - 1}^{\mathrm{abs}}$-risk is

$$
\mathcal{C}^{*}_{\widetilde{\mathrm{L}}_{0-1}^{\mathrm{abs}}}(\mathcal{H}, x, x') = \min\{\eta(x)(1 - \eta(x')), \eta(x')(1 - \eta(x))\}\, \mathbb{1}_{\|x - x'\| > \gamma} + c\, \mathbb{1}_{\|x - x'\| \leq \gamma}.
$$

The calibration gap of $\widetilde{\mathsf{L}}_{0 - 1}^{\mathrm{abs}}$ can be characterized as

$$
\Delta\mathcal{C}_{\widetilde{\mathrm{L}}_{0-1}^{\mathrm{abs}}, \mathcal{H}}(h, x, x') = |\eta(x) - \eta(x')|\, \mathbb{1}_{h \in \widetilde{\mathcal{H}}(x, x')} \mathbb{1}_{\|x - x'\| > \gamma} + \frac{1}{2}|\eta(x) - \eta(x')|\, \mathbb{1}_{h \in \mathring{\mathcal{H}}(x, x')} \mathbb{1}_{\|x - x'\| > \gamma}.
$$

Proof. By the definition, the conditional $\widetilde{\mathsf{L}}_{0 - 1}^{\mathrm{abs}}$-risk is

$$
\begin{array}{l} \mathcal{C}_{\widetilde{\mathrm{L}}_{0-1}^{\mathrm{abs}}}(h, x, x') \\ = \Bigg(\eta(x)(1 - \eta(x'))\bigg[\mathbb{1}_{h(x) - h(x') < 0} + \frac{1}{2}\mathbb{1}_{h(x) = h(x')}\bigg] + \eta(x')(1 - \eta(x))\bigg[\mathbb{1}_{h(x) - h(x') > 0} + \frac{1}{2}\mathbb{1}_{h(x) = h(x')}\bigg]\Bigg)\mathbb{1}_{\|x - x'\| > \gamma} + c\, \mathbb{1}_{\|x - x'\| \leq \gamma}. \\ \end{array}
$$

For any $(x, x')$ such that $\|x - x'\| \leq \gamma$ and $h \in \mathcal{H}$, $\mathcal{C}_{\widetilde{\mathsf{L}}_{0-1}^{\mathrm{abs}}}(h, x, x') = \mathcal{C}_{\widetilde{\mathsf{L}}_{0-1}^{\mathrm{abs}}}^{*}(\mathcal{H}, x, x') = c$. For any $(x, x')$ such that $\|x - x'\| > \gamma$, by the assumption, there exists $h^{*} \in \mathcal{H}$ such that

$$
\left(h^{*}(x) - h^{*}(x')\right)\left(\eta(x) - \eta(x')\right)\mathbb{1}_{\eta(x) \neq \eta(x')} > 0.
$$

Therefore, the optimal conditional $\widetilde{\mathsf{L}}_{0 - 1}^{\mathrm{abs}}$-risk can be characterized, for any $x, x' \in \mathcal{X}$, as

$$
\mathcal{C}^{*}_{\widetilde{\mathrm{L}}_{0-1}^{\mathrm{abs}}}(\mathcal{H}, x, x') = \mathcal{C}_{\widetilde{\mathrm{L}}_{0-1}^{\mathrm{abs}}}(h^{*}, x, x') = \min\{\eta(x)(1 - \eta(x')), \eta(x')(1 - \eta(x))\}\, \mathbb{1}_{\|x - x'\| > \gamma} + c\, \mathbb{1}_{\|x - x'\| \leq \gamma},
$$

which proves the first part of the lemma. By the definition, for any $(x, x')$ such that $\|x - x'\| \leq \gamma$ and $h \in \mathcal{H}$, $\Delta\mathcal{C}_{\widetilde{\mathsf{L}}_{0-1}^{\mathrm{abs}}, \mathcal{H}}(h, x, x') = \mathcal{C}_{\widetilde{\mathsf{L}}_{0-1}^{\mathrm{abs}}}(h, x, x') - \mathcal{C}_{\widetilde{\mathsf{L}}_{0-1}^{\mathrm{abs}}}^{*}(\mathcal{H}, x, x') = 0$. For any $(x, x')$ such that $\|x - x'\| > \gamma$ and $h \in \mathcal{H}$,

$$
\begin{array}{l} \Delta\mathcal{C}_{\widetilde{\mathrm{L}}_{0-1}^{\mathrm{abs}}, \mathcal{H}}(h, x, x') = \mathcal{C}_{\widetilde{\mathrm{L}}_{0-1}^{\mathrm{abs}}}(h, x, x') - \mathcal{C}_{\widetilde{\mathrm{L}}_{0-1}^{\mathrm{abs}}}^{*}(\mathcal{H}, x, x') \\ = \eta(x)(1 - \eta(x'))\left[\mathbb{1}_{h(x) - h(x') < 0} + \frac{1}{2}\mathbb{1}_{h(x) = h(x')}\right] \\ + \eta(x')(1 - \eta(x))\left[\mathbb{1}_{h(x) - h(x') > 0} + \frac{1}{2}\mathbb{1}_{h(x) = h(x')}\right] \\ - \min\left\{\eta(x)(1 - \eta(x')), \eta(x')(1 - \eta(x))\right\} \\ = \left\{ \begin{array}{ll} |\eta(x)(1 - \eta(x')) - \eta(x')(1 - \eta(x))|, & h \in \widetilde{\mathcal{H}}(x, x'), \\ \frac{1}{2}|\eta(x)(1 - \eta(x')) - \eta(x')(1 - \eta(x))|, & h \in \mathring{\mathcal{H}}(x, x'), \\ 0, & \text{otherwise}. \end{array} \right. \\ = \left\{ \begin{array}{ll} |\eta(x) - \eta(x')|, & h \in \widetilde{\mathcal{H}}(x, x'), \\ \frac{1}{2}|\eta(x) - \eta(x')|, & h \in \mathring{\mathcal{H}}(x, x'), \\ 0, & \text{otherwise}. \end{array} \right. \\ \end{array}
$$

This leads to

$$
\left\langle \Delta\mathcal{C}_{\widetilde{\mathrm{L}}_{0-1}^{\mathrm{abs}}, \mathcal{H}}(h, x, x') \right\rangle_{\epsilon} = \langle |\eta(x) - \eta(x')| \rangle_{\epsilon}\, \mathbb{1}_{h \in \widetilde{\mathcal{H}}(x, x')} \mathbb{1}_{\|x - x'\| > \gamma} + \left\langle \frac{1}{2}|\eta(x) - \eta(x')| \right\rangle_{\epsilon} \mathbb{1}_{h \in \mathring{\mathcal{H}}(x, x')} \mathbb{1}_{\|x - x'\| > \gamma}.
$$

# L.1. Linear Hypotheses

Since $\mathcal{H}_{\mathrm{lin}}$ satisfies the condition of Lemma L.1, by Lemma L.1 the $(\widetilde{\mathsf{L}}_{0 - 1}^{\mathrm{abs}},\mathcal{H}_{\mathrm{lin}})$-minimizability gap can be expressed as follows:

$$
\mathcal{M}_{\widetilde{\mathrm{L}}_{0-1}^{\mathrm{abs}}}(\mathcal{H}_{\mathrm{lin}}) = \mathcal{R}_{\widetilde{\mathrm{L}}_{0-1}^{\mathrm{abs}}}^{*}(\mathcal{H}_{\mathrm{lin}}) - \mathbb{E}_{(X, X')}\left[\min\left\{\eta(x)(1 - \eta(x')), \eta(x')(1 - \eta(x))\right\}\mathbb{1}_{\|x - x'\| > \gamma} + c\, \mathbb{1}_{\|x - x'\| \leq \gamma}\right].
\tag {63}
$$

By the definition of $\mathcal{H}_{\mathrm{lin}}$, for any $(x,x')\in\mathcal{X}\times\mathcal{X}$, $\left\{h(x')-h(x) \mid h\in\mathcal{H}_{\mathrm{lin}}\right\} = \left[-W\|x-x'\|_{p}, W\|x-x'\|_{p}\right]$.

# L.1.1. DERIVATION FOR $\widetilde{\mathsf{L}}_{\Phi_{\mathrm{hinge}}}$

For the hinge loss function $\Phi_{\mathrm{hinge}}(u)\coloneqq \max\{0,1-u\}$, for all $h\in\mathcal{H}_{\mathrm{lin}}$ and $(x,x')$ such that $\|x-x'\|_{p}>\gamma$,

$$
\begin{aligned}
\mathcal{C}_{\widetilde{\mathsf{L}}_{\Phi_{\mathrm{hinge}}}}(h,x,x')
&= \eta(x)\big(1-\eta(x')\big)\Phi_{\mathrm{hinge}}\big(h(x)-h(x')\big) + \eta(x')\big(1-\eta(x)\big)\Phi_{\mathrm{hinge}}\big(h(x')-h(x)\big) \\
&= \eta(x)\big(1-\eta(x')\big)\max\big\{0, 1-h(x)+h(x')\big\} + \eta(x')\big(1-\eta(x)\big)\max\big\{0, 1+h(x)-h(x')\big\}.
\end{aligned}
$$

Then,

$$
\mathcal{C}_{\widetilde{\mathsf{L}}_{\Phi_{\mathrm{hinge}}},\mathcal{H}_{\mathrm{lin}}}^{*}(x,x') = \inf_{h\in\mathcal{H}_{\mathrm{lin}}}\mathcal{C}_{\widetilde{\mathsf{L}}_{\Phi_{\mathrm{hinge}}}}(h,x,x') = \eta(x)\big(1-\eta(x')\big) + \eta(x')\big(1-\eta(x)\big) - |\eta(x)-\eta(x')|\min\big\{W\|x-x'\|_{p}, 1\big\}.
$$

The $\big(\widetilde{\mathsf{L}}_{\Phi_{\mathrm{hinge}}},\mathcal{H}_{\mathrm{lin}}\big)$-minimizability gap is

$$
\begin{aligned}
\mathcal{M}_{\widetilde{\mathsf{L}}_{\Phi_{\mathrm{hinge}}}}\big(\mathcal{H}_{\mathrm{lin}}\big)
&= \mathcal{R}_{\widetilde{\mathsf{L}}_{\Phi_{\mathrm{hinge}}}}^{*}\big(\mathcal{H}_{\mathrm{lin}}\big) - \mathbb{E}_{(X,X')}\Big[\mathcal{C}_{\widetilde{\mathsf{L}}_{\Phi_{\mathrm{hinge}}},\mathcal{H}_{\mathrm{lin}}}^{*}(x,x')\Big] \\
&= \mathcal{R}_{\widetilde{\mathsf{L}}_{\Phi_{\mathrm{hinge}}}}^{*}\big(\mathcal{H}_{\mathrm{lin}}\big) - \mathbb{E}_{(X,X')}\Big[\eta(x)\big(1-\eta(x')\big) + \eta(x')\big(1-\eta(x)\big) - |\eta(x)-\eta(x')|\min\big\{W\|x-x'\|_{p}, 1\big\}\Big].
\end{aligned}
\tag{64}
$$

Therefore, $\forall h\in\widetilde{\mathcal{H}}_{\mathrm{lin}}(x,x')\cup\mathring{\mathcal{H}}_{\mathrm{lin}}(x,x')$,

$$
\begin{aligned}
\Delta\mathcal{C}_{\widetilde{\mathsf{L}}_{\Phi_{\mathrm{hinge}}},\mathcal{H}_{\mathrm{lin}}}(h,x,x')
&\geq \inf_{h\in\widetilde{\mathcal{H}}_{\mathrm{lin}}(x,x')\cup\mathring{\mathcal{H}}_{\mathrm{lin}}(x,x')}\mathcal{C}_{\widetilde{\mathsf{L}}_{\Phi_{\mathrm{hinge}}}}(h,x,x') - \mathcal{C}_{\widetilde{\mathsf{L}}_{\Phi_{\mathrm{hinge}}},\mathcal{H}_{\mathrm{lin}}}^{*}(x,x') \\
&= \eta(x)\big(1-\eta(x')\big)\max\{0,1-0\} + \eta(x')\big(1-\eta(x)\big)\max\{0,1+0\} - \mathcal{C}_{\widetilde{\mathsf{L}}_{\Phi_{\mathrm{hinge}}},\mathcal{H}_{\mathrm{lin}}}^{*}(x,x') \\
&= |\eta(x)-\eta(x')|\min\big\{W\|x-x'\|_{p}, 1\big\} \\
&\geq |\eta(x)-\eta(x')|\min\{W\gamma, 1\},
\end{aligned}
$$

which implies that for any $h\in\mathcal{H}_{\mathrm{lin}}$ and $(x,x')$ such that $\|x-x'\|_{p}>\gamma$,

$$
\Delta\mathcal{C}_{\widetilde{\mathsf{L}}_{\Phi_{\mathrm{hinge}}},\mathcal{H}_{\mathrm{lin}}}(h,x,x') \geq \min\{W\gamma, 1\}\,\Delta\mathcal{C}_{\widetilde{\mathsf{L}}_{0-1}^{\mathrm{abs}},\mathcal{H}}(h,x,x').
$$

Thus, by Theorem C.1 or Theorem C.2, setting $\epsilon = 0$ yields the $\mathcal{H}_{\mathrm{lin}}$-consistency bound for $\widetilde{\mathsf{L}}_{\Phi_{\mathrm{hinge}}}$, valid for all $h\in\mathcal{H}_{\mathrm{lin}}$:

$$
\mathcal{R}_{\widetilde{\mathsf{L}}_{0-1}^{\mathrm{abs}}}(h) - \mathcal{R}_{\widetilde{\mathsf{L}}_{0-1}^{\mathrm{abs}}}^{*}(\mathcal{H}_{\mathrm{lin}}) \leq \frac{\mathcal{R}_{\widetilde{\mathsf{L}}_{\Phi_{\mathrm{hinge}}}}(h) - \mathcal{R}_{\widetilde{\mathsf{L}}_{\Phi_{\mathrm{hinge}}}}^{*}(\mathcal{H}_{\mathrm{lin}}) + \mathcal{M}_{\widetilde{\mathsf{L}}_{\Phi_{\mathrm{hinge}}}}(\mathcal{H}_{\mathrm{lin}})}{\min\{W\gamma, 1\}} - \mathcal{M}_{\widetilde{\mathsf{L}}_{0-1}^{\mathrm{abs}}}(\mathcal{H}_{\mathrm{lin}}). \tag{65}
$$

# L.1.2. 
DERIVATION FOR $\widetilde{\mathsf{L}}_{\Phi_{\rho}}$

For the $\rho$-margin loss function $\Phi_{\rho}(u) \coloneqq \min\left\{1, \max\left\{0, 1-\frac{u}{\rho}\right\}\right\}$, $\rho > 0$, for all $h\in\mathcal{H}_{\mathrm{lin}}$ and $(x,x')$ such that $\|x-x'\|_{p}>\gamma$,

$$
\begin{aligned}
\mathcal{C}_{\widetilde{\mathsf{L}}_{\Phi_{\rho}}}(h,x,x')
&= \eta(x)\big(1-\eta(x')\big)\Phi_{\rho}\big(h(x)-h(x')\big) + \eta(x')\big(1-\eta(x)\big)\Phi_{\rho}\big(h(x')-h(x)\big) \\
&= \eta(x)\big(1-\eta(x')\big)\min\left\{1, \max\left\{0, 1-\frac{h(x)-h(x')}{\rho}\right\}\right\} + \eta(x')\big(1-\eta(x)\big)\min\left\{1, \max\left\{0, 1+\frac{h(x)-h(x')}{\rho}\right\}\right\}.
\end{aligned}
$$

Then,

$$
\begin{aligned}
\mathcal{C}_{\widetilde{\mathsf{L}}_{\Phi_{\rho}},\mathcal{H}_{\mathrm{lin}}}^{*}(x,x')
&= \inf_{h\in\mathcal{H}_{\mathrm{lin}}}\mathcal{C}_{\widetilde{\mathsf{L}}_{\Phi_{\rho}}}(h,x,x') \\
&= \big[\min\{\eta(x),\eta(x')\} - \eta(x)\eta(x')\big] + \big[\max\{\eta(x),\eta(x')\} - \eta(x)\eta(x')\big]\left(1 - \frac{\min\{W\|x-x'\|_{p},\rho\}}{\rho}\right).
\end{aligned}
$$

The $\big(\widetilde{\mathsf{L}}_{\Phi_{\rho}},\mathcal{H}_{\mathrm{lin}}\big)$-minimizability gap is

$$
\begin{aligned}
\mathcal{M}_{\widetilde{\mathsf{L}}_{\Phi_{\rho}}}\big(\mathcal{H}_{\mathrm{lin}}\big)
&= \mathcal{R}_{\widetilde{\mathsf{L}}_{\Phi_{\rho}}}^{*}\big(\mathcal{H}_{\mathrm{lin}}\big) - \mathbb{E}_{(X,X')}\Big[\mathcal{C}_{\widetilde{\mathsf{L}}_{\Phi_{\rho}},\mathcal{H}_{\mathrm{lin}}}^{*}(x,x')\Big] \\
&= \mathcal{R}_{\widetilde{\mathsf{L}}_{\Phi_{\rho}}}^{*}\big(\mathcal{H}_{\mathrm{lin}}\big) - \mathbb{E}_{(X,X')}\bigg[\big[\min\{\eta(x),\eta(x')\} - \eta(x)\eta(x')\big] + \big[\max\{\eta(x),\eta(x')\} - \eta(x)\eta(x')\big]\left(1 - \frac{\min\{W\|x-x'\|_{p},\rho\}}{\rho}\right)\bigg].
\end{aligned}
\tag{66}
$$

Therefore, $\forall h\in\widetilde{\mathcal{H}}_{\mathrm{lin}}(x,x')\cup\mathring{\mathcal{H}}_{\mathrm{lin}}(x,x')$,

$$
\begin{aligned}
\Delta\mathcal{C}_{\widetilde{\mathsf{L}}_{\Phi_{\rho}},\mathcal{H}_{\mathrm{lin}}}(h,x,x')
&\geq \inf_{h\in\widetilde{\mathcal{H}}_{\mathrm{lin}}(x,x')\cup\mathring{\mathcal{H}}_{\mathrm{lin}}(x,x')}\mathcal{C}_{\widetilde{\mathsf{L}}_{\Phi_{\rho}}}(h,x,x') - \mathcal{C}_{\widetilde{\mathsf{L}}_{\Phi_{\rho}},\mathcal{H}_{\mathrm{lin}}}^{*}(x,x') \\
&= \big[\max\{\eta(x),\eta(x')\} - \eta(x)\eta(x')\big] + \big[\min\{\eta(x),\eta(x')\} - \eta(x)\eta(x')\big]\left(1 - \frac{\min\{W\|x-x'\|_{p},\rho\}}{\rho}\right) - \mathcal{C}_{\widetilde{\mathsf{L}}_{\Phi_{\rho}},\mathcal{H}_{\mathrm{lin}}}^{*}(x,x') \\
&= |\eta(x)-\eta(x')|\,\frac{\min\{W\|x-x'\|_{p},\rho\}}{\rho} \\
&\geq |\eta(x)-\eta(x')|\,\frac{\min\{W\gamma,\rho\}}{\rho},
\end{aligned}
$$

which implies that for any $h\in\mathcal{H}_{\mathrm{lin}}$ and $(x,x')$ such that $\|x-x'\|_{p}>\gamma$,

$$
\Delta\mathcal{C}_{\widetilde{\mathsf{L}}_{\Phi_{\rho}},\mathcal{H}_{\mathrm{lin}}}(h,x,x') \geq \frac{\min\{W\gamma,\rho\}}{\rho}\,\Delta\mathcal{C}_{\widetilde{\mathsf{L}}_{0-1}^{\mathrm{abs}},\mathcal{H}}(h,x,x').
$$

Thus, by Theorem C.1 or Theorem C.2, setting $\epsilon = 0$ yields the $\mathcal{H}_{\mathrm{lin}}$-consistency bound for $\widetilde{\mathsf{L}}_{\Phi_{\rho}}$, valid for all $h\in\mathcal{H}_{\mathrm{lin}}$:

$$
\mathcal{R}_{\widetilde{\mathsf{L}}_{0-1}^{\mathrm{abs}}}(h) - \mathcal{R}_{\widetilde{\mathsf{L}}_{0-1}^{\mathrm{abs}}}^{*}(\mathcal{H}_{\mathrm{lin}}) \leq \frac{\rho\left(\mathcal{R}_{\widetilde{\mathsf{L}}_{\Phi_{\rho}}}(h) - \mathcal{R}_{\widetilde{\mathsf{L}}_{\Phi_{\rho}}}^{*}(\mathcal{H}_{\mathrm{lin}}) + \mathcal{M}_{\widetilde{\mathsf{L}}_{\Phi_{\rho}}}(\mathcal{H}_{\mathrm{lin}})\right)}{\min\{W\gamma,\rho\}} - \mathcal{M}_{\widetilde{\mathsf{L}}_{0-1}^{\mathrm{abs}}}(\mathcal{H}_{\mathrm{lin}}). \tag{67}
$$

# L.1.3. DERIVATION FOR $\widetilde{\mathsf{L}}_{\Phi_{\mathrm{exp}}}$

For the exponential loss function $\Phi_{\mathrm{exp}}(u)\coloneqq e^{-u}$, for all $h\in\mathcal{H}_{\mathrm{lin}}$ and $(x,x')$ such that $\|x-x'\|_{p}>\gamma$,

$$
\begin{array}{l}
\mathcal{C}_{\widetilde{\mathsf{L}}_{\Phi_{\mathrm{exp}}}}(h,x,x') \\
= \eta(x)\big(1-\eta(x')\big)\Phi_{\mathrm{exp}}\big(h(x)-h(x')\big) + \eta(x')\big(1-\eta(x)\big)\Phi_{\mathrm{exp}}\big(h(x')-h(x)\big) \\
= \eta(x)\big(1-\eta(x')\big)e^{-h(x)+h(x')} + \eta(x')\big(1-\eta(x)\big)e^{h(x)-h(x')}.
\\ \end{array}
$$

Then,

$$
\begin{aligned}
\mathcal{C}_{\widetilde{\mathsf{L}}_{\Phi_{\mathrm{exp}}},\mathcal{H}_{\mathrm{lin}}}^{*}(x,x')
&= \inf_{h\in\mathcal{H}_{\mathrm{lin}}}\mathcal{C}_{\widetilde{\mathsf{L}}_{\Phi_{\mathrm{exp}}}}(h,x,x') \\
&= \begin{cases}
2\sqrt{\eta(x)\eta(x')(1-\eta(x))(1-\eta(x'))} \\
\qquad \text{if } \frac{1}{2}\Big|\log\frac{\eta(x)(1-\eta(x'))}{\eta(x')(1-\eta(x))}\Big| \leq W\|x-x'\|_{p}, \\
\max\{\eta(x)(1-\eta(x')),\eta(x')(1-\eta(x))\}\,e^{-W\|x-x'\|_{p}} + \min\{\eta(x)(1-\eta(x')),\eta(x')(1-\eta(x))\}\,e^{W\|x-x'\|_{p}} \\
\qquad \text{if } \frac{1}{2}\Big|\log\frac{\eta(x)(1-\eta(x'))}{\eta(x')(1-\eta(x))}\Big| > W\|x-x'\|_{p}.
\end{cases}
\end{aligned}
$$

The $\big(\widetilde{\mathsf{L}}_{\Phi_{\mathrm{exp}}},\mathcal{H}_{\mathrm{lin}}\big)$-minimizability gap is

$$
\begin{aligned}
\mathcal{M}_{\widetilde{\mathsf{L}}_{\Phi_{\mathrm{exp}}}}\big(\mathcal{H}_{\mathrm{lin}}\big)
&= \mathcal{R}_{\widetilde{\mathsf{L}}_{\Phi_{\mathrm{exp}}}}^{*}\big(\mathcal{H}_{\mathrm{lin}}\big) - \mathbb{E}_{(X,X')}\Big[\mathcal{C}_{\widetilde{\mathsf{L}}_{\Phi_{\mathrm{exp}}},\mathcal{H}_{\mathrm{lin}}}^{*}(x,x')\Big] \\
&= \mathcal{R}_{\widetilde{\mathsf{L}}_{\Phi_{\mathrm{exp}}}}^{*}\big(\mathcal{H}_{\mathrm{lin}}\big) - \mathbb{E}_{(X,X')}\bigg[2\sqrt{\eta(x)\eta(x')(1-\eta(x))(1-\eta(x'))}\,\mathbb{1}_{\frac{1}{2}\big|\log\frac{\eta(x)(1-\eta(x'))}{\eta(x')(1-\eta(x))}\big| \leq W\|x-x'\|_{p}}\bigg] \\
&\quad - \mathbb{E}_{(X,X')}\bigg[\big[\max\{\eta(x),\eta(x')\} - \eta(x)\eta(x')\big]\,e^{-W\|x-x'\|_{p}}\,\mathbb{1}_{\frac{1}{2}\big|\log\frac{\eta(x)(1-\eta(x'))}{\eta(x')(1-\eta(x))}\big| > W\|x-x'\|_{p}}\bigg] \\
&\quad - \mathbb{E}_{(X,X')}\bigg[\big[\min\{\eta(x),\eta(x')\} - \eta(x)\eta(x')\big]\,e^{W\|x-x'\|_{p}}\,\mathbb{1}_{\frac{1}{2}\big|\log\frac{\eta(x)(1-\eta(x'))}{\eta(x')(1-\eta(x))}\big| > W\|x-x'\|_{p}}\bigg].
\end{aligned}
\tag{68}
$$

Therefore, $\forall h\in\widetilde{\mathcal{H}}_{\mathrm{lin}}(x,x')\cup\mathring{\mathcal{H}}_{\mathrm{lin}}(x,x')$,

$$
\begin{aligned}
\Delta\mathcal{C}_{\widetilde{\mathsf{L}}_{\Phi_{\mathrm{exp}}},\mathcal{H}_{\mathrm{lin}}}(h,x,x')
&\geq \inf_{h\in\widetilde{\mathcal{H}}_{\mathrm{lin}}(x,x')\cup\mathring{\mathcal{H}}_{\mathrm{lin}}(x,x')}\mathcal{C}_{\widetilde{\mathsf{L}}_{\Phi_{\mathrm{exp}}}}(h,x,x') - \mathcal{C}_{\widetilde{\mathsf{L}}_{\Phi_{\mathrm{exp}}},\mathcal{H}_{\mathrm{lin}}}^{*}(x,x') \\
&= \eta(x)\big(1-\eta(x')\big)e^{-0} + \eta(x')\big(1-\eta(x)\big)e^{0} - \mathcal{C}_{\widetilde{\mathsf{L}}_{\Phi_{\mathrm{exp}}},\mathcal{H}_{\mathrm{lin}}}^{*}(x,x') \\
&= \begin{cases}
\eta(x)(1-\eta(x')) + \eta(x')(1-\eta(x)) - 2\sqrt{\eta(x)\eta(x')(1-\eta(x))(1-\eta(x'))} \\
\qquad \text{if } \frac{1}{2}\Big|\log\frac{\eta(x)(1-\eta(x'))}{\eta(x')(1-\eta(x))}\Big| \leq W\|x-x'\|_{p}, \\
\big[\max\{\eta(x),\eta(x')\} - \eta(x)\eta(x')\big]\big(1-e^{-W\|x-x'\|_{p}}\big) + \big[\min\{\eta(x),\eta(x')\} - \eta(x)\eta(x')\big]\big(1-e^{W\|x-x'\|_{p}}\big) \\
\qquad \text{if } \frac{1}{2}\Big|\log\frac{\eta(x)(1-\eta(x'))}{\eta(x')(1-\eta(x))}\Big| > W\|x-x'\|_{p},
\end{cases} \\
&\geq \begin{cases}
\eta(x)(1-\eta(x')) + \eta(x')(1-\eta(x)) - 2\sqrt{\eta(x)\eta(x')(1-\eta(x))(1-\eta(x'))} \\
\qquad \text{if } \frac{1}{2}\Big|\log\frac{\eta(x)(1-\eta(x'))}{\eta(x')(1-\eta(x))}\Big| \leq W\gamma, \\
\big[\max\{\eta(x),\eta(x')\} - \eta(x)\eta(x')\big]\big(1-e^{-W\gamma}\big) + \big[\min\{\eta(x),\eta(x')\} - \eta(x)\eta(x')\big]\big(1-e^{W\gamma}\big) \\
\qquad \text{if } \frac{1}{2}\Big|\log\frac{\eta(x)(1-\eta(x'))}{\eta(x')(1-\eta(x))}\Big| > W\gamma,
\end{cases} \\
&= \begin{cases}
\left(\dfrac{\eta(x)(1-\eta(x')) - \eta(x')(1-\eta(x))}{\sqrt{\eta(x)(1-\eta(x'))} + \sqrt{\eta(x')(1-\eta(x))}}\right)^{2} & \text{if } \frac{1}{2}\Big|\log\frac{\eta(x)(1-\eta(x'))}{\eta(x')(1-\eta(x))}\Big| \leq W\gamma, \\
\dfrac{\eta(x)(1-\eta(x')) + \eta(x')(1-\eta(x))}{2}\big(2 - e^{-W\gamma} - e^{W\gamma}\big) + \dfrac{1}{2}|\eta(x)-\eta(x')|\big(e^{W\gamma} - e^{-W\gamma}\big) & \text{if } \frac{1}{2}\Big|\log\frac{\eta(x)(1-\eta(x'))}{\eta(x')(1-\eta(x))}\Big| > W\gamma,
\end{cases} \\
&\geq \min\left\{\big(\eta(x)-\eta(x')\big)^{2}, \left(\frac{e^{2W\gamma}+1}{e^{2W\gamma}-1}\right)|\eta(x)-\eta(x')|\right\},
\end{aligned}
$$

which implies that for any $h\in\mathcal{H}_{\mathrm{lin}}$ and $(x,x')$ such that $\|x-x'\|_{p}>\gamma$,

$$
\Delta\mathcal{C}_{\widetilde{\mathsf{L}}_{\Phi_{\mathrm{exp}}},\mathcal{H}_{\mathrm{lin}}}(h,x,x') \geq \Psi_{\mathrm{exp}}\Big(\Delta\mathcal{C}_{\widetilde{\mathsf{L}}_{0-1}^{\mathrm{abs}},\mathcal{H}}(h,x,x')\Big),
$$

where $\Psi_{\mathrm{exp}}$ is the increasing function on $[0,1]$ defined by

$$
\forall t\in[0,1], \quad \Psi_{\mathrm{exp}}(t) = \min\left\{t^{2}, \left(\frac{e^{2W\gamma}+1}{e^{2W\gamma}-1}\right)t\right\}.
$$

Thus, by Theorem C.1 or Theorem C.2, setting $\epsilon = 0$ yields the $\mathcal{H}_{\mathrm{lin}}$-consistency bound for $\widetilde{\mathsf{L}}_{\Phi_{\mathrm{exp}}}$, valid for all $h\in\mathcal{H}_{\mathrm{lin}}$:

$$
\mathcal{R}_{\widetilde{\mathsf{L}}_{0-1}^{\mathrm{abs}}}(h) - \mathcal{R}_{\widetilde{\mathsf{L}}_{0-1}^{\mathrm{abs}}}^{*}(\mathcal{H}_{\mathrm{lin}}) \leq \Gamma_{\mathrm{exp}}\Big(\mathcal{R}_{\widetilde{\mathsf{L}}_{\Phi_{\mathrm{exp}}}}(h) - \mathcal{R}_{\widetilde{\mathsf{L}}_{\Phi_{\mathrm{exp}}}}^{*}(\mathcal{H}_{\mathrm{lin}}) + \mathcal{M}_{\widetilde{\mathsf{L}}_{\Phi_{\mathrm{exp}}}}(\mathcal{H}_{\mathrm{lin}})\Big) - \mathcal{M}_{\widetilde{\mathsf{L}}_{0-1}^{\mathrm{abs}}}(\mathcal{H}_{\mathrm{lin}}), \tag{69}
$$

where $\Gamma_{\mathrm{exp}}(t) = \max\left\{\sqrt{t}, \left(\frac{e^{2W\gamma}-1}{e^{2W\gamma}+1}\right)t\right\}$.

# L.1.4. 
DERIVATION FOR $\widetilde{\mathsf{L}}_{\Phi_{\log}}$

For the logistic loss function $\Phi_{\log}(u) \coloneqq \log_{2}(1+e^{-u})$, for all $h\in\mathcal{H}_{\mathrm{lin}}$ and $(x,x')$ such that $\|x-x'\|_{p}>\gamma$,

$$
\begin{aligned}
\mathcal{C}_{\widetilde{\mathsf{L}}_{\Phi_{\log}}}(h,x,x')
&= \eta(x)\big(1-\eta(x')\big)\Phi_{\log}\big(h(x)-h(x')\big) + \eta(x')\big(1-\eta(x)\big)\Phi_{\log}\big(h(x')-h(x)\big) \\
&= \eta(x)\big(1-\eta(x')\big)\log_{2}\big(1+e^{-h(x)+h(x')}\big) + \eta(x')\big(1-\eta(x)\big)\log_{2}\big(1+e^{h(x)-h(x')}\big).
\end{aligned}
$$

Then,

$$
\begin{aligned}
\mathcal{C}_{\widetilde{\mathsf{L}}_{\Phi_{\log}},\mathcal{H}_{\mathrm{lin}}}^{*}(x,x')
&= \inf_{h\in\mathcal{H}_{\mathrm{lin}}}\mathcal{C}_{\widetilde{\mathsf{L}}_{\Phi_{\log}}}(h,x,x') \\
&= \begin{cases}
-\eta(x)\big(1-\eta(x')\big)\log_{2}\dfrac{\eta(x)(1-\eta(x'))}{\eta(x)(1-\eta(x'))+\eta(x')(1-\eta(x))} - \eta(x')\big(1-\eta(x)\big)\log_{2}\dfrac{\eta(x')(1-\eta(x))}{\eta(x)(1-\eta(x'))+\eta(x')(1-\eta(x))} \\
\qquad \text{if } \Big|\log\frac{\eta(x)(1-\eta(x'))}{\eta(x')(1-\eta(x))}\Big| \leq W\|x-x'\|_{p}, \\
\big[\max\{\eta(x),\eta(x')\}-\eta(x)\eta(x')\big]\log_{2}\big(1+e^{-W\|x-x'\|_{p}}\big) + \big[\min\{\eta(x),\eta(x')\}-\eta(x)\eta(x')\big]\log_{2}\big(1+e^{W\|x-x'\|_{p}}\big) \\
\qquad \text{if } \Big|\log\frac{\eta(x)(1-\eta(x'))}{\eta(x')(1-\eta(x))}\Big| > W\|x-x'\|_{p}.
\end{cases}
\end{aligned}
$$

The $\big(\widetilde{\mathsf{L}}_{\Phi_{\log}},\mathcal{H}_{\mathrm{lin}}\big)$-minimizability gap is

$$
\begin{aligned}
\mathcal{M}_{\widetilde{\mathsf{L}}_{\Phi_{\log}}}\big(\mathcal{H}_{\mathrm{lin}}\big)
&= \mathcal{R}_{\widetilde{\mathsf{L}}_{\Phi_{\log}}}^{*}\big(\mathcal{H}_{\mathrm{lin}}\big) - \mathbb{E}_{(X,X')}\Big[\mathcal{C}_{\widetilde{\mathsf{L}}_{\Phi_{\log}},\mathcal{H}_{\mathrm{lin}}}^{*}(x,x')\Big] \\
&= \mathcal{R}_{\widetilde{\mathsf{L}}_{\Phi_{\log}}}^{*}\big(\mathcal{H}_{\mathrm{lin}}\big) - \mathbb{E}_{(X,X')}\bigg[\Big(-\eta(x)\big(1-\eta(x')\big)\log_{2}\tfrac{\eta(x)(1-\eta(x'))}{\eta(x)(1-\eta(x'))+\eta(x')(1-\eta(x))} \\
&\qquad\qquad - \eta(x')\big(1-\eta(x)\big)\log_{2}\tfrac{\eta(x')(1-\eta(x))}{\eta(x)(1-\eta(x'))+\eta(x')(1-\eta(x))}\Big)\,\mathbb{1}_{\big|\log\frac{\eta(x)(1-\eta(x'))}{\eta(x')(1-\eta(x))}\big| \leq W\|x-x'\|_{p}}\bigg] \\
&\quad - \mathbb{E}_{(X,X')}\bigg[\big[\max\{\eta(x),\eta(x')\}-\eta(x)\eta(x')\big]\log_{2}\big(1+e^{-W\|x-x'\|_{p}}\big)\,\mathbb{1}_{\big|\log\frac{\eta(x)(1-\eta(x'))}{\eta(x')(1-\eta(x))}\big| > W\|x-x'\|_{p}}\bigg] \\
&\quad - \mathbb{E}_{(X,X')}\bigg[\big[\min\{\eta(x),\eta(x')\}-\eta(x)\eta(x')\big]\log_{2}\big(1+e^{W\|x-x'\|_{p}}\big)\,\mathbb{1}_{\big|\log\frac{\eta(x)(1-\eta(x'))}{\eta(x')(1-\eta(x))}\big| > W\|x-x'\|_{p}}\bigg].
\end{aligned}
\tag{70}
$$

Therefore, $\forall h\in\widetilde{\mathcal{H}}_{\mathrm{lin}}(x,x')\cup\mathring{\mathcal{H}}_{\mathrm{lin}}(x,x')$,

$$
\begin{aligned}
\Delta\mathcal{C}_{\widetilde{\mathsf{L}}_{\Phi_{\log}},\mathcal{H}_{\mathrm{lin}}}(h,x,x')
&\geq \inf_{h\in\widetilde{\mathcal{H}}_{\mathrm{lin}}(x,x')\cup\mathring{\mathcal{H}}_{\mathrm{lin}}(x,x')}\mathcal{C}_{\widetilde{\mathsf{L}}_{\Phi_{\log}}}(h,x,x') - \mathcal{C}_{\widetilde{\mathsf{L}}_{\Phi_{\log}},\mathcal{H}_{\mathrm{lin}}}^{*}(x,x') \\
&= \eta(x)\big(1-\eta(x')\big)\log_{2}\big(1+e^{-0}\big) + \eta(x')\big(1-\eta(x)\big)\log_{2}\big(1+e^{0}\big) - \mathcal{C}_{\widetilde{\mathsf{L}}_{\Phi_{\log}},\mathcal{H}_{\mathrm{lin}}}^{*}(x,x') \\
&\geq \begin{cases}
\eta(x)\big(1-\eta(x')\big)\Big[1-\log_{2}\tfrac{\eta(x)(1-\eta(x'))+\eta(x')(1-\eta(x))}{\eta(x)(1-\eta(x'))}\Big] + \eta(x')\big(1-\eta(x)\big)\Big[1-\log_{2}\tfrac{\eta(x)(1-\eta(x'))+\eta(x')(1-\eta(x))}{\eta(x')(1-\eta(x))}\Big] \\
\qquad \text{if } \Big|\log\frac{\eta(x)(1-\eta(x'))}{\eta(x')(1-\eta(x))}\Big| \leq W\gamma, \\
\big[\max\{\eta(x),\eta(x')\}-\eta(x)\eta(x')\big]\big(1-\log_{2}\big(1+e^{-W\gamma}\big)\big) + \big[\min\{\eta(x),\eta(x')\}-\eta(x)\eta(x')\big]\big(1-\log_{2}\big(1+e^{W\gamma}\big)\big) \\
\qquad \text{if } \Big|\log\frac{\eta(x)(1-\eta(x'))}{\eta(x')(1-\eta(x))}\Big| > W\gamma,
\end{cases} \\
&\geq \min\left\{\big(\eta(x)-\eta(x')\big)^{2}, \left(\frac{e^{W\gamma}+1}{e^{W\gamma}-1}\right)|\eta(x)-\eta(x')|\right\},
\end{aligned}
$$

which implies that for any $h\in\mathcal{H}_{\mathrm{lin}}$ and $(x,x')$ such that $\|x-x'\|_{p}>\gamma$,

$$
\Delta\mathcal{C}_{\widetilde{\mathsf{L}}_{\Phi_{\log}},\mathcal{H}_{\mathrm{lin}}}(h,x,x') \geq \Psi_{\log}\Big(\Delta\mathcal{C}_{\widetilde{\mathsf{L}}_{0-1}^{\mathrm{abs}},\mathcal{H}}(h,x,x')\Big),
$$

where $\Psi_{\log}$ is the increasing function on $[0,1]$ defined by

$$
\forall t\in[0,1], \quad \Psi_{\log}(t) = \min\left\{t^{2}, \left(\frac{e^{W\gamma}+1}{e^{W\gamma}-1}\right)t\right\}.
$$

Thus, by Theorem C.1 or Theorem C.2, setting $\epsilon = 0$ yields the $\mathcal{H}_{\mathrm{lin}}$-consistency bound for $\widetilde{\mathsf{L}}_{\Phi_{\log}}$, valid for all $h\in\mathcal{H}_{\mathrm{lin}}$:

$$
\mathcal{R}_{\widetilde{\mathsf{L}}_{0-1}^{\mathrm{abs}}}(h) - \mathcal{R}_{\widetilde{\mathsf{L}}_{0-1}^{\mathrm{abs}}}^{*}(\mathcal{H}_{\mathrm{lin}}) \leq \Gamma_{\log}\Big(\mathcal{R}_{\widetilde{\mathsf{L}}_{\Phi_{\log}}}(h) - \mathcal{R}_{\widetilde{\mathsf{L}}_{\Phi_{\log}}}^{*}(\mathcal{H}_{\mathrm{lin}}) + \mathcal{M}_{\widetilde{\mathsf{L}}_{\Phi_{\log}}}(\mathcal{H}_{\mathrm{lin}})\Big) - \mathcal{M}_{\widetilde{\mathsf{L}}_{0-1}^{\mathrm{abs}}}(\mathcal{H}_{\mathrm{lin}}), \tag{71}
$$

where $\Gamma_{\log}(t) = \max\left\{\sqrt{t}, \left(\frac{e^{W\gamma}-1}{e^{W\gamma}+1}\right)t\right\}$.

# L.1.5. 
DERIVATION FOR $\widetilde{\mathcal{L}}_{\Phi_{\mathrm{sq}}}$ + +For the squared hinge loss function $\Phi_{\mathrm{sq}}(u)\coloneqq (1 - u)^2\mathbb{1}_{u\leq 1}$ , for all $h\in \mathcal{H}_{\mathrm{lin}}$ and $(x,x^{\prime})$ such that $\| x - x^{\prime}\| _p > \gamma$ + +$$ +\begin{array}{l} \mathcal {C} _ {\widetilde {\mathrm {L}} _ {\Phi_ {\mathrm {s q}}}} \left(h, x, x ^ {\prime}\right) \\ = \eta (x) \left(1 - \eta \left(x ^ {\prime}\right)\right) \Phi_ {\mathrm {s q}} \left(h (x) - h \left(x ^ {\prime}\right)\right) + \eta \left(x ^ {\prime}\right) \left(1 - \eta (x)\right) \Phi_ {\mathrm {s q}} \left(h \left(x ^ {\prime}\right) - h (x)\right) \\ = \eta (x) (1 - \eta (x ^ {\prime})) (1 - h (x) + h (x ^ {\prime})) ^ {2} \mathbb {1} _ {h (x) - h (x ^ {\prime}) \leq 1} + \eta (x ^ {\prime}) (1 - \eta (x)) (1 + h (x) - h (x ^ {\prime})) ^ {2} \mathbb {1} _ {h (x) - h (x ^ {\prime}) \geq - 1}. \\ \end{array} +$$ + +Then, + +$$ +\begin{array}{l} \mathcal {C} _ {\widetilde {\mathsf {L}} _ {\Phi_ {\mathrm {s q}}}, \mathcal {H} _ {\mathrm {l i n}}} ^ {*} (x, x ^ {\prime}) \\ = \inf _ {h \in \mathcal {H} _ {\mathrm {l i n}}} \mathcal {C} _ {\widetilde {\mathrm {L}} _ {\Phi_ {\mathrm {s q}}}} \left(h, x, x ^ {\prime}\right) \\ = \left\{ \begin{array}{l} 4 \frac {\eta (x) \eta (x ^ {\prime}) (1 - \eta (x)) (1 - \eta (x ^ {\prime}))}{\eta (x) (1 - \eta (x ^ {\prime})) + \eta (x ^ {\prime}) (1 - \eta (x))} \\ \text {i f} \frac {| \eta (x) - \eta (x ^ {\prime}) |}{\eta (x) (1 - \eta (x ^ {\prime})) + \eta (x ^ {\prime}) (1 - \eta (x))} \leq W \| x - x ^ {\prime} \| _ {p} \\ [ \max \{\eta (x), \eta (x ^ {\prime}) \} - \eta (x) \eta (x ^ {\prime}) ] \big (1 - W \| x - x ^ {\prime} \| _ {p} \big) ^ {2} + [ \min \{\eta (x), \eta (x ^ {\prime}) \} - \eta (x) \eta (x ^ {\prime}) ] \big (1 + W \| x - x ^ {\prime} \| _ {p} \big) ^ {2} \\ \text {i f} \frac {| \eta (x) - \eta (x ^ {\prime}) |}{\eta (x) (1 - \eta (x ^ {\prime})) + \eta (x ^ {\prime}) (1 - \eta (x))} > W \| x - x ^ 
{\prime} \| _ {p}. \end{array} \right. \\ \end{array}
$$

The $(\widetilde{\mathsf{L}}_{\Phi_{\mathrm{sq}}},\mathcal{H}_{\mathrm{lin}})$ -minimizability gap is

$$
\begin{array}{l}
\mathcal{M}_{\widetilde{\mathsf{L}}_{\Phi_{\mathrm{sq}}}}\left(\mathcal{H}_{\mathrm{lin}}\right) \\
= \mathcal{R}_{\widetilde{\mathsf{L}}_{\Phi_{\mathrm{sq}}}}^{*}\left(\mathcal{H}_{\mathrm{lin}}\right) - \mathbb{E}_{(X,X^{\prime})}\left[\mathcal{C}_{\widetilde{\mathsf{L}}_{\Phi_{\mathrm{sq}}},\mathcal{H}_{\mathrm{lin}}}^{*}(x,x^{\prime})\right] \\
= \mathcal{R}_{\widetilde{\mathsf{L}}_{\Phi_{\mathrm{sq}}}}^{*}\left(\mathcal{H}_{\mathrm{lin}}\right) - \mathbb{E}_{(X,X^{\prime})}\left[4\frac{\eta(x)\eta(x^{\prime})(1-\eta(x))(1-\eta(x^{\prime}))}{\eta(x)(1-\eta(x^{\prime}))+\eta(x^{\prime})(1-\eta(x))}\,\mathbb{1}_{\frac{|\eta(x)-\eta(x^{\prime})|}{\eta(x)(1-\eta(x^{\prime}))+\eta(x^{\prime})(1-\eta(x))}\leq W\|x-x^{\prime}\|_{p}}\right] \tag{72} \\
- \mathbb{E}_{(X,X^{\prime})}\left[\left[\max\{\eta(x),\eta(x^{\prime})\}-\eta(x)\eta(x^{\prime})\right]\left(1-W\|x-x^{\prime}\|_{p}\right)^{2}\,\mathbb{1}_{\frac{|\eta(x)-\eta(x^{\prime})|}{\eta(x)(1-\eta(x^{\prime}))+\eta(x^{\prime})(1-\eta(x))}>W\|x-x^{\prime}\|_{p}}\right] \\
- \mathbb{E}_{(X,X^{\prime})}\left[\left[\min\{\eta(x),\eta(x^{\prime})\}-\eta(x)\eta(x^{\prime})\right]\left(1+W\|x-x^{\prime}\|_{p}\right)^{2}\,\mathbb{1}_{\frac{|\eta(x)-\eta(x^{\prime})|}{\eta(x)(1-\eta(x^{\prime}))+\eta(x^{\prime})(1-\eta(x))}>W\|x-x^{\prime}\|_{p}}\right].
\end{array}
$$

Therefore, $\forall h\in \widetilde{\mathcal{H}}_{\mathrm{lin}}(x,x^{\prime})\cup \mathring{\mathcal{H}}_{\mathrm{lin}}(x,x^{\prime})$,

$$
\begin{array}{l}
\Delta\mathcal{C}_{\widetilde{\mathsf{L}}_{\Phi_{\mathrm{sq}}},\mathcal{H}_{\mathrm{lin}}}(h,x,x^{\prime}) \\
\geq \inf_{h\in \widetilde{\mathcal{H}}_{\mathrm{lin}}(x,x^{\prime})\cup \mathring{\mathcal{H}}_{\mathrm{lin}}(x,x^{\prime})}\mathcal{C}_{\widetilde{\mathsf{L}}_{\Phi_{\mathrm{sq}}}}(h,x,x^{\prime}) - \mathcal{C}_{\widetilde{\mathsf{L}}_{\Phi_{\mathrm{sq}}},\mathcal{H}_{\mathrm{lin}}}^{*}(x,x^{\prime}) \\
= \eta(x)(1-\eta(x^{\prime})) + \eta(x^{\prime})(1-\eta(x)) - \mathcal{C}_{\widetilde{\mathsf{L}}_{\Phi_{\mathrm{sq}}},\mathcal{H}_{\mathrm{lin}}}^{*}(x,x^{\prime}) \\
\geq \left\{\begin{array}{l}
\eta(x)(1-\eta(x^{\prime})) + \eta(x^{\prime})(1-\eta(x)) - 4\frac{\eta(x)\eta(x^{\prime})(1-\eta(x))(1-\eta(x^{\prime}))}{\eta(x)(1-\eta(x^{\prime}))+\eta(x^{\prime})(1-\eta(x))} \\
\text{if } \frac{|\eta(x)-\eta(x^{\prime})|}{\eta(x)(1-\eta(x^{\prime}))+\eta(x^{\prime})(1-\eta(x))} \leq W\gamma \\
\left[\max\{\eta(x),\eta(x^{\prime})\}-\eta(x)\eta(x^{\prime})\right]\left[1-(1-W\gamma)^{2}\right] + \left[\min\{\eta(x),\eta(x^{\prime})\}-\eta(x)\eta(x^{\prime})\right]\left[1-(1+W\gamma)^{2}\right] \\
\text{if } \frac{|\eta(x)-\eta(x^{\prime})|}{\eta(x)(1-\eta(x^{\prime}))+\eta(x^{\prime})(1-\eta(x))} > W\gamma
\end{array}\right. \\
\geq \min\left\{\left(\eta(x)-\eta(x^{\prime})\right)^{2}, 2W\gamma|\eta(x)-\eta(x^{\prime})| - (W\gamma)^{2}\right\}
\end{array}
$$

which implies that for any $h \in \mathcal{H}_{\mathrm{lin}}$ and $(x, x')$ such that $\| x - x' \|_p > \gamma$,

$$
\Delta\mathcal{C}_{\widetilde{\mathsf{L}}_{\Phi_{\mathrm{sq}}},\mathcal{H}_{\mathrm{lin}}}(h,x,x^{\prime}) \geq \Psi_{\mathrm{sq}}\Big(\Delta\mathcal{C}_{\widetilde{\mathsf{L}}_{0-1}^{\mathrm{abs}},\mathcal{H}}(h,x,x^{\prime})\Big),
$$

where $\Psi_{\mathrm{sq}}$ is the increasing function defined for all $t \in [0,1]$ by $\Psi_{\mathrm{sq}}(t) = \min\left\{t^{2}, 2W\gamma t - (W\gamma)^{2}\right\}$.

Thus, by Theorem C.1 or Theorem C.2, setting $\epsilon = 0$ yields the $\mathcal{H}_{\mathrm{lin}}$ -consistency bound for $\widetilde{\mathsf{L}}_{\Phi_{\mathrm{sq}}}$ , valid for all $h \in \mathcal{H}_{\mathrm{lin}}$ :

$$
\mathcal{R}_{\widetilde{\mathsf{L}}_{0-1}^{\mathrm{abs}}}(h) - \mathcal{R}_{\widetilde{\mathsf{L}}_{0-1}^{\mathrm{abs}}}^{*}(\mathcal{H}_{\mathrm{lin}}) \leq \Gamma_{\mathrm{sq}}\left(\mathcal{R}_{\widetilde{\mathsf{L}}_{\Phi_{\mathrm{sq}}}}(h) - \mathcal{R}_{\widetilde{\mathsf{L}}_{\Phi_{\mathrm{sq}}}}^{*}(\mathcal{H}_{\mathrm{lin}}) + \mathcal{M}_{\widetilde{\mathsf{L}}_{\Phi_{\mathrm{sq}}}}(\mathcal{H}_{\mathrm{lin}})\right) - \mathcal{M}_{\widetilde{\mathsf{L}}_{0-1}^{\mathrm{abs}}}(\mathcal{H}_{\mathrm{lin}}), \tag{73}
$$

where $\Gamma_{\mathrm{sq}}(t) = \max\left\{\sqrt{t}, \frac{t}{2W\gamma} + \frac{W\gamma}{2}\right\}$ .

# L.1.6.
DERIVATION FOR $\widetilde{\mathsf{L}}_{\Phi_{\mathrm{sig}}}$ + +For the sigmoid loss function $\Phi_{\mathrm{sig}}(u)\coloneqq 1 - \tanh (ku),k > 0$ , for all $h\in \mathcal{H}_{\mathrm{lin}}$ and $(x,x^{\prime})$ such that $\| x - x^{\prime}\| _p > \gamma$ + +$$ +\begin{array}{l} \mathcal {C} _ {\widetilde {\mathrm {L}} _ {\Phi_ {\mathrm {s i g}}}} \left(h, x, x ^ {\prime}\right) \\ = \eta (x) \left(1 - \eta \left(x ^ {\prime}\right)\right) \Phi_ {\mathrm {s i g}} \left(h (x) - h \left(x ^ {\prime}\right)\right) + \eta \left(x ^ {\prime}\right) \left(1 - \eta (x)\right) \Phi_ {\mathrm {s i g}} \left(h \left(x ^ {\prime}\right) - h (x)\right) \\ = \eta (x) \left(1 - \eta \left(x ^ {\prime}\right)\right) \left(1 - \tanh \left(k [ h (x) - h \left(x ^ {\prime}\right) ]\right)\right) + \eta \left(x ^ {\prime}\right) \left(1 - \eta (x)\right) \left(1 + \tanh \left(k [ h (x) - h \left(x ^ {\prime}\right) ]\right)\right) \\ \end{array} +$$ + +Then, + +$$ +\mathcal{C}_{\widetilde{\mathsf{L}}_{\Phi_{\text{sig}}},\mathcal{H}_{\text{lin}}}^{*}\big(x,x^{\prime}\big) = \inf_{h\in \mathcal{H}_{\text{lin}}}\mathcal{C}_{\widetilde{\mathsf{L}}_{\Phi_{\text{sig}}}}\big(h,x,x^{\prime}\big)\\ = \eta (x)\big(1 - \eta (x^{\prime})\big) + \eta (x^{\prime})\big(1 - \eta (x)\big) - |\eta (x) - \eta (x^{\prime})|\tanh \Big(kW\| x - x^{\prime}\|_{p}\Big). +$$ + +The $(\widetilde{\mathsf{L}}_{\Phi_{\mathrm{sig}}},\mathcal{H}_{\mathrm{lin}})$ -minimizability gap is + +$$ +\mathcal {M} _ {\widetilde {\mathrm {L}} _ {\Phi_ {\mathrm {s i g}}}} \left(\mathcal {H} _ {\mathrm {l i n}}\right) = \mathcal {R} _ {\widetilde {\mathrm {L}} _ {\Phi_ {\mathrm {s i g}}}} ^ {*} \left(\mathcal {H} _ {\mathrm {l i n}}\right) - \mathbb {E} _ {(X, X ^ {\prime})} \left[ \eta (x) \left(1 - \eta \left(x ^ {\prime}\right)\right) + \eta \left(x ^ {\prime}\right) \left(1 - \eta (x)\right) - | \eta (x) - \eta \left(x ^ {\prime}\right) | \tanh \left(k W \| x - x ^ {\prime} \| _ {p}\right) \right]. 
\tag {74} +$$ + +Therefore, $\forall h\in \widetilde{\mathcal{H}}_{\mathrm{lin}}(x,x^{\prime})\cup \mathring{\mathcal{H}}_{\mathrm{lin}}(x,x^{\prime})$ + +$$ +\begin{array}{l} \Delta \mathcal {C} _ {\widetilde {\mathcal {L}} _ {\Phi_ {\mathrm {s i g}}}, \mathcal {H} _ {\mathrm {l i n}}} \left(h, x, x ^ {\prime}\right) \\ \geq \inf _ {h \in \widetilde {\mathcal {H}} _ {\mathrm {l i n}} (x, x ^ {\prime}) \cup \widehat {\mathcal {H}} _ {\mathrm {l i n}} (x, x ^ {\prime})} \mathcal {C} _ {\widetilde {\mathsf {L}} _ {\Phi_ {\mathrm {s i g}}}} \left(h, x, x ^ {\prime}\right) - \mathcal {C} _ {\widetilde {\mathsf {L}} _ {\Phi_ {\mathrm {s i g}}}, \mathcal {H} _ {\mathrm {l i n}}} ^ {*} \left(x, x ^ {\prime}\right) \\ = \eta (x) (1 - \eta (x ^ {\prime})) + \eta (x ^ {\prime}) (1 - \eta (x)) - \mathcal {C} _ {\widetilde {\mathsf {L}} _ {\Phi_ {\mathrm {s i g}}}, \mathcal {H} _ {\mathrm {l i n}}} ^ {*} (x, x ^ {\prime}) \\ = \left| \eta (x) - \eta \left(x ^ {\prime}\right) \right| \tanh \left(k W \| x - x ^ {\prime} \| _ {p}\right) \\ \geq \left| \eta (x) - \eta \left(x ^ {\prime}\right) \right| \tanh (k W \gamma) \\ \end{array} +$$ + +which implies that for any $h \in \mathcal{H}_{\mathrm{lin}}$ and $(x, x')$ such that $\| x - x' \|_p > \gamma$ , + +$$ +\Delta \mathcal {C} _ {\widetilde {\mathsf {L}} _ {\Phi_ {\mathrm {s i g}}}, \mathcal {H} _ {\mathrm {l i n}}} (h, x, x ^ {\prime}) \geq \tanh (k W \gamma) \Delta \mathcal {C} _ {\widetilde {\mathsf {L}} _ {0 - 1} ^ {\mathrm {a b s}}, \mathcal {H}} (h, x, x ^ {\prime}). 
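$$

As a quick numerical sanity check of the closed-form infimum above (illustrative only, not part of the derivation: the values of $k$, $W$, $d = \|x - x'\|_p$ and $\eta$ below are arbitrary choices), one can brute-force the minimum of the sigmoid conditional risk over a fine grid of $u = h(x) - h(x')$, with $u$ ranging over $[-Wd, Wd]$ as the $\tanh(kW\|x-x'\|_p)$ term in the closed form reflects:

```python
import numpy as np

# Brute-force the infimum of the sigmoid conditional risk
#   C(u) = A * (1 - tanh(k*u)) + B * (1 + tanh(k*u)),   u = h(x) - h(x'),
# with A = eta(x)(1 - eta(x')), B = eta(x')(1 - eta(x)), u in [-W*d, W*d],
# and compare it with the closed form A + B - |eta(x) - eta(x')| * tanh(k*W*d).
# All constants below are illustrative choices.
k, W, d = 2.0, 1.5, 0.7
for eta_x, eta_xp in [(0.9, 0.3), (0.2, 0.6), (0.5, 0.5)]:
    A = eta_x * (1 - eta_xp)
    B = eta_xp * (1 - eta_x)
    u = np.linspace(-W * d, W * d, 200001)
    brute = np.min(A * (1 - np.tanh(k * u)) + B * (1 + np.tanh(k * u)))
    closed = A + B - abs(eta_x - eta_xp) * np.tanh(k * W * d)
    assert abs(brute - closed) < 1e-6, (brute, closed)
```

In each case the grid minimum, attained at an endpoint of the interval, matches the closed-form expression up to grid resolution.

$$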
$$

Thus, by Theorem C.1 or Theorem C.2, setting $\epsilon = 0$ yields the $\mathcal{H}_{\mathrm{lin}}$ -consistency bound for $\widetilde{\mathsf{L}}_{\Phi_{\mathrm{sig}}}$ , valid for all $h \in \mathcal{H}_{\mathrm{lin}}$ :

$$
\mathcal{R}_{\widetilde{\mathsf{L}}_{0-1}^{\mathrm{abs}}}(h) - \mathcal{R}_{\widetilde{\mathsf{L}}_{0-1}^{\mathrm{abs}}}^{*}(\mathcal{H}_{\mathrm{lin}}) \leq \frac{\mathcal{R}_{\widetilde{\mathsf{L}}_{\Phi_{\mathrm{sig}}}}(h) - \mathcal{R}_{\widetilde{\mathsf{L}}_{\Phi_{\mathrm{sig}}}}^{*}(\mathcal{H}_{\mathrm{lin}}) + \mathcal{M}_{\widetilde{\mathsf{L}}_{\Phi_{\mathrm{sig}}}}(\mathcal{H}_{\mathrm{lin}})}{\tanh(kW\gamma)} - \mathcal{M}_{\widetilde{\mathsf{L}}_{0-1}^{\mathrm{abs}}}(\mathcal{H}_{\mathrm{lin}}). \tag{75}
$$

# L.2. One-Hidden-Layer ReLU Neural Networks

Since $\mathcal{H}_{\mathrm{NN}}$ satisfies the condition of Lemma L.1, by Lemma L.1 the $(\widetilde{\mathsf{L}}_{0 - 1}^{\mathrm{abs}},\mathcal{H}_{\mathrm{NN}})$ -minimizability gap can be expressed as follows:

$$
\mathcal{M}_{\widetilde{\mathsf{L}}_{0-1}^{\mathrm{abs}}}\left(\mathcal{H}_{\mathrm{NN}}\right) = \mathcal{R}_{\widetilde{\mathsf{L}}_{0-1}^{\mathrm{abs}}}^{*}\left(\mathcal{H}_{\mathrm{NN}}\right) - \mathbb{E}_{(X,X^{\prime})}\left[\min\left\{\eta(x)\left(1-\eta(x^{\prime})\right), \eta(x^{\prime})\left(1-\eta(x)\right)\right\}\mathbb{1}_{\|x-x^{\prime}\|_{p}>\gamma} + c\,\mathbb{1}_{\|x-x^{\prime}\|_{p}\leq\gamma}\right]. \tag{76}
$$

By the definition of $\mathcal{H}_{\mathrm{NN}}$ , for any $(x,x^{\prime})\in \mathcal{X}\times \mathcal{X}$ , $\left\{h(x^{\prime}) - h(x)\mid h\in \mathcal{H}_{\mathrm{NN}}\right\} = \left[-\Lambda W\| x - x^{\prime}\|_{p},\Lambda W\| x - x^{\prime}\|_{p}\right]$ .

# L.2.1.
DERIVATION FOR $\widetilde{\mathsf{L}}_{\Phi_{\mathrm{hinge}}}$ + +For the hinge loss function $\Phi_{\mathrm{hinge}}(u)\coloneqq \max \{0,1 - u\}$ , for all $h\in \mathcal{H}_{\mathrm{NN}}$ and $(x,x^{\prime})$ such that $\| x - x^{\prime}\| _p > \gamma$ + +$$ +\begin{array}{l} \mathcal {C} _ {\widetilde {\mathrm {L}} _ {\Phi_ {\mathrm {h i n g e}}}} (h, x, x ^ {\prime}) \\ = \eta (x) \left(1 - \eta \left(x ^ {\prime}\right)\right) \Phi_ {\text {h i n g e}} \left(h (x) - h \left(x ^ {\prime}\right)\right) + \eta \left(x ^ {\prime}\right) \left(1 - \eta (x)\right) \Phi_ {\text {h i n g e}} \left(h \left(x ^ {\prime}\right) - h (x)\right) \\ = \eta (x) \left(1 - \eta \left(x ^ {\prime}\right)\right) \max \left\{0, 1 - h (x) + h \left(x ^ {\prime}\right) \right\} + \eta \left(x ^ {\prime}\right) \left(1 - \eta (x)\right) \max \left\{0, 1 + h (x) - h \left(x ^ {\prime}\right) \right\}. \\ \end{array} +$$ + +Then, + +$$ +\mathcal {C} _ {\mathtt {L} _ {\Phi_ {\mathrm {h i n g e}}}, \mathcal {H} _ {\mathrm {N N}}} ^ {*} (x, x ^ {\prime}) = \eta (x) (1 - \eta (x ^ {\prime})) + \eta (x ^ {\prime}) (1 - \eta (x)) - | \eta (x) - \eta (x ^ {\prime}) | \min \bigl \{\Lambda W \| x - x ^ {\prime} \| _ {p}, 1 \bigr \}. 
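$$

As a quick numerical sanity check of this closed form (illustrative only, not part of the derivation: the values of $\Lambda$, $W$, $d = \|x - x'\|_p$ and $\eta$ below are arbitrary choices), one can brute-force the minimum of the hinge conditional risk over a fine grid of $u = h(x) - h(x')$ in the interval $[-\Lambda W d, \Lambda W d]$ stated above for $\mathcal{H}_{\mathrm{NN}}$:

```python
import numpy as np

# Brute-force the infimum of the hinge conditional risk over H_NN, where
#   C(u) = A * max(0, 1 - u) + B * max(0, 1 + u),   u = h(x) - h(x'),
# with A = eta(x)(1 - eta(x')), B = eta(x')(1 - eta(x)), u in [-Lam*W*d, Lam*W*d],
# and compare it with A + B - |eta(x) - eta(x')| * min(Lam*W*d, 1).
# All constants below are illustrative choices.
Lam, W = 2.0, 1.5
for d in [0.2, 0.7]:
    for eta_x, eta_xp in [(0.8, 0.4), (0.1, 0.9)]:
        A = eta_x * (1 - eta_xp)
        B = eta_xp * (1 - eta_x)
        s = Lam * W * d
        u = np.linspace(-s, s, 400001)
        brute = np.min(A * np.maximum(0.0, 1 - u) + B * np.maximum(0.0, 1 + u))
        closed = A + B - abs(eta_x - eta_xp) * min(s, 1.0)
        assert abs(brute - closed) < 1e-4, (brute, closed)
```

The two regimes of the $\min\{\Lambda W d, 1\}$ factor correspond to the minimizer sitting at the boundary $u = \pm\Lambda W d$ when $\Lambda W d \leq 1$, and at $u = \pm 1$ otherwise.

$$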
$$

The $(\widetilde{\mathsf{L}}_{\Phi_{\mathrm{hinge}}},\mathcal{H}_{\mathrm{NN}})$ -minimizability gap is

$$
\begin{array}{l}
\mathcal{M}_{\widetilde{\mathsf{L}}_{\Phi_{\mathrm{hinge}}}}\left(\mathcal{H}_{\mathrm{NN}}\right) \\
= \mathcal{R}_{\widetilde{\mathsf{L}}_{\Phi_{\mathrm{hinge}}}}^{*}\left(\mathcal{H}_{\mathrm{NN}}\right) - \mathbb{E}_{(X,X^{\prime})}\left[\mathcal{C}_{\widetilde{\mathsf{L}}_{\Phi_{\mathrm{hinge}}},\mathcal{H}_{\mathrm{NN}}}^{*}(x,x^{\prime})\right] \tag{77} \\
= \mathcal{R}_{\widetilde{\mathsf{L}}_{\Phi_{\mathrm{hinge}}}}^{*}\left(\mathcal{H}_{\mathrm{NN}}\right) - \mathbb{E}_{(X,X^{\prime})}\big[\eta(x)(1-\eta(x^{\prime})) + \eta(x^{\prime})(1-\eta(x)) - |\eta(x)-\eta(x^{\prime})|\min\big\{\Lambda W\|x-x^{\prime}\|_{p}, 1\big\}\big].
\end{array}
$$

Therefore, $\forall h\in \widetilde{\mathcal{H}}_{\mathrm{NN}}(x,x^{\prime})\cup \mathring{\mathcal{H}}_{\mathrm{NN}}(x,x^{\prime})$,

$$
\begin{array}{l}
\Delta\mathcal{C}_{\widetilde{\mathsf{L}}_{\Phi_{\mathrm{hinge}}},\mathcal{H}_{\mathrm{NN}}}(h,x,x^{\prime}) \\
\geq \inf_{h\in \widetilde{\mathcal{H}}_{\mathrm{NN}}(x,x^{\prime})\cup \mathring{\mathcal{H}}_{\mathrm{NN}}(x,x^{\prime})}\mathcal{C}_{\widetilde{\mathsf{L}}_{\Phi_{\mathrm{hinge}}}}(h,x,x^{\prime}) - \mathcal{C}_{\widetilde{\mathsf{L}}_{\Phi_{\mathrm{hinge}}},\mathcal{H}_{\mathrm{NN}}}^{*}(x,x^{\prime}) \\
= \eta(x)(1-\eta(x^{\prime}))\max\{0, 1-0\} + \eta(x^{\prime})(1-\eta(x))\max\{0, 1+0\} - \mathcal{C}_{\widetilde{\mathsf{L}}_{\Phi_{\mathrm{hinge}}},\mathcal{H}_{\mathrm{NN}}}^{*}(x,x^{\prime}) \\
= |\eta(x)-\eta(x^{\prime})|\min\left\{\Lambda W\|x-x^{\prime}\|_{p}, 1\right\} \\
\geq |\eta(x)-\eta(x^{\prime})|\min\left\{\Lambda W\gamma, 1\right\}
\end{array}
$$

which implies that for any $h \in \mathcal{H}_{\mathrm{NN}}$ and $(x, x')$ such that $\| x - x' \|_p > \gamma$ ,

$$
\Delta\mathcal{C}_{\widetilde{\mathsf{L}}_{\Phi_{\mathrm{hinge}}},\mathcal{H}_{\mathrm{NN}}}(h,x,x^{\prime}) \geq \min\{\Lambda W\gamma, 1\}\,\Delta\mathcal{C}_{\widetilde{\mathsf{L}}_{0-1}^{\mathrm{abs}},\mathcal{H}}(h,x,x^{\prime}).
$$

Thus, by Theorem C.1 or Theorem C.2, setting $\epsilon = 0$ yields the $\mathcal{H}_{\mathrm{NN}}$ -consistency bound for $\widetilde{\mathsf{L}}_{\Phi_{\mathrm{hinge}}}$ , valid for all $h \in \mathcal{H}_{\mathrm{NN}}$ :

$$
\mathcal{R}_{\widetilde{\mathsf{L}}_{0-1}^{\mathrm{abs}}}(h) - \mathcal{R}_{\widetilde{\mathsf{L}}_{0-1}^{\mathrm{abs}}}^{*}(\mathcal{H}_{\mathrm{NN}}) \leq \frac{\mathcal{R}_{\widetilde{\mathsf{L}}_{\Phi_{\mathrm{hinge}}}}(h) - \mathcal{R}_{\widetilde{\mathsf{L}}_{\Phi_{\mathrm{hinge}}}}^{*}(\mathcal{H}_{\mathrm{NN}}) + \mathcal{M}_{\widetilde{\mathsf{L}}_{\Phi_{\mathrm{hinge}}}}(\mathcal{H}_{\mathrm{NN}})}{\min\left\{\Lambda W\gamma, 1\right\}} - \mathcal{M}_{\widetilde{\mathsf{L}}_{0-1}^{\mathrm{abs}}}(\mathcal{H}_{\mathrm{NN}}). \tag{78}
$$

# L.2.2.
DERIVATION FOR $\widetilde{\mathsf{L}}_{\Phi_{\rho}}$ + +For the $\rho$ -margin loss function $\Phi_{\rho}(u) := \min \left\{1, \max \left\{0, 1 - \frac{u}{\rho}\right\} \right\}$ , $\rho > 0$ , for all $h \in \mathcal{H}_{\mathrm{NN}}$ and $(x, x')$ such that $\| x - x' \|_p > \gamma$ , + +$$ +\begin{array}{l} \mathcal {C} _ {\widetilde {\mathrm {L}} _ {\Phi_ {\rho}}} \left(h, x, x ^ {\prime}\right) \\ = \eta (x) \left(1 - \eta \left(x ^ {\prime}\right)\right) \Phi_ {\rho} \left(h (x) - h \left(x ^ {\prime}\right)\right) + \eta \left(x ^ {\prime}\right) \left(1 - \eta (x)\right) \Phi_ {\rho} \left(h \left(x ^ {\prime}\right) - h (x)\right) \\ = \eta (x) \left(1 - \eta \left(x ^ {\prime}\right)\right) \min \left\{1, \max \left\{0, 1 - \frac {h (x) - h \left(x ^ {\prime}\right)}{\rho} \right\} \right\} \\ + \eta \left(x ^ {\prime}\right) \left(1 - \eta (x)\right) \min \left\{1, \max \left\{0, 1 + \frac {h (x) - h \left(x ^ {\prime}\right)}{\rho} \right\} \right\} \\ \end{array} +$$ + +Then, + +$$ +\begin{array}{l} \mathcal {C} _ {\mathbb {L} _ {\Phi_ {\rho}}, \mathcal {H} _ {\mathrm {N N}}} ^ {*} \left(x, x ^ {\prime}\right) \\ = \inf _ {h \in \mathcal {H} _ {\mathrm {N N}}} \mathcal {C} _ {\widetilde {L} _ {\Phi_ {\rho}}} \left(h, x, x ^ {\prime}\right) \\ = \min \{\eta (x) (1 - \eta (x ^ {\prime})), \eta (x ^ {\prime}) (1 - \eta (x)) \} + \max \{\eta (x) (1 - \eta (x ^ {\prime})), \eta (x ^ {\prime}) (1 - \eta (x)) \} \left(1 - \frac {\min \left\{\Lambda W \| x - x ^ {\prime} \| _ {p} , \rho \right\}}{\rho}\right). 
\\ \end{array} +$$ + +The $(\widetilde{\mathsf{L}}_{\Phi_\rho},\mathcal{H}_{\mathrm{NN}})$ -minimizability gap is + +$$ +\begin{array}{l} \mathcal {M} _ {\widetilde {\mathsf {L}} _ {\Phi_ {\rho}}} \left(\mathcal {H} _ {\mathrm {N N}}\right) \\ = \mathcal {R} _ {\widetilde {\mathsf {L}} _ {\Phi_ {\rho}}} ^ {*} \left(\mathcal {H} _ {\mathrm {N N}}\right) - \mathbb {E} _ {(X, X ^ {\prime})} \left[ \mathcal {C} _ {\widetilde {\mathsf {L}} _ {\Phi_ {\rho}}, \mathcal {H} _ {\mathrm {N N}}} ^ {*} (x, x ^ {\prime}) \right]. \\ = \mathcal {R} _ {\widetilde {L} _ {\Phi_ {\rho}}} ^ {*} \left(\mathcal {H} _ {\mathrm {N N}}\right) \tag {79} \\ - \mathbb {E} _ {(X, X ^ {\prime})} \left[ \left[ \min \{\eta (x), \eta \left(x ^ {\prime}\right) \} - \eta (x) \eta \left(x ^ {\prime}\right) \right] + \left[ \max \{\eta (x), \eta \left(x ^ {\prime}\right) \} - \eta (x) \eta \left(x ^ {\prime}\right) \right] \left(1 - \frac {\min \left\{\Lambda W \| x - x ^ {\prime} \| _ {p} , \rho \right\}}{\rho}\right) \right]. 
\\ \end{array}
$$

Therefore, $\forall h\in \widetilde{\mathcal{H}}_{\mathrm{NN}}(x,x^{\prime})\cup \mathring{\mathcal{H}}_{\mathrm{NN}}(x,x^{\prime})$,

$$
\begin{array}{l}
\Delta\mathcal{C}_{\widetilde{\mathsf{L}}_{\Phi_{\rho}},\mathcal{H}_{\mathrm{NN}}}(h,x,x^{\prime}) \\
\geq \inf_{h\in \widetilde{\mathcal{H}}_{\mathrm{NN}}(x,x^{\prime})\cup \mathring{\mathcal{H}}_{\mathrm{NN}}(x,x^{\prime})}\mathcal{C}_{\widetilde{\mathsf{L}}_{\Phi_{\rho}}}(h,x,x^{\prime}) - \mathcal{C}_{\widetilde{\mathsf{L}}_{\Phi_{\rho}},\mathcal{H}_{\mathrm{NN}}}^{*}(x,x^{\prime}) \\
= \left[\max\{\eta(x),\eta(x^{\prime})\}-\eta(x)\eta(x^{\prime})\right] + \left[\min\{\eta(x),\eta(x^{\prime})\}-\eta(x)\eta(x^{\prime})\right]\left(1-\frac{\min\left\{\Lambda W\|x-x^{\prime}\|_{p},\rho\right\}}{\rho}\right) - \mathcal{C}_{\widetilde{\mathsf{L}}_{\Phi_{\rho}},\mathcal{H}_{\mathrm{NN}}}^{*}(x,x^{\prime}) \\
= |\eta(x)-\eta(x^{\prime})|\,\frac{\min\left\{\Lambda W\|x-x^{\prime}\|_{p},\rho\right\}}{\rho} \\
\geq |\eta(x)-\eta(x^{\prime})|\,\frac{\min\left\{\Lambda W\gamma,\rho\right\}}{\rho}
\end{array}
$$

which implies that for any $h \in \mathcal{H}_{\mathrm{NN}}$ and $(x, x')$ such that $\| x - x' \|_p > \gamma$ ,

$$
\Delta\mathcal{C}_{\widetilde{\mathsf{L}}_{\Phi_{\rho}},\mathcal{H}_{\mathrm{NN}}}(h,x,x^{\prime}) \geq \frac{\min\{\Lambda W\gamma,\rho\}}{\rho}\,\Delta\mathcal{C}_{\widetilde{\mathsf{L}}_{0-1}^{\mathrm{abs}},\mathcal{H}}(h,x,x^{\prime}).
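$$

As a quick numerical sanity check of the $\rho$-margin closed form (illustrative only, not part of the derivation: the values of $\Lambda$, $W$, $\rho$, $d = \|x - x'\|_p$ and $\eta$ below are arbitrary choices), one can brute-force the minimum of the conditional risk over a fine grid of $u = h(x) - h(x')$ in $[-\Lambda W d, \Lambda W d]$:

```python
import numpy as np

# Brute-force the infimum of the rho-margin conditional risk over H_NN:
#   C(u) = A * phi_rho(u) + B * phi_rho(-u),   u = h(x) - h(x'),
# with A = eta(x)(1 - eta(x')), B = eta(x')(1 - eta(x)), u in [-Lam*W*d, Lam*W*d],
# and compare it with min(A, B) + max(A, B) * (1 - min(Lam*W*d, rho) / rho).
# All constants below are illustrative choices.
def phi_rho(u, rho):
    return np.minimum(1.0, np.maximum(0.0, 1.0 - u / rho))

Lam, W, rho = 2.0, 1.5, 0.8
for d in [0.1, 0.5]:
    for eta_x, eta_xp in [(0.7, 0.2), (0.3, 0.6)]:
        A = eta_x * (1 - eta_xp)
        B = eta_xp * (1 - eta_x)
        s = Lam * W * d
        u = np.linspace(-s, s, 400001)
        brute = np.min(A * phi_rho(u, rho) + B * phi_rho(-u, rho))
        closed = min(A, B) + max(A, B) * (1 - min(s, rho) / rho)
        assert abs(brute - closed) < 1e-4, (brute, closed)
```

When $\Lambda W d \geq \rho$ the larger of the two terms is driven to zero and the infimum reduces to $\min\{A, B\}$, matching the first regime of the piecewise formula.

$$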
$$

Thus, by Theorem C.1 or Theorem C.2, setting $\epsilon = 0$ yields the $\mathcal{H}_{\mathrm{NN}}$ -consistency bound for $\widetilde{\mathsf{L}}_{\Phi_{\rho}}$ , valid for all $h \in \mathcal{H}_{\mathrm{NN}}$ :

$$
\mathcal{R}_{\widetilde{\mathsf{L}}_{0-1}^{\mathrm{abs}}}(h) - \mathcal{R}_{\widetilde{\mathsf{L}}_{0-1}^{\mathrm{abs}}}^{*}(\mathcal{H}_{\mathrm{NN}}) \leq \frac{\rho\left(\mathcal{R}_{\widetilde{\mathsf{L}}_{\Phi_{\rho}}}(h) - \mathcal{R}_{\widetilde{\mathsf{L}}_{\Phi_{\rho}}}^{*}(\mathcal{H}_{\mathrm{NN}}) + \mathcal{M}_{\widetilde{\mathsf{L}}_{\Phi_{\rho}}}(\mathcal{H}_{\mathrm{NN}})\right)}{\min\left\{\Lambda W\gamma,\rho\right\}} - \mathcal{M}_{\widetilde{\mathsf{L}}_{0-1}^{\mathrm{abs}}}(\mathcal{H}_{\mathrm{NN}}). \tag{80}
$$

# L.2.3. DERIVATION FOR $\widetilde{\mathsf{L}}_{\Phi_{\mathrm{exp}}}$

For the exponential loss function $\Phi_{\mathrm{exp}}(u)\coloneqq e^{-u}$ , for all $h\in \mathcal{H}_{\mathrm{NN}}$ and $(x,x^{\prime})$ such that $\| x - x^{\prime}\| _p > \gamma$ ,

$$
\begin{array}{l}
\mathcal{C}_{\widetilde{\mathsf{L}}_{\Phi_{\mathrm{exp}}}}(h,x,x^{\prime}) \\
= \eta(x)\left(1-\eta(x^{\prime})\right)\Phi_{\mathrm{exp}}\left(h(x)-h(x^{\prime})\right) + \eta(x^{\prime})\left(1-\eta(x)\right)\Phi_{\mathrm{exp}}\left(h(x^{\prime})-h(x)\right) \\
= \eta(x)\left(1-\eta(x^{\prime})\right)e^{-h(x)+h(x^{\prime})} + \eta(x^{\prime})\left(1-\eta(x)\right)e^{h(x)-h(x^{\prime})}.
\\ \end{array} +$$ + +Then, + +$$ +\begin{array}{l} \mathcal {C} _ {\widetilde {\mathsf {L}} _ {\Phi_ {\mathrm {e x p}}}, \mathcal {H} _ {\mathrm {N N}}} ^ {*} (x, x ^ {\prime}) \\ = \inf _ {h \in \mathcal {H} _ {\mathrm {N N}}} \mathcal {C} _ {\widetilde {\mathrm {L}} _ {\Phi_ {\exp}}} \left(h, x, x ^ {\prime}\right) \\ = \left\{ \begin{array}{l} 2 \sqrt {\eta (x) \eta (x ^ {\prime}) (1 - \eta (x)) (1 - \eta (x ^ {\prime}))} \\ \text {i f} \frac {1}{2} \Big | \log \frac {\eta (x) (1 - \eta (x ^ {\prime}))}{\eta (x ^ {\prime}) (1 - \eta (x))} \Big | \leq \Lambda W \| x - x ^ {\prime} \| _ {p} \\ \max \{\eta (x) (1 - \eta (x ^ {\prime})), \eta (x ^ {\prime}) (1 - \eta (x)) \} e ^ {- \Lambda W \| x - x ^ {\prime} \| _ {p}} + \min \{\eta (x) (1 - \eta (x ^ {\prime})), \eta (x ^ {\prime}) (1 - \eta (x)) \} e ^ {\Lambda W \| x - x ^ {\prime} \| _ {p}} \\ \text {i f} \frac {1}{2} \Big | \log \frac {\eta (x) (1 - \eta (x ^ {\prime}))}{\eta (x ^ {\prime}) (1 - \eta (x))} \Big | > \Lambda W \| x - x ^ {\prime} \| _ {p}. \end{array} \right. 
\\ \end{array} +$$ + +The $\left(\widetilde{\mathsf{L}}_{\Phi_{\mathrm{exp}}},\mathcal{H}_{\mathrm{NN}}\right)$ -minimizability gap is: + +$$ +\begin{array}{l} \mathcal {M} _ {\widetilde {\mathsf {L}} _ {\Phi_ {\mathrm {e x p}}}} \left(\mathcal {H} _ {\mathrm {N N}}\right) = \mathcal {R} _ {\widetilde {\mathsf {L}} _ {\Phi_ {\mathrm {e x p}}}} ^ {*} \left(\mathcal {H} _ {\mathrm {N N}}\right) - \mathbb {E} _ {(X, X ^ {\prime})} \left[ \mathcal {C} _ {\widetilde {\mathsf {L}} _ {\Phi_ {\mathrm {e x p}}}, \mathcal {H} _ {\mathrm {N N}}} ^ {*} (x, x ^ {\prime}) \right] \\ = \mathcal {R} _ {\widetilde {L} _ {\Phi_ {\exp}}} ^ {*} (\mathcal {H} _ {\mathrm {N N}}) - \mathbb {E} _ {(X, X ^ {\prime})} \left[ 2 \sqrt {\eta (x) \eta \left(x ^ {\prime}\right) \left(1 - \eta (x)\right) \left(1 - \eta \left(x ^ {\prime}\right)\right)} \mathbb {1} _ {\frac {1}{2} \left| \log \frac {\eta (x) \left(1 - \eta \left(x ^ {\prime}\right)\right)}{\eta \left(x ^ {\prime}\right) \left(1 - \eta (x)\right)} \right| \leq \Lambda W \| x - x ^ {\prime} \| _ {p}} \right] \tag {81} \\ \left. \right. - \mathbb {E} _ {(X, X ^ {\prime})} \left[\left[ \max \{\eta (x), \eta (x ^ {\prime}) \} - \eta (x) \eta (x ^ {\prime}) \right] e ^ {- \Lambda W \| x - x ^ {\prime} \| _ {p}} \mathbb {1} _ {\frac {1}{2} \left| \log \frac {\eta (x) (1 - \eta (x ^ {\prime}))}{\eta (x ^ {\prime}) (1 - \eta (x))} \right| > \Lambda W \| x - x ^ {\prime} \| _ {p}} \right] \\ - \mathbb {E} _ {(X, X ^ {\prime})} \left[ \left[ \min \{\eta (x), \eta (x ^ {\prime}) \} - \eta (x) \eta (x ^ {\prime}) \right] e ^ {\Lambda W \| x - x ^ {\prime} \| _ {p}} \mathbf {1} _ {\frac {1}{2} \left| \log \frac {\eta (x) (1 - \eta (x ^ {\prime}))}{\eta (x ^ {\prime}) (1 - \eta (x))} \right| > \Lambda W \| x - x ^ {\prime} \| _ {p}} \right]. 
\\ \end{array} +$$ + +Therefore, $\forall h\in \widetilde{\mathcal{H}}_{\mathrm{NN}}(x,x^{\prime})\cup \mathring{\mathcal{H}}_{\mathrm{NN}}(x,x^{\prime})$ + +$$ +\begin{array}{l} \Delta \mathcal {C} _ {\widetilde {\mathcal {L}} _ {\Phi_ {\exp}}, \mathcal {H} _ {\mathrm {N N}}} (h, x, x ^ {\prime}) \\ \geq \inf _ {h \in \widetilde {\mathcal {H}} _ {\mathrm {N N}} (x, x ^ {\prime}) \cup \mathring {\mathcal {H}} _ {\mathrm {N N}} (x, x ^ {\prime})} \mathcal {C} _ {\widetilde {\mathsf {L}} _ {\Phi_ {\exp}}} (h, x, x ^ {\prime}) - \mathcal {C} _ {\widetilde {\mathsf {L}} _ {\Phi_ {\exp}}, \mathcal {H} _ {\mathrm {N N}}} ^ {*} (x, x ^ {\prime}) \\ = \eta (x) (1 - \eta \left(x ^ {\prime}\right)) e ^ {- 0} + \eta \left(x ^ {\prime}\right) (1 - \eta (x)) e ^ {0} - \mathcal {C} _ {\mathsf {L} _ {\Phi_ {\exp}}, \mathcal {H} _ {\mathrm {N N}}} ^ {*} \left(x, x ^ {\prime}\right) \\ = \left\{ \begin{array}{l} \eta (x) (1 - \eta (x ^ {\prime})) + \eta (x ^ {\prime}) (1 - \eta (x)) - 2 \sqrt {\eta (x) \eta (x ^ {\prime}) (1 - \eta (x)) (1 - \eta (x ^ {\prime}))} \\ \text {i f} \frac {1}{2} \Big | \log \frac {\eta (x) (1 - \eta (x ^ {\prime}))}{\eta (x ^ {\prime}) (1 - \eta (x))} \Big | \leq \Lambda W \| x - x ^ {\prime} \| _ {p} \\ \big [ \max \{\eta (x), \eta (x ^ {\prime}) \} - \eta (x) \eta (x ^ {\prime}) \big ] \Big (1 - e ^ {- \Lambda W \| x - x ^ {\prime} \| _ {p}} \Big) + \big [ \min \{\eta (x), \eta (x ^ {\prime}) \} - \eta (x) \eta (x ^ {\prime}) \big ] \Big (1 - e ^ {\Lambda W \| x - x ^ {\prime} \| _ {p}} \Big) \\ \text {i f} \frac {1}{2} \Big | \log \frac {\eta (x) (1 - \eta (x ^ {\prime}))}{\eta (x ^ {\prime}) (1 - \eta (x))} \Big | > \Lambda W \| x - x ^ {\prime} \| _ {p} \end{array} \right. 
\\ \geq \left\{ \begin{array}{l} \eta (x) (1 - \eta (x ^ {\prime})) + \eta (x ^ {\prime}) (1 - \eta (x)) - 2 \sqrt {\eta (x) \eta (x ^ {\prime}) (1 - \eta (x)) (1 - \eta (x ^ {\prime}))} \\ \text {i f} \frac {1}{2} \Big | \log \frac {\eta (x) (1 - \eta (x ^ {\prime}))}{\eta (x ^ {\prime}) (1 - \eta (x))} \Big | \leq \Lambda W \gamma \\ \big [ \max \{\eta (x), \eta (x ^ {\prime}) \} - \eta (x) \eta (x ^ {\prime}) \big ] \big (1 - e ^ {- \Lambda W \gamma} \big) + \big [ \min \{\eta (x), \eta (x ^ {\prime}) \} - \eta (x) \eta (x ^ {\prime}) \big ] \big (1 - e ^ {\Lambda W \gamma} \big) \\ \text {i f} \frac {1}{2} \Big | \log \frac {\eta (x) (1 - \eta (x ^ {\prime}))}{\eta (x ^ {\prime}) (1 - \eta (x))} \Big | > \Lambda W \gamma \end{array} \right. \\ = \left\{ \begin{array}{l l} \left(\frac {\eta (x) (1 - \eta (x ^ {\prime})) - \eta (x ^ {\prime}) (1 - \eta (x))}{\sqrt {\eta (x) (1 - \eta (x ^ {\prime}))} + \sqrt {\eta (x ^ {\prime}) (1 - \eta (x))}}\right) ^ {2} & \text {i f} \frac {1}{2} \Big | \log \frac {\eta (x) (1 - \eta (x ^ {\prime}))}{\eta (x ^ {\prime}) (1 - \eta (x))} \Big | \leq \Lambda W \gamma \\ \frac {\eta (x) (1 - \eta (x ^ {\prime})) + \eta (x ^ {\prime}) (1 - \eta (x))}{2} (2 - e ^ {- \Lambda W \gamma} - e ^ {\Lambda W \gamma}) + \frac {1}{2} | \eta (x) - \eta (x ^ {\prime}) | \big (e ^ {\Lambda W \gamma} - e ^ {- \Lambda W \gamma} \big) & \text {i f} \frac {1}{2} \Big | \log \frac {\eta (x) (1 - \eta (x ^ {\prime}))}{\eta (x ^ {\prime}) (1 - \eta (x))} \Big | > \Lambda W \gamma \end{array} \right. 
\\
\geq \min\left\{\left(\eta(x)-\eta(x^{\prime})\right)^{2}, \left(\frac{e^{2\Lambda W\gamma}+1}{e^{2\Lambda W\gamma}-1}\right)|\eta(x)-\eta(x^{\prime})|\right\}
\end{array}
$$

which implies that for any $h \in \mathcal{H}_{\mathrm{NN}}$ and $(x, x')$ such that $\| x - x' \|_p > \gamma$ ,

$$
\Delta\mathcal{C}_{\widetilde{\mathsf{L}}_{\Phi_{\mathrm{exp}}},\mathcal{H}_{\mathrm{NN}}}(h,x,x^{\prime}) \geq \Psi_{\mathrm{exp}}\Big(\Delta\mathcal{C}_{\widetilde{\mathsf{L}}_{0-1}^{\mathrm{abs}},\mathcal{H}}(h,x,x^{\prime})\Big),
$$

where $\Psi_{\mathrm{exp}}$ is the increasing function defined by

$$
\forall t \in [0,1], \quad \Psi_{\mathrm{exp}}(t) = \min\left\{t^{2}, \left(\frac{e^{2\Lambda W\gamma}+1}{e^{2\Lambda W\gamma}-1}\right)t\right\}.
$$

Thus, by Theorem C.1 or Theorem C.2, setting $\epsilon = 0$ yields the $\mathcal{H}_{\mathrm{NN}}$ -consistency bound for $\widetilde{\mathsf{L}}_{\Phi_{\mathrm{exp}}}$ , valid for all $h \in \mathcal{H}_{\mathrm{NN}}$ :

$$
\mathcal{R}_{\widetilde{\mathsf{L}}_{0-1}^{\mathrm{abs}}}(h) - \mathcal{R}_{\widetilde{\mathsf{L}}_{0-1}^{\mathrm{abs}}}^{*}(\mathcal{H}_{\mathrm{NN}}) \leq \Gamma_{\mathrm{exp}}\left(\mathcal{R}_{\widetilde{\mathsf{L}}_{\Phi_{\mathrm{exp}}}}(h) - \mathcal{R}_{\widetilde{\mathsf{L}}_{\Phi_{\mathrm{exp}}}}^{*}(\mathcal{H}_{\mathrm{NN}}) + \mathcal{M}_{\widetilde{\mathsf{L}}_{\Phi_{\mathrm{exp}}}}(\mathcal{H}_{\mathrm{NN}})\right) - \mathcal{M}_{\widetilde{\mathsf{L}}_{0-1}^{\mathrm{abs}}}(\mathcal{H}_{\mathrm{NN}}), \tag{82}
$$

where $\Gamma_{\mathrm{exp}}(t) = \max\left\{\sqrt{t}, \left(\frac{e^{2\Lambda W\gamma}-1}{e^{2\Lambda W\gamma}+1}\right)t\right\}$ .

# L.2.4.
DERIVATION FOR $\widetilde{\mathsf{L}}_{\Phi_{\log}}$ + +For the logistic loss function $\Phi_{\log}(u) \coloneqq \log_2(1 + e^{-u})$ , for all $h \in \mathcal{H}_{\mathrm{NN}}$ and $(x, x')$ such that $\|x - x'\|_p > \gamma$ , + +$$ +\begin{array}{l} \mathcal {C} _ {\widetilde {L} _ {\Phi_ {\log}}} (h, x, x ^ {\prime}) \\ = \eta (x) \left(1 - \eta \left(x ^ {\prime}\right)\right) \Phi_ {\log} \left(h (x) - h \left(x ^ {\prime}\right)\right) + \eta \left(x ^ {\prime}\right) \left(1 - \eta (x)\right) \Phi_ {\log} \left(h \left(x ^ {\prime}\right) - h (x)\right) \\ = \eta (x) (1 - \eta (x ^ {\prime})) \log_ {2} \left(1 + e ^ {- h (x) + h (x ^ {\prime})}\right) + \eta (x ^ {\prime}) (1 - \eta (x)) \log_ {2} \left(1 + e ^ {h (x) - h (x ^ {\prime})}\right). \\ \end{array} +$$ + +Then, + +$$ +\begin{array}{l} \mathcal {C} _ {\widetilde {\mathrm {L}} _ {\Phi_ {\log}}, \mathcal {H} _ {\mathrm {N N}}} ^ {*} (x, x ^ {\prime}) \\ = \inf _ {h \in \mathcal {H} _ {\mathrm {N N}}} \mathcal {C} _ {\widetilde {\mathcal {L}} _ {\Phi_ {\log}}} \left(h, x, x ^ {\prime}\right) \\ = \left\{ \begin{array}{l} - \eta (x) (1 - \eta (x ^ {\prime})) \log_ {2} (\eta (x) (1 - \eta (x ^ {\prime}))) - \eta (x ^ {\prime}) (1 - \eta (x)) \log_ {2} (\eta (x ^ {\prime}) (1 - \eta (x))) \\ \text {i f} \left| \log \frac {\eta (x) (1 - \eta (x ^ {\prime}))}{\eta (x ^ {\prime}) (1 - \eta (x))} \right| \leq \Lambda W \| x - x ^ {\prime} \| _ {p} \\ \left[ \max \{\eta (x), \eta (x ^ {\prime}) \} - \eta (x) \eta (x ^ {\prime}) \right] \log_ {2} \left(1 + e ^ {- \Lambda W \| x - x ^ {\prime} \| _ {p}}\right) + \left[ \min \{\eta (x), \eta (x ^ {\prime}) \} - \eta (x) \eta (x ^ {\prime}) \right] \log_ {2} \left(1 + e ^ {\Lambda W \| x - x ^ {\prime} \| _ {p}}\right) \\ \text {i f} \left| \log \frac {\eta (x) (1 - \eta (x ^ {\prime}))}{\eta (x ^ {\prime}) (1 - \eta (x))} \right| > \Lambda W \| x - x ^ {\prime} \| _ {p} \end{array} \right. 
\\ \end{array} +$$ + +The $(\widetilde{\mathsf{L}}_{\Phi_{\log}},\mathcal{H}_{\mathrm{NN}})$ -minimizability gap is + +$$ +\begin{array}{l} \mathcal {M} _ {\widetilde {\mathcal {L}} _ {\Phi_ {\mathrm {l o g}}}} (\mathcal {H} _ {\mathrm {N N}}) \\ = \mathcal {R} _ {\widetilde {\mathsf {L}} _ {\Phi_ {\log}}} ^ {*} \left(\mathcal {H} _ {\mathrm {N N}}\right) - \mathbb {E} _ {(X, X ^ {\prime})} \left[ \mathcal {C} _ {\widetilde {\mathsf {L}} _ {\Phi_ {\log}}, \mathcal {H} _ {\mathrm {N N}}} ^ {*} (x, x ^ {\prime}) \right] \\ = \mathcal {R} _ {\widetilde {L} _ {\Phi_ {\log}}} ^ {*} \left(\mathcal {H} _ {\mathrm {N N}}\right) - \mathbb {E} _ {\left(X, X ^ {\prime}\right)} \left[ - \eta (x) \left(1 - \eta \left(x ^ {\prime}\right)\right) \log_ {2} \left(\eta (x) \left(1 - \eta \left(x ^ {\prime}\right)\right)\right) \right. (83) \\ \left. - \eta \left(x ^ {\prime}\right) \left(1 - \eta (x)\right) \log_ {2} \left(\eta \left(x ^ {\prime}\right) \left(1 - \eta (x)\right)\right) \mathbb {1} _ {\left| \log \frac {\eta (x) \left(1 - \eta \left(x ^ {\prime}\right)\right)}{\eta \left(x ^ {\prime}\right) \left(1 - \eta (x)\right)} \right| \leq \Lambda W \| x - x ^ {\prime} \| _ {p}} \right] (83) \\ - \mathbb {E} _ {(X, X ^ {\prime})} \bigg [ \big [ \max \{\eta (x), \eta (x ^ {\prime}) \} - \eta (x) \eta (x ^ {\prime}) \big ] \log_ {2} \Big (1 + e ^ {- \Lambda W \| x - x ^ {\prime} \| _ {p}} \Big) \mathbb {1} _ {\Big | \log \frac {\eta (x) (1 - \eta (x ^ {\prime}))}{\eta (x ^ {\prime}) (1 - \eta (x))} \Big | > \Lambda W \| x - x ^ {\prime} \| _ {p} \Big ] \\ \left. \right. - \mathbb {E} _ {(X, X ^ {\prime})} \left[\left[ \min \{\eta (x), \eta (x ^ {\prime}) \} - \eta (x) \eta (x ^ {\prime}) \right] \log_ {2} \left(1 + e ^ {\Lambda W \| x - x ^ {\prime} \| _ {p}}\right) \mathbb {1} _ {\left| \log \frac {\eta (x) (1 - \eta (x ^ {\prime}))}{\eta (x ^ {\prime}) (1 - \eta (x))} \right| > \Lambda W \| x - x ^ {\prime} \| _ {p}} \right]. 
\\ \end{array} +$$ + +Therefore, $\forall h\in \widetilde{\mathcal{H}}_{\mathrm{NN}}(x,x^{\prime})\cup \mathring{\mathcal{H}}_{\mathrm{NN}}(x,x^{\prime})$ + +$$ +\begin{array}{l} \Delta \mathcal {C} _ {\widetilde {\mathsf {L}} _ {\Phi_ {\log}}, \mathcal {H} _ {\mathrm {N N}}} (h, x, x ^ {\prime}) \\ \geq \inf _ {h \in \widetilde {\mathcal {H}} _ {\mathrm {N N}} (x, x ^ {\prime}) \cup \mathring {\mathcal {H}} _ {\mathrm {N N}} (x, x ^ {\prime})} \mathcal {C} _ {\widetilde {\mathsf {L}} _ {\Phi_ {\log}}} (h, x, x ^ {\prime}) - \mathcal {C} _ {\widetilde {\mathsf {L}} _ {\Phi_ {\log}}, \mathcal {H} _ {\mathrm {N N}}} ^ {*} (x, x ^ {\prime}) \\ = \eta (x) \big (1 - \eta (x ^ {\prime}) \big) \log_ {2} \big (1 + e ^ {- 0} \big) + \eta (x ^ {\prime}) \big (1 - \eta (x) \big) \log_ {2} \big (1 + e ^ {0} \big) - \mathcal {C} _ {\widetilde {\mathrm {L}} _ {\Phi_ {\log}}, \mathcal {H} _ {\mathrm {N N}}} ^ {*} (x, x ^ {\prime}) \\ \int_ {\text {i f}} \eta (x) (1 - \eta \left(x ^ {\prime}\right)) \left[ 1 - \log_ {2} (\eta (x) (1 - \eta \left(x ^ {\prime}\right))) \right] + \eta \left(x ^ {\prime}\right) (1 - \eta (x)) \left[ 1 - \log_ {2} (\eta \left(x ^ {\prime}\right) (1 - \eta (x))) \right] \\ \overset {\leq}{\left[ \begin{array}{l}\max \{\eta (x),\eta (x^{\prime})\} -\eta (x)\eta (x^{\prime})\big]\big(1 - \log_{2}\big(1 + e^{-\Lambda W\gamma}\big)\big) + \big[\min \{\eta (x),\eta (x^{\prime})\} -\eta (x)\eta (x^{\prime})\big]\big(1 - \log_{2}\big(1 + e^{\Lambda W\gamma}\big)\big)\\ \text{if}\Big|\log \frac{\eta(x)(1 - \eta(x^{\prime}))}{\eta(x^{\prime})(1 - \eta(x))}\Big| > \Lambda W\gamma \end{array} \right]} \\ \geq \min \left\{\left(\eta (x) - \eta \left(x ^ {\prime}\right)\right) ^ {2}, \left(\frac {e ^ {\Lambda W \gamma} + 1}{e ^ {\Lambda W \gamma} - 1}\right) | \eta (x) - \eta \left(x ^ {\prime}\right) | \right\} \\ \end{array} +$$ + +which implies that for any $h \in \mathcal{H}_{\mathrm{NN}}$ and $(x, x')$ such that $\| x - x' \|_p > \gamma$ , + +$$ +\Delta 
\mathcal{C}_{\widetilde{\mathsf{L}}_{\Phi_{\log}},\mathcal{H}_{\mathrm{NN}}}(h,x,x^{\prime}) \geq \Psi_{\log}\Big(\Delta \mathcal{C}_{\widetilde{\mathsf{L}}_{0-1}^{\mathrm{abs}},\mathcal{H}}(h,x,x^{\prime})\Big), +$$ + +where $\Psi_{\log}$ is the increasing function on $[0,1]$ defined by + +$$ +\forall t \in [0,1], \quad \Psi_{\log}(t) = \min\left\{t^{2}, \left(\frac{e^{\Lambda W\gamma}+1}{e^{\Lambda W\gamma}-1}\right)t\right\}. +$$ + +Thus, by Theorem C.1 or Theorem C.2, setting $\epsilon = 0$ yields the $\mathcal{H}_{\mathrm{NN}}$ -consistency bound for $\widetilde{\mathsf{L}}_{\Phi_{\log}}$ , valid for all $h \in \mathcal{H}_{\mathrm{NN}}$ : + +$$ +\mathcal{R}_{\widetilde{\mathsf{L}}_{0-1}^{\mathrm{abs}}}(h) - \mathcal{R}^{*}_{\widetilde{\mathsf{L}}_{0-1}^{\mathrm{abs}}}(\mathcal{H}_{\mathrm{NN}}) \leq \Gamma_{\log}\left(\mathcal{R}_{\widetilde{\mathsf{L}}_{\Phi_{\log}}}(h) - \mathcal{R}^{*}_{\widetilde{\mathsf{L}}_{\Phi_{\log}}}(\mathcal{H}_{\mathrm{NN}}) + \mathcal{M}_{\widetilde{\mathsf{L}}_{\Phi_{\log}}}(\mathcal{H}_{\mathrm{NN}})\right) - \mathcal{M}_{\widetilde{\mathsf{L}}_{0-1}^{\mathrm{abs}}}(\mathcal{H}_{\mathrm{NN}}), \tag {84} +$$ + +where $\Gamma_{\log}(t) = \max\left\{\sqrt{t},\left(\frac{e^{\Lambda W\gamma}-1}{e^{\Lambda W\gamma}+1}\right)t\right\}.$ + +# L.2.5.
DERIVATION FOR $\widetilde{\mathsf{L}}_{\Phi_{\mathrm{sq}}}$ + +For the squared hinge loss function $\Phi_{\mathrm{sq}}(u)\coloneqq (1 - u)^2\mathbb{1}_{u\leq 1}$ , for all $h\in \mathcal{H}_{\mathrm{NN}}$ and $(x,x^{\prime})$ such that $\| x - x^{\prime}\| _p > \gamma$ , + +$$ +\begin{array}{l} \mathcal{C}_{\widetilde{\mathsf{L}}_{\Phi_{\mathrm{sq}}}}(h,x,x^{\prime}) \\ = \eta(x)\big(1-\eta(x^{\prime})\big)\Phi_{\mathrm{sq}}\big(h(x)-h(x^{\prime})\big) + \eta(x^{\prime})\big(1-\eta(x)\big)\Phi_{\mathrm{sq}}\big(h(x^{\prime})-h(x)\big) \\ = \eta(x)\big(1-\eta(x^{\prime})\big)\big(1-h(x)+h(x^{\prime})\big)^{2}\mathbb{1}_{h(x)-h(x^{\prime})\leq 1} + \eta(x^{\prime})\big(1-\eta(x)\big)\big(1+h(x)-h(x^{\prime})\big)^{2}\mathbb{1}_{h(x)-h(x^{\prime})\geq -1}. \\ \end{array} +$$ + +Then, + +$$ +\begin{array}{l} \mathcal{C}^{*}_{\widetilde{\mathsf{L}}_{\Phi_{\mathrm{sq}}},\mathcal{H}_{\mathrm{NN}}}(x,x^{\prime}) \\ = \inf_{h\in \mathcal{H}_{\mathrm{NN}}} \mathcal{C}_{\widetilde{\mathsf{L}}_{\Phi_{\mathrm{sq}}}}(h,x,x^{\prime}) \\ = \left\{ \begin{array}{l} 4\, \frac{\eta(x)\eta(x^{\prime})(1-\eta(x))(1-\eta(x^{\prime}))}{\eta(x)(1-\eta(x^{\prime}))+\eta(x^{\prime})(1-\eta(x))} \\ \quad \text{if } \frac{|\eta(x)-\eta(x^{\prime})|}{\eta(x)(1-\eta(x^{\prime}))+\eta(x^{\prime})(1-\eta(x))} \leq \Lambda W\|x-x^{\prime}\|_{p} \\ \big[\max\{\eta(x),\eta(x^{\prime})\}-\eta(x)\eta(x^{\prime})\big]\big(1-\Lambda W\|x-x^{\prime}\|_{p}\big)^{2} + \big[\min\{\eta(x),\eta(x^{\prime})\}-\eta(x)\eta(x^{\prime})\big]\big(1+\Lambda W\|x-x^{\prime}\|_{p}\big)^{2} \\ \quad \text{if } \frac{|\eta(x)-\eta(x^{\prime})|}{\eta(x)(1-\eta(x^{\prime}))+\eta(x^{\prime})(1-\eta(x))} >
\Lambda W \| x - x ^ {\prime} \| _ {p}. \end{array} \right. \\ \end{array} +$$ + +The $(\widetilde{\mathsf{L}}_{\Phi_{\mathrm{sq}}},\mathcal{H}_{\mathrm{NN}})$ -minimizability gap is + +$$ +\begin{array}{l} \mathcal{M}_{\widetilde{\mathsf{L}}_{\Phi_{\mathrm{sq}}}}(\mathcal{H}_{\mathrm{NN}}) \\ = \mathcal{R}^{*}_{\widetilde{\mathsf{L}}_{\Phi_{\mathrm{sq}}}}(\mathcal{H}_{\mathrm{NN}}) - \mathbb{E}_{(X,X^{\prime})}\left[\mathcal{C}^{*}_{\widetilde{\mathsf{L}}_{\Phi_{\mathrm{sq}}},\mathcal{H}_{\mathrm{NN}}}(x,x^{\prime})\right] \\ = \mathcal{R}^{*}_{\widetilde{\mathsf{L}}_{\Phi_{\mathrm{sq}}}}(\mathcal{H}_{\mathrm{NN}}) - \mathbb{E}_{(X,X^{\prime})}\left[4\,\frac{\eta(x)\eta(x^{\prime})(1-\eta(x))(1-\eta(x^{\prime}))}{\eta(x)(1-\eta(x^{\prime}))+\eta(x^{\prime})(1-\eta(x))}\,\mathbb{1}_{\frac{|\eta(x)-\eta(x^{\prime})|}{\eta(x)(1-\eta(x^{\prime}))+\eta(x^{\prime})(1-\eta(x))}\leq \Lambda W\|x-x^{\prime}\|_{p}}\right] \tag {85} \\ \quad - \mathbb{E}_{(X,X^{\prime})}\left[\big[\max\{\eta(x),\eta(x^{\prime})\}-\eta(x)\eta(x^{\prime})\big]\big(1-\Lambda W\|x-x^{\prime}\|_{p}\big)^{2}\,\mathbb{1}_{\frac{|\eta(x)-\eta(x^{\prime})|}{\eta(x)(1-\eta(x^{\prime}))+\eta(x^{\prime})(1-\eta(x))}> \Lambda W\|x-x^{\prime}\|_{p}}\right] \\ \quad - \mathbb{E}_{(X,X^{\prime})}\left[\big[\min\{\eta(x),\eta(x^{\prime})\}-\eta(x)\eta(x^{\prime})\big]\big(1+\Lambda W\|x-x^{\prime}\|_{p}\big)^{2}\,\mathbb{1}_{\frac{|\eta(x)-\eta(x^{\prime})|}{\eta(x)(1-\eta(x^{\prime}))+\eta(x^{\prime})(1-\eta(x))}> \Lambda W\|x-x^{\prime}\|_{p}}\right]. \\ \end{array} +$$ + +Therefore, $\forall h\in \widetilde{\mathcal{H}}_{\mathrm{NN}}(x,x^{\prime})\cup \mathring{\mathcal{H}}_{\mathrm{NN}}(x,x^{\prime}),$ + +$$ +\begin{array}{l} \Delta \mathcal{C}_{\widetilde{\mathsf{L}}_{\Phi_{\mathrm{sq}}},\mathcal{H}_{\mathrm{NN}}}(h,x,x^{\prime}) \\ \geq \inf_{h\in \widetilde{\mathcal{H}}_{\mathrm{NN}}(x,x^{\prime})\cup \mathring{\mathcal{H}}_{\mathrm{NN}}(x,x^{\prime})} \mathcal{C}_{\widetilde{\mathsf{L}}_{\Phi_{\mathrm{sq}}}}(h,x,x^{\prime}) - \mathcal{C}^{*}_{\widetilde{\mathsf{L}}_{\Phi_{\mathrm{sq}}},\mathcal{H}_{\mathrm{NN}}}(x,x^{\prime}) \\ = \eta(x)\big(1-\eta(x^{\prime})\big) + \eta(x^{\prime})\big(1-\eta(x)\big) - \mathcal{C}^{*}_{\widetilde{\mathsf{L}}_{\Phi_{\mathrm{sq}}},\mathcal{H}_{\mathrm{NN}}}(x,x^{\prime}) \\ \geq \left\{ \begin{array}{l} \eta(x)\big(1-\eta(x^{\prime})\big) + \eta(x^{\prime})\big(1-\eta(x)\big) - 4\,\frac{\eta(x)\eta(x^{\prime})(1-\eta(x))(1-\eta(x^{\prime}))}{\eta(x)(1-\eta(x^{\prime}))+\eta(x^{\prime})(1-\eta(x))} \\ \quad \text{if } \frac{|\eta(x)-\eta(x^{\prime})|}{\eta(x)(1-\eta(x^{\prime}))+\eta(x^{\prime})(1-\eta(x))} \leq \Lambda W\gamma \\ \big[\max\{\eta(x),\eta(x^{\prime})\}-\eta(x)\eta(x^{\prime})\big]\big[1-(1-\Lambda W\gamma)^{2}\big] + \big[\min\{\eta(x),\eta(x^{\prime})\}-\eta(x)\eta(x^{\prime})\big]\big[1-(1+\Lambda W\gamma)^{2}\big] \\ \quad \text{if } \frac{|\eta(x)-\eta(x^{\prime})|}{\eta(x)(1-\eta(x^{\prime}))+\eta(x^{\prime})(1-\eta(x))} > \Lambda W\gamma \end{array} \right. \\ \geq \min\left\{\big(\eta(x)-\eta(x^{\prime})\big)^{2}, 2\Lambda W\gamma\,\big|\eta(x)-\eta(x^{\prime})\big| - (\Lambda W\gamma)^{2}\right\} \\ \end{array} +$$ + +which implies that for any $h \in \mathcal{H}_{\mathrm{NN}}$ and $(x, x^{\prime})$ such that $\| x - x^{\prime} \|_p > \gamma$ , + +$$ +\Delta \mathcal{C}_{\widetilde{\mathsf{L}}_{\Phi_{\mathrm{sq}}},\mathcal{H}_{\mathrm{NN}}}(h,x,x^{\prime}) \geq \Psi_{\mathrm{sq}}\Big(\Delta \mathcal{C}_{\widetilde{\mathsf{L}}_{0-1}^{\mathrm{abs}},\mathcal{H}}(h,x,x^{\prime})\Big), +$$ + +where $\Psi_{\mathrm{sq}}$ is the increasing function defined, in view of the bound above, by + +$$ +\forall t \in [0,1], \quad \Psi_{\mathrm{sq}}(t) = \min\left\{t^{2}, 2\Lambda W\gamma\, t - (\Lambda W\gamma)^{2}\right\}. +$$ + +Thus, by Theorem C.1 or Theorem C.2, setting $\epsilon = 0$ yields the $\mathcal{H}_{\mathrm{NN}}$ -consistency bound for $\widetilde{\mathsf{L}}_{\Phi_{\mathrm{sq}}}$ , valid for all $h \in \mathcal{H}_{\mathrm{NN}}$ : + +$$ +\mathcal{R}_{\widetilde{\mathsf{L}}_{0-1}^{\mathrm{abs}}}(h) - \mathcal{R}^{*}_{\widetilde{\mathsf{L}}_{0-1}^{\mathrm{abs}}}(\mathcal{H}_{\mathrm{NN}}) \leq \Gamma_{\mathrm{sq}}\left(\mathcal{R}_{\widetilde{\mathsf{L}}_{\Phi_{\mathrm{sq}}}}(h) - \mathcal{R}^{*}_{\widetilde{\mathsf{L}}_{\Phi_{\mathrm{sq}}}}(\mathcal{H}_{\mathrm{NN}}) + \mathcal{M}_{\widetilde{\mathsf{L}}_{\Phi_{\mathrm{sq}}}}(\mathcal{H}_{\mathrm{NN}})\right) - \mathcal{M}_{\widetilde{\mathsf{L}}_{0-1}^{\mathrm{abs}}}(\mathcal{H}_{\mathrm{NN}}), \tag {86} +$$ + +where $\Gamma_{\mathrm{sq}}(t) = \max\left\{\sqrt{t}, \frac{t}{2\Lambda W\gamma} + \frac{\Lambda W\gamma}{2}\right\}$ . + +# L.2.6. DERIVATION FOR $\widetilde{\mathsf{L}}_{\Phi_{\mathrm{sig}}}$
+ +For the sigmoid loss function $\Phi_{\mathrm{sig}}(u)\coloneqq 1 - \tanh (ku)$ , $k > 0$ , for all $h\in \mathcal{H}_{\mathrm{NN}}$ and $(x,x^{\prime})$ such that $\| x - x^{\prime}\| _p > \gamma$ , + +$$ +\begin{array}{l} \mathcal{C}_{\widetilde{\mathsf{L}}_{\Phi_{\mathrm{sig}}}}(h,x,x^{\prime}) \\ = \eta(x)\big(1-\eta(x^{\prime})\big)\Phi_{\mathrm{sig}}\big(h(x)-h(x^{\prime})\big) + \eta(x^{\prime})\big(1-\eta(x)\big)\Phi_{\mathrm{sig}}\big(h(x^{\prime})-h(x)\big) \\ = \eta(x)\big(1-\eta(x^{\prime})\big)\big(1-\tanh\big(k[h(x)-h(x^{\prime})]\big)\big) + \eta(x^{\prime})\big(1-\eta(x)\big)\big(1+\tanh\big(k[h(x)-h(x^{\prime})]\big)\big). \\ \end{array} +$$ + +Then, + +$$ +\mathcal{C}^{*}_{\widetilde{\mathsf{L}}_{\Phi_{\mathrm{sig}}},\mathcal{H}_{\mathrm{NN}}}(x,x^{\prime}) = \inf_{h\in \mathcal{H}_{\mathrm{NN}}}\mathcal{C}_{\widetilde{\mathsf{L}}_{\Phi_{\mathrm{sig}}}}(h,x,x^{\prime}) = \eta(x)\big(1-\eta(x^{\prime})\big) + \eta(x^{\prime})\big(1-\eta(x)\big) - \big|\eta(x)-\eta(x^{\prime})\big|\tanh\big(k\Lambda W\|x-x^{\prime}\|_{p}\big). +$$ + +The $(\widetilde{\mathsf{L}}_{\Phi_{\mathrm{sig}}},\mathcal{H}_{\mathrm{NN}})$ -minimizability gap is + +$$ +\mathcal{M}_{\widetilde{\mathsf{L}}_{\Phi_{\mathrm{sig}}}}(\mathcal{H}_{\mathrm{NN}}) = \mathcal{R}^{*}_{\widetilde{\mathsf{L}}_{\Phi_{\mathrm{sig}}}}(\mathcal{H}_{\mathrm{NN}}) - \mathbb{E}_{(X,X^{\prime})}\left[\eta(x)\big(1-\eta(x^{\prime})\big) + \eta(x^{\prime})\big(1-\eta(x)\big) - \big|\eta(x)-\eta(x^{\prime})\big|\tanh\big(k\Lambda W\|x-x^{\prime}\|_{p}\big)\right].
\tag {87} +$$ + +Therefore, $\forall h\in \widetilde{\mathcal{H}}_{\mathrm{NN}}(x,x^{\prime})\cup \mathring{\mathcal{H}}_{\mathrm{NN}}(x,x^{\prime}),$ + +$$ +\begin{array}{l} \Delta \mathcal{C}_{\widetilde{\mathsf{L}}_{\Phi_{\mathrm{sig}}},\mathcal{H}_{\mathrm{NN}}}(h,x,x^{\prime}) \\ \geq \inf_{h\in \widetilde{\mathcal{H}}_{\mathrm{NN}}(x,x^{\prime})\cup \mathring{\mathcal{H}}_{\mathrm{NN}}(x,x^{\prime})} \mathcal{C}_{\widetilde{\mathsf{L}}_{\Phi_{\mathrm{sig}}}}(h,x,x^{\prime}) - \mathcal{C}^{*}_{\widetilde{\mathsf{L}}_{\Phi_{\mathrm{sig}}},\mathcal{H}_{\mathrm{NN}}}(x,x^{\prime}) \\ = \eta(x)\big(1-\eta(x^{\prime})\big) + \eta(x^{\prime})\big(1-\eta(x)\big) - \mathcal{C}^{*}_{\widetilde{\mathsf{L}}_{\Phi_{\mathrm{sig}}},\mathcal{H}_{\mathrm{NN}}}(x,x^{\prime}) \\ = \big|\eta(x)-\eta(x^{\prime})\big|\tanh\big(k\Lambda W\|x-x^{\prime}\|_{p}\big) \\ \geq \big|\eta(x)-\eta(x^{\prime})\big|\tanh(k\Lambda W\gamma) \\ \end{array} +$$ + +which implies that for any $h \in \mathcal{H}_{\mathrm{NN}}$ and $(x, x^{\prime})$ such that $\| x - x^{\prime} \|_p > \gamma$ , + +$$ +\Delta \mathcal{C}_{\widetilde{\mathsf{L}}_{\Phi_{\mathrm{sig}}},\mathcal{H}_{\mathrm{NN}}}(h,x,x^{\prime}) \geq \tanh(k\Lambda W\gamma)\, \Delta \mathcal{C}_{\widetilde{\mathsf{L}}_{0-1}^{\mathrm{abs}},\mathcal{H}}(h,x,x^{\prime}).
+$$ + +Thus, by Theorem C.1 or Theorem C.2, setting $\epsilon = 0$ yields the $\mathcal{H}_{\mathrm{NN}}$ -consistency bound for $\widetilde{\mathsf{L}}_{\Phi_{\mathrm{sig}}}$ , valid for all $h \in \mathcal{H}_{\mathrm{NN}}$ : + +$$ +\mathcal {R} _ {\widetilde {L} _ {0 - 1} ^ {\mathrm {a b s}}} (h) - \mathcal {R} _ {\widetilde {L} _ {0 - 1} ^ {\mathrm {a b s}}} ^ {*} (\mathcal {H} _ {\mathrm {N N}}) \leq \frac {\mathcal {R} _ {\widetilde {L} _ {\Phi_ {\mathrm {s i g}}}} (h) - \mathcal {R} _ {\widetilde {L} _ {\Phi_ {\mathrm {s i g}}}} ^ {*} (\mathcal {H} _ {\mathrm {N N}}) + \mathcal {M} _ {\widetilde {L} _ {\Phi_ {\mathrm {s i g}}}} (\mathcal {H} _ {\mathrm {N N}})}{\tanh (k \Lambda W \gamma)} - \mathcal {M} _ {\widetilde {L} _ {0 - 1} ^ {\mathrm {a b s}}} (\mathcal {H} _ {\mathrm {N N}}). \tag {88} +$$ \ No newline at end of file diff --git a/hconsistencyboundsforpairwisemisrankinglosssurrogates/images.zip b/hconsistencyboundsforpairwisemisrankinglosssurrogates/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..bf1cc1e306654b6050251c058811a3afd334bf7f --- /dev/null +++ b/hconsistencyboundsforpairwisemisrankinglosssurrogates/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:878aaec0834b128bf579c83d2af959ecdc22a2797916a771a307ee4b5d9a5aed +size 6131528 diff --git a/hconsistencyboundsforpairwisemisrankinglosssurrogates/layout.json b/hconsistencyboundsforpairwisemisrankinglosssurrogates/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..2007c44736ebc206f59bd430507bd1bf00056271 --- /dev/null +++ b/hconsistencyboundsforpairwisemisrankinglosssurrogates/layout.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:1f9d4bbcea71dfce823f2b67bf9eac6981ba4beb9ca62d776b152a6ced37f215 +size 2894178 diff --git a/pituningtransferringmultimodalfoundationmodelswithoptimalmultitaskinterpolation/91816fdf-b77f-40c7-a756-1df57229d0b1_content_list.json 
b/pituningtransferringmultimodalfoundationmodelswithoptimalmultitaskinterpolation/91816fdf-b77f-40c7-a756-1df57229d0b1_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..45973793d62aa6ec6ad35f9392dd3c1ec21d27be --- /dev/null +++ b/pituningtransferringmultimodalfoundationmodelswithoptimalmultitaskinterpolation/91816fdf-b77f-40c7-a756-1df57229d0b1_content_list.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:85368b96955f6b71effe25a641d0201009f75bcdfe9ce43a55239cdd9e85c66c +size 116276 diff --git a/pituningtransferringmultimodalfoundationmodelswithoptimalmultitaskinterpolation/91816fdf-b77f-40c7-a756-1df57229d0b1_model.json b/pituningtransferringmultimodalfoundationmodelswithoptimalmultitaskinterpolation/91816fdf-b77f-40c7-a756-1df57229d0b1_model.json new file mode 100644 index 0000000000000000000000000000000000000000..d49cfb48d1232badddbd72024caf594cb107ec1f --- /dev/null +++ b/pituningtransferringmultimodalfoundationmodelswithoptimalmultitaskinterpolation/91816fdf-b77f-40c7-a756-1df57229d0b1_model.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:40f883ee9012208eb70d26fc11d15417426a7b015d80e8a574991c4444cbd708 +size 138897 diff --git a/pituningtransferringmultimodalfoundationmodelswithoptimalmultitaskinterpolation/91816fdf-b77f-40c7-a756-1df57229d0b1_origin.pdf b/pituningtransferringmultimodalfoundationmodelswithoptimalmultitaskinterpolation/91816fdf-b77f-40c7-a756-1df57229d0b1_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..4e98c5fdd2e2eb35c996778fc1b8af7264b82699 --- /dev/null +++ b/pituningtransferringmultimodalfoundationmodelswithoptimalmultitaskinterpolation/91816fdf-b77f-40c7-a756-1df57229d0b1_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:18795c5da12e066d77df0e3373ac915d779c5f4469afd9a3f7298956ad986462 +size 935444 diff --git 
a/pituningtransferringmultimodalfoundationmodelswithoptimalmultitaskinterpolation/full.md b/pituningtransferringmultimodalfoundationmodelswithoptimalmultitaskinterpolation/full.md new file mode 100644 index 0000000000000000000000000000000000000000..d77554e4afbddcb7cf34387fed6c2c65570b6209 --- /dev/null +++ b/pituningtransferringmultimodalfoundationmodelswithoptimalmultitaskinterpolation/full.md @@ -0,0 +1,430 @@ +# $\pi$ -Tuning: Transferring Multimodal Foundation Models with Optimal Multi-task Interpolation + +Chengyue Wu $^{12}$ Teng Wang $^{12}$ Yixiao Ge $^{2}$ Zeyu Lu $^{3}$ Ruisong Zhou $^{4}$ Ying Shan $^{2}$ Ping Luo $^{1}$ + +# Abstract + +Foundation models have achieved great advances in multi-task learning with a unified interface of unimodal and multimodal tasks. However, the potential of such multi-task learners has not been exploited during transfer learning. In this work, we present a universal parameter-efficient transfer learning method, termed Predict-Interpolate Tuning ( $\pi$ -Tuning), for vision, language, and vision-language tasks. It aggregates the parameters of lightweight task-specific experts learned from similar tasks to aid the target downstream task. The task similarities are predicted in a unified modality-independent space, yielding a scalable graph to demonstrate task relationships. $\pi$ -Tuning has several appealing benefits. First, it flexibly explores both intra- and inter-modal transferability between similar tasks to improve the accuracy and robustness of transfer learning, especially in data-scarce scenarios. Second, it offers a systematical solution for transfer learning with multi-task prediction-and-then-interpolation, compatible with diverse types of parameter-efficient experts, such as prompt and adapter. 
Third, an extensive study of task-level mutual benefits on 14 unimodal and 6 multimodal datasets shows that $\pi$ -Tuning surpasses fine-tuning and other parameter-efficient transfer learning methods both in full-shot and low-shot regimes. The task graph also enables an in-depth interpretable analysis of task transferability across modalities. The code will be available at https://github.com/TencentARC/pi-Tuning. + +$^{1}$ Department of Computer Science, The University of Hong Kong $^{2}$ ARC Lab, Tencent PCG $^{3}$ Shanghai Jiao Tong University $^{4}$ School of Mathematical Sciences, Fudan University. Correspondence to: Yixiao Ge . + +Proceedings of the $40^{th}$ International Conference on Machine Learning, Honolulu, Hawaii, USA. PMLR 202, 2023. Copyright 2023 by the author(s). + +![](images/b93c581cc6a99de17093f25b1f1b17eb6a61eb01ffaf625a8eb34dd6b8184f7c.jpg) +Figure 1. Heatmap of the predicted task similarities, composed of both unimodal and multimodal tasks. Vision-language tasks are more similar to vision tasks compared to language tasks. Best viewed in color. + +# 1. Introduction + +With the development of Transformer architectures (Dosovitskiy et al., 2021; Devlin et al., 2018; Brown et al., 2020), foundation models (Cho et al., 2021; Lu et al., 2022; Wang et al., 2022) pre-trained with large-scale data are capable of multiple tasks across modalities in a unified sequence-to-sequence manner, taking one more step toward mimicking the human brain. These foundation models are natural multitask learners with universal representation and I/O interfaces for both unimodal and multimodal tasks. But unfortunately, these properties have not been fully exploited in downstream tasks, as few studies investigated how to properly transfer these models. + +In this work, we tackle the problem of transfer learning of multimodal foundation models with unified sequence-to-sequence interfaces. 
Most of our experiments are based on OFA (Wang et al., 2022), an open-source model, without + +loss of generality. Some previous attempts (Pruksachatkun et al., 2020) empirically observed that pre-finetuning with similar tasks may be beneficial while dissimilar tasks are harmful. It is intuitive to leverage auxiliary tasks that are similar to the target domain to boost transfer learning. The measurement of task similarities turns out to be a critical problem. Rather than the brute-force probing as in Pruksachatkun et al. (2020), we embed tasks into a unified space with Fisher Information Matrix (Achille et al., 2019), yielding task graphs across modalities (see Fig. 1). We are the first to explore task relationships among computer vision (CV), natural language processing (NLP), and vision-language (VL) tasks. The graph is computationally efficient and easily scalable for new tasks, which is especially suitable for recent arts that unify increasingly more tasks in one model. + +Given the similar tasks predicted from the computed graph, naive multi-task fine-tuning is effective for achieving satisfactory performance but inefficient for training, especially with increasing tasks and model sizes. Inspired by the parameter-efficient transfer learning methods (Houlsby et al., 2019; Hu et al., 2021; Li & Liang, 2021) with only a few trainable parameters, we propose Predict-Interpolate Tuning ( $\pi$ -Tuning), which interpolates between predicted parameter-efficient experts (e.g., adapter, prompt) for transfer learning. To be specific, as demonstrated in Fig. 2, the parameter-efficient experts are trained individually on each task before being selected according to the predicted task similarities. The weights of selected experts are then efficiently tuned to be ensembled for the downstream task. 
We empirically and theoretically found that task-specific experts trained on similar tasks may lie in the same basin of the loss landscape for the target task, so that a simple interpolation of different experts can yield strong performance, without elaborately designed and highly parameterized fusion modules (Pfeiffer et al., 2020). + +In addition to the newly proposed transfer learning method, several interesting findings were observed by macroscopically analyzing the relationship between CV, NLP, and VL tasks. 1) VL tasks are closer to CV tasks than to NLP tasks, which explains the state-of-the-art performance VL models achieve on vision benchmarks. 2) Image captioning sits in a central position among all the tasks, i.e., it is close to many tasks, demonstrating its importance in VL pre-training. These findings may well inspire future multimodal studies. + +Some pioneer works (Vu et al., 2020; Poth et al., 2021; Vu et al., 2022) in NLP attempted to aggregate prompt embeddings from several source tasks into the target domain. However, they focus on a limited set of language tasks and did not develop a systematic approach to task integration, limiting their scalability to broader domains.
Preliminary + +We use a unified sequence-to-sequence (Seq2Seq) model (e.g., OFA (Wang et al., 2022)) as it can process heterogeneous data via a universal interface, which is helpful for analyzing multimodal tasks. We specify this model as $y = f_{\theta_0}(x)$ , where $\theta_0$ denotes the pre-trained weights of OFA. Here, a task refers to a specific dataset, denoted $\tau_i = (X_i, Y_i)$ , where $i \in \{1, 2, \dots, n\}$ represents the $i$ -th task among all $n$ tasks. $X_i = \{x_i^1, x_i^2, \dots, x_i^{s_i}\}$ is the input data, which can be CV, NLP, or VL data, and $s_i$ is the dataset size of $\tau_i$ . $Y_i = \{y_i^1, y_i^2, \dots, y_i^{s_i}\}$ is the label set, which will later be processed by a verbalizer (Schick & Schütze, 2020) in OFA. We utilize parameter-efficient tuning methods, which keep the pre-trained weights $\theta_0$ frozen and only tune extra added parameters $\varphi$ to adapt to multiple downstream tasks. Given a task $\tau_i$ , we represent the corresponding optimized parameters as $\varphi_i$ . + +Task Definition. We have a task set $T = \{\tau_1, \tau_2, \dots, \tau_n\}$ and the pre-trained model $f_{\theta_0}$ . For a specific task $\tau_i$ , the objective of traditional parameter-efficient tuning is of the following form: + +$$ +\varphi_ {i} \leftarrow \underset {\varphi_ {i}} {\arg \min } L \left(\tau_ {i}; \theta_ {0}, \varphi_ {i}\right), \tag {1} +$$ + +where $L$ represents the loss function. In the Seq2Seq model, it is typically the Cross-Entropy (CE) loss. + +For the target task $\tau_t \in T$ , we want to leverage all of the tasks and their corresponding trained parameters $\Phi = \{\varphi_1, \varphi_2, \dots, \varphi_n\}$ in pursuit of more accurate and robust generalization performance. + +Fisher Information Matrix (FIM). The Fisher information matrix (Amari, 1998) indicates the sensitivity of the model to small perturbations and the curvature of the loss surface, and it can be used as a task embedding (Achille et al., 2019).
Formally, it is the expected covariance of the gradients of the log-likelihood for model parameters $\theta$ , + +![](images/337e664b256e6ad425ad9cc4c956783da5bef169ec07b3dcd3ac265017d389aa.jpg) + +![](images/1e8911b5a49150e16cec2d6f09f0ec3b4592230757f96b21e6ac06d69b2f3a76.jpg) +Figure 2. Overview of the $\pi$ -Tuning method. Step 1 is the traditional parameter-efficient transfer learning (PETL) pipeline. Given the target task, $\pi$ -Tuning further adds Step 2 and Step 3 to utilize task relationships to enhance the target expert. Specifically, we find the most similar tasks in a large pool of tasks and interpolate those experts with our target expert on the target task. + +![](images/57564314ba9945bdde6c4dcd6522a8232855b9ebabff80c7cd249a2a3cec3515.jpg) + +![](images/e728cd327c4f72d17dee3bf308a422ee47329626a48840d4d99c2ea3955a26cc.jpg) + +where $P(x,y)$ represents the data distribution of the task: + +$$ +F _ {\theta} ^ {i} = \underset {(x, y) \sim P _ {\theta} (x, y)} {\mathbb {E}} \nabla_ {\theta} \log P _ {\theta} (y | x) \nabla_ {\theta} \log P _ {\theta} (y | x) ^ {\mathrm {T}}. \tag {2} +$$ + +Specifically, we use the empirical Fisher to compute task embedding for task $\tau_{i}$ : + +$$ +F _ {\theta} ^ {i} = \frac {1}{s _ {i}} \sum_ {j = 1} ^ {s _ {i}} \left[ \nabla_ {\theta} \log P _ {\theta} \left(y _ {i} ^ {j} \mid x _ {i} ^ {j}\right) \nabla_ {\theta} \log P _ {\theta} \left(y _ {i} ^ {j} \mid x _ {i} ^ {j}\right) ^ {\mathrm {T}} \right]. \tag {3} +$$ + +# 2.2. $\pi$ -Tuning + +Our core hypothesis is that similar tasks help enrich domain-relevant or task-relevant training data, while dissimilar tasks may be harmful to the model to perform well in the target task. Based on this hypothesis, we first retrieve a subset of similar tasks given the target task $\tau_{t}$ of the task set $T$ . 
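The diagonal empirical Fisher of Eq. (3) and the cosine-similarity ranking over task embeddings can be sketched in a few lines of NumPy. This is only a toy illustration: the logistic model and all function names below stand in for the actual expert parameters $\varphi$ and OFA log-likelihood gradients, and are not part of the paper's code.

```python
import numpy as np

def diag_empirical_fisher(grads):
    """Diagonal empirical FIM (Eq. 3): mean of squared per-example
    log-likelihood gradients, used as the task embedding F."""
    g = np.asarray(grads)             # shape (s_i, n_params)
    return (g ** 2).mean(axis=0)      # keep only the diagonal entries

def task_similarity(f_a, f_b):
    """Cosine similarity between two diagonal-FIM task embeddings."""
    return float(f_a @ f_b / (np.linalg.norm(f_a) * np.linalg.norm(f_b)))

def logistic_loglik_grads(w, X, y):
    """Per-example gradient of log P_w(y|x) for a toy logistic model."""
    p = 1.0 / (1.0 + np.exp(-X @ w))
    return (y - p)[:, None] * X       # shape (s_i, n_params)

rng = np.random.default_rng(0)
X = rng.normal(size=(128, 4))
w = rng.normal(size=4)
y1 = (X @ w > 0).astype(float)        # task 1 labels
y2 = (X @ -w > 0).astype(float)       # task 2: flipped decision rule

F1 = diag_empirical_fisher(logistic_loglik_grads(w, X, y1))
F2 = diag_empirical_fisher(logistic_loglik_grads(w, X, y2))
print(task_similarity(F1, F1))        # 1.0 up to floating point
print(task_similarity(F1, F2))
```

Since the embeddings are entrywise non-negative, all pairwise cosine similarities land in $[0, 1]$; ranking candidate source tasks then reduces to sorting these scores in descending order and keeping the top-k.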
Then we combine the parameter-efficient experts trained for those similar tasks, which are considered to contain useful knowledge about the corresponding tasks, with the target-task expert via interpolation to seek a transfer gain. We refer to this approach as Predict-Interpolate Tuning ( $\pi$ -Tuning for short). Fig. 2 shows the overall framework. + +Task similarity prediction. Since the number of task combinations in the task set $T$ is exponential in $n$ , it is computationally expensive to identify the subset $R \subseteq T$ of similar tasks for a given target task by brute force. Therefore, we embed each task into a vector and compute task similarities. We choose the FIM to extract the task feature as mentioned in Sec. 2.1. We keep the pre-trained weights $\theta_0$ frozen and tune $\varphi_i$ for each task $\tau_i$ via parameter-efficient tuning. + +Next, given task $\tau_{i}$ , we compute the task embedding based on the Fisher of the task expert's weights $\varphi$ . As the dimension of $F_{\theta}$ is extremely large, we approximate it by only considering its diagonal entries, following the practice in Achille et al. (2019). The task embedding $F$ is therefore measured via $F = \mathrm{diag}(F_{\theta}) = [F_{\theta_1}, F_{\theta_2}, \dots, F_{\theta_n}]$ , where $\theta_i$ denotes the $i$ -th parameter of the expert. Intuitively, the approximation is made under the assumption that correlations between different parametric modules in the expert are not essential. + +We compute the cosine similarity between the embedding of the target task $\tau_{t}$ and each candidate source task. We then rank the similarity scores in descending order to select the top-k similar tasks, forming the subset $R = \{\tau_1^{\mathrm{sim}},\tau_2^{\mathrm{sim}},\dots ,\tau_k^{\mathrm{sim}}\}$ . Both the task set $T$ and the similar task subset $R$ are dynamic: if a new task arrives, we can add it to the task set and compute its similarity to the target task to check whether it belongs to the top-k similar tasks.

| Method | RefCOCO val | RefCOCO testA | RefCOCO testB | RefCOCO+ val | RefCOCO+ testA | RefCOCO+ testB | RefCOCOg val-u | RefCOCOg test-u | SNLI-VE dev | SNLI-VE test | B@4 | M | C | S | VQA test-dev | VQA test-std |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| **Previous SOTAs** |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |
| UNITER (Chen et al., 2019) | 81.41 | 87.04 | 74.17 | 75.90 | 81.45 | 66.70 | 74.86 | 75.77 | 79.40 | 79.40 | - | - | - | - | 73.80 | 74.00 |
| VILLA (Gan et al., 2020) | 82.39 | 87.48 | 74.84 | 76.17 | 81.54 | 66.84 | 76.18 | 76.71 | 80.20 | 80.00 | - | - | - | - | 74.70 | 74.90 |
| MDETR (Kamath et al., 2021) | 86.75 | 89.58 | 81.41 | 79.52 | 84.09 | 70.62 | 81.64 | 80.89 | 80.90 | 81.20 | - | - | - | - | 77.70 | 77.60 |
| VL-T5 (Cho et al., 2021) | - | - | - | - | - | - | 71.20 | 71.30 | - | - | 34.50 | 28.70 | 116.5 | 21.90 | - | 70.30 |
| UNICORN (Yang et al., 2021) | 88.29 | 90.42 | 83.06 | 80.30 | 85.05 | 71.88 | 83.44 | 83.93 | - | - | 35.80 | 28.40 | 119.10 | 21.50 | - | - |
| **OFA-Base** |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |
| Finetuning (Wang et al., 2022) | 88.48 | 90.67 | 83.30 | 81.39 | 87.15 | 74.29 | 82.29 | 82.31 | 89.30 | 89.20 | 41.00 | 30.90 | 138.2 | 24.20 | 78.00 | 78.10 |
| BitFit (Zaken et al., 2021) | 76.32 | 81.21 | 72.80 | 67.29 | 74.14 | 59.21 | 68.79 | 69.61 | 84.84 | 84.48 | 39.80 | 30.20 | 134.6 | 23.86 | 73.03 | 73.26 |
| LoRA (Hu et al., 2021) | 81.91 | 85.89 | 76.90 | 72.29 | 79.22 | 62.28 | 72.55 | 73.26 | 87.83 | 87.93 | 39.80 | 30.20 | 134.5 | 23.73 | 75.57 | 75.67 |
| Prompt Tuning (Yang et al., 2022) | 84.53 | 85.21 | 77.36 | 76.34 | 81.44 | 67.68 | 75.61 | 76.57 | 88.18 | 88.59 | 39.70 | 30.10 | 134.2 | 23.50 | 74.31 | 74.47 |
| Adapter (Houlsby et al., 2019) | 86.63 | 90.01 | 81.71 | 79.45 | 84.89 | 71.36 | 79.58 | 80.35 | 87.90 | 87.67 | 39.80 | 30.60 | 134.6 | 23.80 | 75.59 | 75.94 |
| π-Adapter | 86.98 | 89.99 | 81.73 | 80.10 | 85.87 | 71.38 | 81.72 | 81.75 | 89.23 | 89.40 | 41.00 | 30.90 | 137.0 | 23.90 | 75.88 | 76.13 |
| **OFA-Large** |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |
| Finetuning | 90.05 | 92.93 | 85.26 | 84.60* | 89.99* | 77.71* | 85.89 | 86.55 | 90.36* | 89.91* | 41.90* | 31.40* | 141.8* | 24.50* | 80.40 | 80.70 |
| BitFit | 89.61 | 92.20 | 84.91 | 82.60 | 88.08 | 75.16 | 84.66 | 84.68 | 89.70 | 89.42 | 41.02 | 30.92 | 138.8 | 24.23 | 78.23 | 78.44 |
| LoRA | 89.56 | 92.59 | 84.63 | 83.00 | 88.70 | 75.46 | 84.48 | 85.01 | 89.49 | 89.15 | 41.50 | 31.10 | 140.4 | 24.40 | 78.20 | 78.16 |
| Prompt Tuning | 90.05 | 92.31 | 85.59 | 84.54 | 89.40 | 77.77 | 85.27 | 85.89 | 89.19* | 89.11* | 41.60* | 30.80* | 140.5* | 24.30* | 78.30 | 78.53 |
| Adapter | 90.05 | 92.42 | 84.83 | 84.50 | 89.66 | 77.26 | 85.48 | 85.88 | 90.04 | 89.59 | 41.80 | 31.30 | 140.6 | 24.50 | 78.55 | 78.62 |
| π-Adapter | 90.49 | 92.93 | 85.91 | 84.92 | 90.03 | 77.91 | 86.60 | 86.92 | 90.16 | 90.01 | 41.70 | 31.40 | 140.7 | 24.50 | 78.78 | 78.82 |

Table 1. Comparison with state-of-the-art (SOTA) PETL and full fine-tuning methods. We report the experimental results on RefCOCO, RefCOCO+, RefCOCOg, SNLI-VE, COCO Image Captioning (B@4, M, C, S), and VQA. The overall best result is underlined while bold signifies the best among parameter-efficient methods. * denotes the results of our re-trained models based on official codes. + +Interpolation of multiple lightweight experts. Given the similar task subset $R$ , we retrieve the corresponding expert subset, i.e., $\phi = \{\varphi_1^{\mathrm{sim}}, \varphi_2^{\mathrm{sim}}, \dots, \varphi_k^{\mathrm{sim}}\} \subseteq \Phi$ . Then we can derive the combined parameters $\bar{\varphi}$ as follows: + +$$ +\bar {\varphi} = \operatorname {softmax} (\alpha) _ {0}\, \varphi_ {t} + \sum_ {i = 1} ^ {k} \operatorname {softmax} (\alpha) _ {i}\, \varphi_ {i} ^ {\mathrm {sim}}. \tag {4} +$$ + +Then, we may further tune $\bar{\varphi}$ on the target task $\tau_t^1$ : + +$$ +\varphi^ {*} \leftarrow \underset {\bar {\varphi}} {\arg \min }\, L \left(\tau_ {t}; \theta_ {0}, \bar {\varphi}\right). \tag {5} +$$ + +$\pi$ -Tuning introduces no further inference latency, as the dimension of $\varphi^{*}$ is the same as before. + +Relationship between FIM and landscape. We provide a complementary analytical argument for the effectiveness of $\pi$ -Tuning, showing that tasks with similar FIMs lead to close local optima, so that FIM similarity is a good indicator of the effectiveness of interpolation. For simplicity, we consider two tasks $\tau_{1}$ and $\tau_{2}$ with the cross-entropy loss, formulated as $L = \mathbb{E}_{(x,y)\sim P}[-\log P_{\varphi}(y|x)]$ .
Let $\varphi_{1}$ and $\varphi_{2}$ be the corresponding local minima for $\tau_{1}$ and $\tau_{2}$, both optimized from the same initial parameters $\varphi_0$. The FIM of each task can be seen as the Hessian matrix of the loss (i.e., the negative Hessian of the log-likelihood), which serves as the second derivative of the loss function.

We now state our main theorem in Theorem 2.1, demonstrating that the distance between $\varphi_{1}$ and $\varphi_{2}$ can be bounded when the two tasks have similar non-singular FIMs along the linear path and their gradients at $\varphi_0$ are close. The detailed derivation is presented in Appendix B.

Theorem 2.1. Assume that the gradients of the two tasks at the initial parameters $\varphi_0$ are close and that the two tasks have similar non-singular FIMs along the linear path. Then the gap between the two local minima can be controlled by a constant $C$, i.e., $\| \varphi_1 - \varphi_2 \|_2 \leq C$.

We note that, while previous work revealed that fine-tuned checkpoints initialized from the same pre-trained model lie in the same basin of the error landscape (Neyshabur et al., 2020), we further find that PETL modules trained for similar tasks also have this property. Our method extends weight averaging of checkpoints fine-tuned under varied hyperparameter configurations for a single task (Wortsman et al., 2022) to various PETL methods and various tasks. As it combines experts from different domains, our method shows more robust performance than fine-tuning and the original PETL methods under distribution shift.
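Putting the pieces together, the retrieve-then-interpolate procedure (task embeddings from a diagonal FIM approximation, cosine-similarity retrieval, then the softmax-weighted combination of Eq. (4)) can be sketched in numpy. This is a hedged sketch: the function names and the dict-of-arrays representation of an expert are our assumptions, not the paper's implementation.

```python
import numpy as np

def fim_diagonal(per_sample_grads):
    """Diagonal FIM approximation used as a task embedding:
    average the squared per-sample gradients of the log-likelihood
    (following Achille et al., 2019)."""
    g = np.asarray(per_sample_grads)   # shape (num_samples, num_params)
    return (g ** 2).mean(axis=0)       # shape (num_params,)

def top_k_similar(target_emb, candidate_embs, k=2):
    """Indices of the k candidate tasks most cosine-similar to the target."""
    def cos(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
    sims = [cos(target_emb, e) for e in candidate_embs]
    return sorted(range(len(sims)), key=lambda i: -sims[i])[:k]

def interpolate_experts(phi_t, similar_experts, alpha):
    """Eq. (4): softmax-weighted combination of the target expert phi_t
    with the k retrieved experts; alpha holds the k+1 interpolation logits,
    alpha[0] weighting the target expert."""
    w = np.exp(alpha - np.max(alpha))  # numerically stable softmax
    w = w / w.sum()
    combined = {}
    for name in phi_t:
        combined[name] = w[0] * phi_t[name]
        for i, expert in enumerate(similar_experts, start=1):
            combined[name] = combined[name] + w[i] * expert[name]
    return combined
```

The combined $\bar{\varphi}$ would then be fine-tuned on the target task as in Eq. (5), with the logits $\alpha$ trained jointly.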
| Method | Food101 | Caltech101 | DTD | EuroSAT | Aircraft | Flowers102 | Pets | Cars | AVG |
|---|---|---|---|---|---|---|---|---|---|
| **Multimodal Pretrained Baseline Models** | | | | | | | | | |
| CLIP (Radford et al., 2021) | 85.49 | 93.76 | 73.40 | 95.70 | 40.02 | 94.94 | 79.61 | 62.84 | 78.22 |
| FLAVA (Singh et al., 2022) | 88.51 | 95.74 | 77.29 | 97.26 | 47.31 | 96.37 | 84.82 | 70.87 | 82.27 |
| **16-shot on OFA-Base** | | | | | | | | | |
| Adapter | 69.27 | 92.33 | 54.31 | 31.49 | 31.32 | 93.06 | 77.81 | 42.10 | 61.46 |
| π-Adapter | 69.85 (+0.58) | 92.74 (+0.41) | 57.33 (+3.02) | 36.49 (+5.00) | 43.71 (+12.39) | 94.52 (+1.46) | 79.39 (+1.58) | 54.04 (+11.94) | 66.01 (+4.55) |
| **full data on OFA-Base** | | | | | | | | | |
| Adapter | 85.77 | 95.17 | 72.75 | 93.01 | 45.24 | 97.52 | 89.26 | 53.71 | 79.05 |
| π-Adapter | 86.16 (+0.39) | 95.82 (+0.65) | 73.70 (+0.95) | 93.94 (+0.93) | 52.42 (+7.18) | 98.05 (+0.53) | 90.35 (+1.09) | 61.30 (+7.59) | 81.47 (+2.42) |
Table 2. Experimental results on eight common vision tasks. We evaluate $\pi$-Tuning in both few-shot and full-data scenarios. The predicted auxiliary experts for vision tasks all contain image captioning, verifying the cross-modal transfer benefits.
| Method | MNLI | QQP | MRPC | QNLI | RTE | SST2 | AVG |
|---|---|---|---|---|---|---|---|
| **Multimodal Pretrained Baseline Models** | | | | | | | |
| VisualBERT (Li et al., 2019) | 81.6 | 89.4 | 71.9 | 87.0 | 56.6 | 89.4 | 79.3 |
| UNITER (Chen et al., 2020) | 80.9 | 89.2 | 69.3 | 86.0 | 55.6 | 89.7 | 78.5 |
| Uni-Perceiver (Zhu et al., 2022) | 81.7 | 87.1 | 86.6 | 89.9 | 64.3 | 90.2 | 83.3 |
| **zero shot on OFA-Large** | | | | | | | |
| OFA-L | 37.12 | 37.31 | 62.99 | 49.50 | 49.46 | 55.85 | 36.53 |
| π-Adapter | 44.36 (+7.24) | 61.68 (+24.37) | 69.85 (+6.86) | 55.98 (+6.48) | 51.99 (+2.53) | 56.77 (+0.92) | 42.58 (+6.05) |
| **full data on OFA-Large** | | | | | | | |
| Adapter | 85.99 | 90.70 | 87.50 | 92.49 | 72.20 | 93.81 | 87.12 |
| π-Adapter | 86.06 (+0.07) | 91.16 (+0.46) | 87.75 (+0.25) | 92.66 (+0.17) | 76.53 (+4.33) | 93.81 (+0.00) | 88.00 (+0.88) |
| **full data on T5-Base** | | | | | | | |
| Adapter | 86.03 | 91.02 | 89.71 | 92.51 | 73.57 | 94.72 | 87.93 |
| π-Adapter | 86.19 (+0.16) | 91.13 (+0.11) | 90.20 (+0.49) | 92.62 (+0.11) | 82.86 (+9.29) | 95.07 (+0.35) | 89.68 (+1.75) |
Table 3. Experimental results on natural language understanding tasks from the GLUE benchmark (Wang et al., 2018). We experiment with $\pi$-Tuning in both zero-shot and full-data settings with respect to two backbone models, OFA and T5.

# 3. Experiments

This section presents our key experimental findings. We begin with the experimental settings (described in Sec. 3.1), and then verify the effectiveness of $\pi$-Tuning on both cross-modal and uni-modal tasks (described in Sec. 3.2). Next, we justify the similarity measurement and the interpolation operation (described in Sec. 3.3). Afterward, we study the task-level transferability of the model after $\pi$-Tuning (described in Sec. 3.4). Finally, ablation studies of the key design choices are presented (described in Sec. 3.5).

# 3.1. Experimental Settings

Implementation details. We choose the vision-language foundation model OFA (Wang et al., 2022) as the pretrained model, using the widely adopted base-size and large-size versions with 180M and 470M parameters, respectively. We set the default number of auxiliary experts to $k = 2$. We follow the few-shot data split used in Zhou et al. (2022a). More experimental details are provided in Appendix A.

Multimodal task pool. We evaluate $\pi$-Tuning on a task pool across vision, language, and VL tasks. For vision tasks, we choose 8 common visual recognition tasks. For language tasks, we evaluate the model on 8 tasks from GLUE (Wang et al., 2018). For VL tasks, we experiment on both understanding and generation tasks, including RefCOCO, RefCOCO+ (Yu et al., 2016), RefCOCOg (Mao et al., 2016), VQAv2 (Goyal et al., 2017), SNLI-VE (Xie et al., 2019), and COCO image captioning (Chen et al., 2015). We follow the evaluation metrics in Wang et al. (2022) for VL and language tasks and in Yang et al. (2022) for vision tasks.

Compared methods.
We compare $\pi$ -tuning with finetuning and other four types of state-of-the-art PETL methods: + +- Bitfit (Zaken et al., 2021) optimizes bias terms in all linear layers at every Transformer layer. +- Adapter tuning (Houlsby et al., 2019) insert adapters between transformer layers, consists of a down-project $W_{\mathrm{down}} \in \mathbb{R}^{h \times r}$ , followed by a nonlinear activation function, and an up-project $W_{\mathrm{up}} \in \mathbb{R}^{r \times h}$ , where $h$ is the hidden size of the transformer model and $r$ is a hyperparameter of adapters as bottleneck dimension. We set $r = 128$ in all experiments. +- Prompt tuning (Li & Liang, 2021) prepends vectors + +
| Method | Training Time (GPU hours) | Throughput (samples/sec) | Deployment Params (%) | Training Params (%) |
|---|---|---|---|---|
| Bitfit | 203 | 19.04 | 0.04 | 0.04 |
| LoRA | 215 | 17.78 | 0.99 | 0.99 |
| Prompt Tuning | 292 | 18.45 | 1.03 | 1.03 |
| Adapter | 344 | 18.60 | 2.60 | 2.60 |
| π-Adapter | 17 | 18.60 | 2.60 | 7.42 |
to the keys and values of the attention module at every layer. Specifically, there is a tunable prompt embedding $P \in \mathbb{R}^{L \times l \times h}$, where $L$ is the number of layers and $l$ is a hyperparameter defining the prefix vector length, used to retrieve the prefix vector at every layer. We follow the setting of $l$ in Yang et al. (2022).

- LoRA (Hu et al., 2021) decomposes weight matrices of transformer layers into trainable low-rank parameters to approximate the weight updates. For a linear projection $h = Wx$ where $W \in \mathbb{R}^{d \times s}$, LoRA adds two trainable matrices $B \in \mathbb{R}^{d \times r}$ and $A \in \mathbb{R}^{r \times s}$, where the rank $r$ is a hyperparameter with $r \ll \min(d, s)$, so that the linear projection becomes $h = Wx + BAx$. We set $r = 16$ in all experiments.

We refer to $\pi$-Tuning applied to adapters as $\pi$-Adapter, and similarly for $\pi$-Prompt and $\pi$-LoRA. We primarily use $\pi$-Adapter in experiments owing to its superior performance.

# 3.2. Comparison with PETL Methods

Multi-/uni-modal tasks. Table 1 shows the results of $\pi$-Tuning on 6 multimodal datasets, alongside previous SOTA results. We can see that (1) $\pi$-Tuning consistently improves the original adapters across all tasks; (2) $\pi$-Tuning outperforms all the other parameter-efficient tuning methods across all tasks without introducing additional inference latency, indicating that $\pi$-Tuning is stronger in storage-constrained scenarios; (3) $\pi$-Tuning achieves better or comparable results relative to full finetuning on 5 out of 6 datasets for the large-size model, while using significantly fewer trainable parameters. Although inferior to finetuning at base size, $\pi$-Tuning still offers a non-trivial performance gain over PETL methods, similar to the results in Yang et al. (2022).

We further show that $\pi$-Tuning is competitive on uni-modal tasks, including both vision and language tasks.
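As a concrete reference for the adapter and LoRA baselines described above, here is a minimal numpy sketch of both forward passes. The shapes, the ReLU nonlinearity, and the zero-initialization convention are illustrative assumptions, not the exact implementation used in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
h, r = 768, 128          # hidden size and adapter bottleneck (r = 128 as in the paper)

# Bottleneck adapter: down-project, nonlinearity, up-project, residual connection.
W_down = rng.normal(0, 0.02, (h, r))
W_up = np.zeros((r, h))  # zero-init so the adapter starts as the identity

def adapter(x):
    return x + np.maximum(x @ W_down, 0.0) @ W_up  # ReLU as the nonlinearity

# LoRA: low-rank update BAx added to a frozen projection Wx.
d, s, rank = 768, 768, 16  # rank r = 16 as in the paper
W = rng.normal(0, 0.02, (d, s))
B = np.zeros((d, rank))    # zero-init B so BA starts at zero
A = rng.normal(0, 0.02, (rank, s))

def lora_proj(x):
    return W @ x + B @ (A @ x)

x = rng.normal(size=h)
assert np.allclose(adapter(x), x)        # zero-init up-projection: identity at start
assert np.allclose(lora_proj(x), W @ x)  # zero-init B: frozen projection at start
```

With the up-projection and $B$ initialized to zero, both modules initially reproduce the pretrained behavior, so tuning starts from the backbone's own predictions.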
The outcomes are shown in Table 2 and Table 3, where $\pi$-Tuning achieves a consistent performance gain. Specifically, we

Table 4. Computational costs of different methods on RefCOCO, including the parameter proportion for the tunable and deployment parts of the network, as well as wall time during training versus inference. Training time is measured in A100 GPU hours. The inference cost is indicated by the throughput, i.e., the number of samples processed per second by a single A100 GPU.
| Method | RefCOCO val | RefCOCO testA | RefCOCO testB | RefCOCO+ val | RefCOCO+ testA | RefCOCO+ testB | RefCOCOg val-u | RefCOCOg test-u |
|---|---|---|---|---|---|---|---|---|
| Prompt Tuning | 84.53 | 85.21 | 77.36 | 76.34 | 81.44 | 67.68 | 75.61 | 76.57 |
| π-Prompt | 85.75 | 88.85 | 79.67 | 77.84 | 83.09 | 69.61 | 77.41 | 78.06 |
| LoRA | 81.91 | 85.89 | 76.90 | 72.29 | 79.22 | 62.28 | 72.55 | 73.26 |
| π-LoRA | 85.82 | 88.97 | 81.77 | 78.41 | 83.95 | 69.22 | 77.53 | 78.38 |
Table 5. $\pi$-Tuning results based on Prompt Tuning and LoRA for the base-size model on referring expression comprehension datasets, indicating that $\pi$-Tuning is agnostic to PETL methods and can provide a consistent gain.

observe cross-modal transfer benefits in these experiments, where experts for image captioning can help with vision tasks and language entailment tasks can benefit from experts for visual entailment. We attribute this to the similar semantics of the tasks, as captioning can be regarded as a more detailed classification task and some visual entailment samples may directly rely on language entailment.

We list and discuss the computational costs of different PETL methods during training versus inference. As demonstrated in Table 4, our $\pi$-Adapter shows the same throughput and parameters during inference as the original adapters. $\pi$-Adapter requires more parameters for training due to the interpolation of multiple experts from similar tasks, which is the core insight of our paper. However, the training time for such an interpolation is quite short as the experts have already been well-trained. We would like to clarify that the performance improvement of $\pi$-Adapter does not come from increasing the number of tunable parameters. As shown in Table 7 of our paper, performance decreases when solely scaling up the tunable parameters without initialization from experts of similar tasks, indicating the effectiveness of transferring knowledge from similar tasks' experts.

Few/zero-shot setting. To better understand the benefit of knowledge transferred from similar task experts, we experiment under two low-data settings, zero-shot natural language understanding and few-shot image classification, to validate the effectiveness of $\pi$-Tuning when data is scarce.
We show the 16-shot image classification results in Table 2, where $\pi$-Tuning brings more improvement than in the full-data setting, with an average accuracy improvement of $4.55\%$ across 8 tasks. For the zero-shot setting, as we cannot derive the target expert, we test the zero-shot transfer performance on the target task using the expert of its nearest neighbor (most similar task). The zero-shot natural language understanding results are presented in Table 3, where the average improvement across tasks is larger than that under the full-data setting.

![](images/aa943419f1096473ef4e569431d4212f39d2afbe3e1afea8c85177ef3246303f.jpg)
(a) linear mode connectivity analysis on RefCOCOg.

![](images/be13650f2c54693ca9e3569e6519c160f5b9c0ac55ac500e7a0c6b9933b33a09.jpg)
(b) relationship between similarity rank and interpolation accuracy.
Figure 3. (a) Accuracy when linearly interpolating between the parameter-efficient tuning checkpoint of RefCOCOg and checkpoints of other tasks (RefCOCO+ and RefCOCO). The advantage of interpolating is correlated with the FIM similarity between tasks. The performance maximum when interpolating similar tasks is higher than for dissimilar tasks and requires a greater interpolation weight. (b) Direct transfer performance (accuracy at $\alpha = 1$) also correlates with FIM similarity. (c) The test error surface on RefCOCOg (Mao et al., 2016), as a function of model weights in a two-dimensional subspace. It shows that checkpoints of similar tasks lie in the same basin of the surface. Interpolation of those checkpoints can achieve lower test error on the surface. The visualization follows Garipov et al. (2018), which derives an orthonormal basis $\hat{u}, \hat{v}$ from three checkpoints.

![](images/e2f89fe949c7bd9efa8c74e05612cc7e318a60a9204f57953acff10bfed7d5ec.jpg)
(c) RefCOCOg test error landscape.

Pretrained foundation models.
We also extend $\pi$-Tuning to a uni-modal pretrained foundation model, T5 (Raffel et al., 2020), to solve natural language understanding tasks. We report the results in Table 3, where we find that RTE is the task that benefits most, with an improvement of $9.29\%$ accuracy.

Experts from PETL methods. In Table 5, experiments on the RefCOCO, RefCOCO+, and RefCOCOg datasets with the base-size model show that $\pi$-Tuning is agnostic to PETL methods, providing a significant and consistent performance gain for LoRA and Prompt Tuning. $\pi$-LoRA improves over the original LoRA by a large margin of $4.20\%$ average accuracy across the three datasets, and $\pi$-Prompt improves over the original Prompt Tuning by $1.94\%$.

# 3.3. Task Relationships

Linear mode connectivity. As defined in Frankle et al. (2020), two parameter vectors $w_{1}$ and $w_{2}$ exhibit linear mode connectivity if the error barrier (Garipov et al., 2018; Draxler et al., 2018) height is $\approx 0$ along the linear path between them. To build intuition, we first study the performance along the linear path between the target task expert $\varphi_{t}$ and experts for other tasks from the task set to see whether they have linear mode connectivity. We use adapters as parameter-efficient experts to store task information and RefCOCOg as the target task. We vary the interpolation coefficient $\alpha$ from 0 to 1 with an interval of 0.05, and the combined expert $\bar{\varphi}$ is given by $(1 - \alpha)\varphi_{t} + \alpha \varphi$, $\varphi \in \Phi$, which is a special case of $k = 1$ as defined in Sec. 2.2.

As shown in Fig. 3(a), the model can benefit from interpolation compared to the original target task expert $\varphi_t$ (at $\alpha = 0$), and there is linear mode connectivity between the target task expert and experts of similar tasks, while for dissimilar tasks there is a large error barrier along the linear path.
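The linear-path sweep described above can be sketched as follows. This is a minimal illustration under our own assumptions: experts are represented as dicts of numpy arrays, and a task-specific evaluation routine (not shown) would supply the error at each point; one common definition of the barrier height is used.

```python
import numpy as np

def linear_path(phi_t, phi, num_points=21):
    """Yield (alpha, combined expert) along the linear path
    (1 - alpha) * phi_t + alpha * phi, with alpha from 0 to 1
    (num_points=21 gives the 0.05 interval used in the paper)."""
    for alpha in np.linspace(0.0, 1.0, num_points):
        combined = {name: (1.0 - alpha) * phi_t[name] + alpha * phi[name]
                    for name in phi_t}
        yield float(alpha), combined

def error_barrier(errors):
    """Barrier height along the path: the largest increase of the
    measured error over the linear interpolation of the endpoint
    errors (following Frankle et al., 2020)."""
    errs = np.asarray(errors, dtype=float)
    baseline = np.linspace(errs[0], errs[-1], len(errs))
    return float(np.max(errs - baseline))
```

A barrier height near zero between two experts indicates linear mode connectivity; a large positive barrier corresponds to the dissimilar-task case in Fig. 3(a).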
Furthermore, these results suggest that (1) experts with higher similarity need a greater interpolation coefficient, and (2) both direct transfer $(\alpha = 1)$ performance and maximum interpolation performance correlate with similarity.

Task similarity measurement. To investigate the correlation between task similarity and interpolation performance, we consider the interpolation performance on the validation and test splits of RefCOCO, RefCOCO+, and RefCOCOg. We compare the average accuracy across those splits for interpolated experts with different similarities. The results are illustrated in Fig. 3(b), where the interpolation accuracy increases when experts with higher similarity to the target task are interpolated. We also visualize the task similarity of each task pair from our task space, which consists of both uni-modal and multimodal tasks, in Fig. 1. The results illustrate that the task embeddings of vision tasks are closer to those of vision-and-language tasks than to those of language tasks, and that the embedding of the image captioning task is similar to almost all vision and VL tasks. We conjecture that this is one of the reasons why image captioning is important in image-text multimodal pretraining.

Error landscape visualization. We also visualize two slices of the test error landscape when interpolating experts in Fig. 3(c), where the error contours are basin-shaped and none of the individual domain-specific experts is optimal. The results suggest that interpolation may reach a better point in the basin of the landscape.

# 3.4. Cross-Task Transferability

Distribution shift. The boxplot in Fig. 4 summarizes the relative performance drop of $\pi$-Adapter due to test distri

![](images/038b268b627bc8c475d5d25315b336dd6764a5d205b8afd04d7013437809d1b1.jpg)
Figure 4. Relative performance drop (%) of $\pi$-Adapter compared to domain-specific models on referring expression comprehension datasets (RefCOCO, RefCOCO+, RefCOCOg). FT represents finetuning and the green triangles denote the mean.
| Method | RefCOCO val | RefCOCO testA | RefCOCO testB | RefCOCO+ val | RefCOCO+ testA | RefCOCO+ testB | RefCOCOg val-u | RefCOCOg test-u |
|---|---|---|---|---|---|---|---|---|
| **OFA-Base** | | | | | | | | |
| Adapter | 86.34 | 89.61 | 80.82 | 74.60 | 83.20 | 69.79 | 79.74 | 80.65 |
| π-Adapter | 87.12 | 90.30 | 82.16 | 79.46 | 84.63 | 71.43 | 80.84 | 82.00 |
| **OFA-Large** | | | | | | | | |
| Adapter | 90.00 | 92.88 | 85.24 | 83.81 | 89.08 | 76.54 | 85.99 | 85.96 |
| π-Adapter | 90.55 | 93.12 | 85.85 | 84.77 | 90.31 | 77.68 | 86.91 | 86.88 |
bution shift compared to domain-specific models, which are optimized directly on the test domain. Specifically, we conduct experiments on referring expression comprehension datasets. As it is combined with experts trained on other domains, $\pi$-Adapter shows more robust performance than adapters and fine-tuning trained on the specific domain when the test distribution is shifted. Even though $\pi$-Adapter is further tuned on another specific domain, it can still benefit from the interpolation with experts trained on other domains, while directly optimizing adapters or fine-tuning all the parameters may overfit to the specific domain, lacking domain generalization ability.

Multi-task learning. As interpolation with experts of different tasks can be regarded as implicit multitask learning, we experiment with $\pi$-Tuning in a multitask setting, as demonstrated in Table 6. Compared to direct multitask learning on different tasks, we observe that $\pi$-Tuning performs better on all task splits, even exceeding the task-specific models' performance.

Table 6. $\pi$-Tuning can improve performance compared to direct optimization in a multitask setting, achieving the task-specific model's performance across all task splits.
| Method | RefCOCO val | RefCOCO testA | RefCOCO testB | RefCOCO+ val | RefCOCO+ testA | RefCOCO+ testB | RefCOCOg val-u | RefCOCOg test-u |
|---|---|---|---|---|---|---|---|---|
| w/o init. | 89.95 | 92.36 | 84.79 | 83.81 | 89.31 | 76.87 | 85.36 | 85.68 |
| only scale | 90.67 | 92.75 | 85.55 | 84.64 | 89.71 | 77.03 | 86.34 | 86.75 |
| π-Adapter | 90.49 | 92.93 | 85.91 | 84.92 | 90.03 | 77.91 | 86.60 | 86.92 |
Table 7. Ablation of $\pi$-Tuning. "w/o init." denotes randomly initialized auxiliary experts, and "only scale" means only the interpolation weight is tuned.

![](images/f92eaeba5473b5bf9556c86e4e2f7be86f10e22c3c3814f21d88a65a4f89decf.jpg)
Figure 5. The ablation of the number of experts used in $\pi$-Tuning. The performance first rises to a maximum as the number of auxiliary experts increases and then gradually falls.

# 3.5. Ablation Study

The ablation of $\pi$-Tuning. The ablation study of the proposed method is shown in Table 7. "w/o init." denotes that we randomly initialize the auxiliary experts, which performs even worse than the original adapter. "Only scale" means we only tune the interpolation weight $\alpha$, achieving better results than the original adapter while slightly worse than $\pi$-Adapter, which is reasonable as it only updates a subspace of the parameters in $\pi$-Tuning.

The ablation of the number of experts. We experimentally explore the relationship between performance on 3 target tasks and the number of auxiliary experts for each of them, as demonstrated in Fig. 5. We observe that the performance first rises to a maximum as $k$ increases and then gradually falls, with the best value of $k$ around 2. We conjecture the reason may be that, as interpolation is conducted in descending order of task similarity, early interpolation with highly similar tasks improves performance, while subsequently mixing in dissimilar experts has a negative influence on the target task.

# 4. Scope and Limitations

While this work has so far demonstrated that interpolation of the experts learned from similar tasks is a useful technique for improving accuracy and robustness, this section explores the limitations of the approach, which we expect to be addressed in future studies.

Task similarity measurement. The task similarities in $\pi$-Tuning are measured by a diagonal approximation of the FIM, following the practice in Achille et al. (2019).
While it works well for multi-task interpolation in our paper, there may be other solutions that avoid the unmanageable computational overhead of a full FIM. Recently, Vu et al. (2022) and Zhou et al. (2022b) demonstrated that the parameters of the expert itself can be regarded as the task embedding, which could serve as a substitute solution in our approach.

Applicability. We verify the effectiveness of $\pi$-Tuning on the OFA-Base/Large and T5-Base foundation models due to limited computational resources. We notice that parameter-efficient experts work well in the field of AIGC, as in Mou et al. (2023), which combines several adapters tuned for different conditions to achieve controllable image synthesis. More evaluations on much larger backbones and in more fields would make the work more practical. Moreover, the number of tasks used for interpolation is manually designated in the current version; it would be promising to determine the number of tasks adaptively.

# 5. Related Work

Parameter efficient transfer learning. In recent years, large-scale models pre-trained on huge datasets have shown great capability on downstream tasks. However, directly finetuning large-scale models is expensive in memory and storage. To mitigate this problem, researchers have proposed several PETL methods to adapt large-scale pretrained models with a few trainable parameters. Houlsby et al. (2019) proposed inserting adapter layers between transformer layers. Hu et al. (2021) proposed injecting trainable low-rank matrices into transformer layers to approximate the weight updates. Li & Liang (2021); Lester et al. (2021); Liu et al. (2021) proposed optimizing the input word embeddings. However, these PETL methods tend to be limited to the target downstream task, ignoring the potential complementarity between different tasks and modalities.

Identifying beneficial task relationships. Poth et al. (2021); Aghajanyan et al. (2021); Lu et al.
(2020) have shown that pre-finetuning, an additional large-scale multitask learning stage between pre-training and finetuning, can significantly improve performance on downstream tasks. However, multitask learning may introduce conflicts between different tasks, or even have an adverse impact on target tasks. To identify beneficial task relationships, Bingel & Søgaard (2017) rely on features derived from learning curves, and Alonso & Plank (2017) proposed using characteristics of the datasets. Zamir et al. (2018) proposed representing CV task relationships as an affinity matrix and optimizing the affinity matrix to obtain the most training-efficient task set. Achille et al. (2019); Vu et al. (2020) proposed using the Fisher information matrix to extract task relationships.

Averaging model weights. Ensembling the outputs of multiple models is an important method for improving the performance of deep learning models (Dietterich, 2000; Lakshminarayanan et al., 2017). However, when the size of the model is huge, ensembling the outputs of multiple models is prohibitively expensive. Unlike model ensembles, averaging the weights of models is a foundational technique in convex optimization and deep learning that incurs no extra cost at inference time. Neyshabur et al. (2020) find that when two models are fine-tuned from the same pre-trained initialization, the interpolated model achieves at least the accuracy of the endpoints. Wortsman et al. (2022) proposed producing a better model by averaging the weights of models finetuned with different hyperparameter configurations.

# 6. Conclusion

This paper presents $\pi$-Tuning, a new parameter-efficient transfer learning framework, which exploits universal representations across modalities by taking advantage of similar tasks. Given a target task, $\pi$-Tuning first retrieves several similar tasks in arbitrary modalities and obtains a more transferable model by the interpolation of cross-modal experts.
We theoretically and experimentally demonstrate the retrieved tasks tend to locate around the same basin on the error landscape. Experiments on diverse unimodal and multimodal tasks verify the cross-task transferability of the model in both full-data and low-data scenarios. + +Acknowledgement. This paper is partially supported by the National Key R&D Program of China No.2022ZD0161000 and the General Research Fund of Hong Kong No.17200622. We thank Lehan Wang for her technical assistance, and Zhuoning Guo, Wenhao Lin, and Kaiyu Huang for their helpful comments. + +# References + +Achille, A., Lam, M., Tewari, R., Ravichandran, A., Maji, S., Fowlkes, C. C., Soatto, S., and Perona, P. Task2vec: Task embedding for meta-learning. In Proceedings of the IEEE/CVF international conference on computer vision, pp. 6430-6439, 2019. +Aghajanyan, A., Gupta, A., Shrivastava, A., Chen, X., Zettlemoyer, L., and Gupta, S. Muppet: Massive multitask representations with pre-finetuning. In Moens, M., Huang, X., Specia, L., and Yih, S. W. (eds.), Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, EMNLP 2021, Virtual Event / Punta Cana, Dominican Republic, 7-11 November, 2021, pp. 5799-5811. Association for Computational Linguistics, 2021. doi: 10.18653/v1/2021.emnlp-main.468. URL https://doi.org/10.18653/v1/ + +2021.emnlp-main.468. +Alonso, H. M. and Plank, B. When is multitask learning effective? semantic sequence prediction under varying data conditions. In Lapata, M., Blunsom, P., and Koller, A. (eds.), Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics, EACL 2017, Valencia, Spain, April 3-7, 2017, Volume 1: Long Papers, pp. 44-53. Association for Computational Linguistics, 2017. doi: 10.18653/v1/e17-1005. URL https://doi.org/10.18653/v1/e17-1005. +Amari, S.-I. Natural gradient works efficiently in learning. Neural computation, 10(2):251-276, 1998. +Bingel, J. and Søgaard, A. 
Identifying beneficial task relations for multi-task learning in deep neural networks. In Lapata, M., Blunsom, P., and Koller, A. (eds.), Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics, EACL 2017, Valencia, Spain, April 3-7, 2017, Volume 2: Short Papers, pp. 164-169. Association for Computational Linguistics, 2017. doi: 10.18653/v1/e17-2026. URL https://doi.org/10.18653/v1/e17-2026. +Brown, T., Mann, B., Ryder, N., Subbiah, M., Kaplan, J. D., Dhariwal, P., Neelakantan, A., Shyam, P., Sastry, G., Askell, A., et al. Language models are few-shot learners. Advances in neural information processing systems, 33: 1877-1901, 2020. +Chen, X., Fang, H., Lin, T.-Y., Vedantam, R., Gupta, S., Dollár, P., and Zitnick, C. L. Microsoft coco captions: Data collection and evaluation server. arXiv preprint arXiv:1504.00325, 2015. +Chen, Y., Li, L., Yu, L., Kholy, A. E., Ahmed, F., Gan, Z., Cheng, Y., and Liu, J. UNITER: universal image-text representation learning. In Vedaldi, A., Bischof, H., Brox, T., and Frahm, J. (eds.), Computer Vision - ECCV 2020 - 16th European Conference, Glasgow, UK, August 23-28, 2020, Proceedings, Part XXX, volume 12375 of Lecture Notes in Computer Science, pp. 104-120. Springer, 2020. doi: 10.1007/978-3-030-58577-8\7. URL https://doi.org/10.1007/978-3-030-58577-8_7. +Chen, Y.-C., Li, L., Yu, L., El Kholy, A., Ahmed, F., Gan, Z., Cheng, Y., and Liu, J. Uniter: Learning universal image-text representations. 2019. +Cho, J., Lei, J., Tan, H., and Bansal, M. Unifying vision-and-language tasks via text generation. In International Conference on Machine Learning, pp. 1931-1942. PMLR, 2021. +Devlin, J., Chang, M.-W., Lee, K., and Toutanova, K. Bert: + +Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805, 2018. +Dietterich, T. G. Ensemble methods in machine learning. In Kittler, J. and Roli, F. 
(eds.), Multiple Classifier Systems, First International Workshop, MCS 2000, Cagliari, Italy, June 21-23, 2000, Proceedings, volume 1857 of Lecture Notes in Computer Science, pp. 1-15. Springer, 2000. doi: 10.1007/3-540-45014-9\_.1. URL https://doi.org/10.1007/3-540-45014-9_1. +Dosovitskiy, A., Beyer, L., Kolesnikov, A., Weissenborn, D., Zhai, X., Unterthiner, T., Dehghani, M., Minderer, M., Heigold, G., Gelly, S., Uszkoreit, J., and Houlsby, N. An image is worth 16x16 words: Transformers for image recognition at scale. In 9th International Conference on Learning Representations, ICLR 2021, Virtual Event, Austria, May 3-7, 2021. OpenReview.net, 2021. URL https://openreview.net/forum?id=YicbFdNTTy. +Draxler, F., Veschgini, K., Salmhofer, M., and Hamprecht, F. Essentially no barriers in neural network energy landscape. In International conference on machine learning, pp. 1309-1318. PMLR, 2018. +Frankle, J., Dziugaite, G. K., Roy, D., and Carbin, M. Linear mode connectivity and the lottery ticket hypothesis. In International Conference on Machine Learning, pp. 3259-3269. PMLR, 2020. +Gan, Z., Chen, Y.-C., Li, L., Zhu, C., Cheng, Y., and Liu, J. Large-scale adversarial training for vision-and-language representation learning. Advances in Neural Information Processing Systems, 33:6616-6628, 2020. +Garipov, T., Izmailov, P., Podoprikhin, D., Vetrov, D. P., and Wilson, A. G. Loss surfaces, mode connectivity, and fast ensembling of dnns. Advances in neural information processing systems, 31, 2018. +Goyal, Y., Khot, T., Summers-Stay, D., Batra, D., and Parikh, D. Making the v in vqa matter: Elevating the role of image understanding in visual question answering. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 6904-6913, 2017. +Houlsby, N., Giurgiu, A., Jastrzebski, S., Morrone, B., De Laroussilhe, Q., Gesmundo, A., Attariyan, M., and Gelly, S. Parameter-efficient transfer learning for nlp. 
In International Conference on Machine Learning, pp. 2790-2799. PMLR, 2019.
Hu, E. J., Shen, Y., Wallis, P., Allen-Zhu, Z., Li, Y., Wang, S., Wang, L., and Chen, W. Lora: Low-rank adaptation of large language models. arXiv preprint arXiv:2106.09685, 2021.
Kamath, A., Singh, M., LeCun, Y., Synnaeve, G., Misra, I., and Carion, N. Mdetr-modulated detection for end-to-end multi-modal understanding. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pp. 1780-1790, 2021.
Lakshminarayanan, B., Pritzel, A., and Blundell, C. Simple and scalable predictive uncertainty estimation using deep ensembles. In Guyon, I., von Luxburg, U., Bengio, S., Wallach, H. M., Fergus, R., Vishwanathan, S. V. N., and Garnett, R. (eds.), Advances in Neural Information Processing Systems 30: Annual Conference on Neural Information Processing Systems 2017, December 4-9, 2017, Long Beach, CA, USA, pp. 6402-6413, 2017. URL https://proceedings.neurips.cc/paper/2017/hash/9ef2ed4b7fd2c810847ffaf85bce38-Abstract.html.
Lester, B., Al-Rfou, R., and Constant, N. The power of scale for parameter-efficient prompt tuning. In Moens, M., Huang, X., Specia, L., and Yih, S. W. (eds.), Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, EMNLP 2021, Virtual Event / Punta Cana, Dominican Republic, 7-11 November, 2021, pp. 3045-3059. Association for Computational Linguistics, 2021. doi: 10.18653/v1/2021.emnlp-main.243. URL https://doi.org/10.18653/v1/2021.emnlp-main.243.
Li, L. H., Yatskar, M., Yin, D., Hsieh, C., and Chang, K. Visualbert: A simple and performant baseline for vision and language. CoRR, abs/1908.03557, 2019. URL http://arxiv.org/abs/1908.03557.
Li, X. L. and Liang, P. Prefix-tuning: Optimizing continuous prompts for generation. arXiv preprint arXiv:2101.00190, 2021.
Liu, X., Zheng, Y., Du, Z., Ding, M., Qian, Y., Yang, Z., and Tang, J. GPT understands, too. CoRR, abs/2103.10385, 2021. URL https://arxiv.org/abs/2103.10385.
+Lu, J., Goswami, V., Rohrbach, M., Parikh, D., and Lee, S. 12-in-1: Multi-task vision and language representation learning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 10437-10446, 2020. +Lu, J., Clark, C., Zellers, R., Mottaghi, R., and Kembhavi, A. Unified-io: A unified model for vision, language, and multi-modal tasks. arXiv preprint arXiv:2206.08916, 2022. +Mao, J., Huang, J., Toshev, A., Camburu, O., Yuille, A. L., and Murphy, K. Generation and comprehension of unambiguous object descriptions. In Proceedings of the IEEE + +conference on computer vision and pattern recognition, pp. 11-20, 2016. +Mou, C., Wang, X., Xie, L., Zhang, J., Qi, Z., Shan, Y., and Qie, X. T2i-adapter: Learning adapters to dig out more controllable ability for text-to-image diffusion models. arXiv preprint arXiv:2302.08453, 2023. +Neyshabur, B., Sedghi, H., and Zhang, C. What is being transferred in transfer learning? Advances in neural information processing systems, 33:512-523, 2020. +Pfeiffer, J., Kamath, A., Rückle, A., Cho, K., and Gurevych, I. Adapterfusion: Non-destructive task composition for transfer learning. arXiv preprint arXiv:2005.00247, 2020. +Poth, C., Pfeiffer, J., Rückle, A., and Gurevych, I. What to pre-train on? efficient intermediate task selection. arXiv preprint arXiv:2104.08247, 2021. +Pruksachatkun, Y., Phang, J., Liu, H., Htut, P. M., Zhang, X., Pang, R. Y., Vania, C., Kann, K., and Bowman, S. R. Intermediate-task transfer learning with pretrained models for natural language understanding: When and why does it work? arXiv preprint arXiv:2005.00628, 2020. +Radford, A., Kim, J. W., Hallacy, C., Ramesh, A., Goh, G., Agarwal, S., Sastry, G., Askell, A., Mishkin, P., Clark, J., et al. Learning transferable visual models from natural language supervision. In International Conference on Machine Learning, pp. 8748-8763. PMLR, 2021. 
+Raffel, C., Shazeer, N., Roberts, A., Lee, K., Narang, S., Matena, M., Zhou, Y., Li, W., Liu, P. J., et al. Exploring the limits of transfer learning with a unified text-to-text transformer. J. Mach. Learn. Res., 21(140):1-67, 2020. +Schick, T. and Schütze, H. It's not just size that matters: Small language models are also few-shot learners. arXiv preprint arXiv:2009.07118, 2020. +Singh, A., Hu, R., Goswami, V., Couairon, G., Galuba, W., Rohrbach, M., and Kiela, D. FLAVA: A foundational language and vision alignment model. In IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR 2022, New Orleans, LA, USA, June 18-24, 2022, pp. 15617-15629. IEEE, 2022. doi: 10.1109/CVPR52688.2022.01519. URL https://doi.org/10.1109/CVPR52688.2022.01519. +Vu, T., Wang, T., Munkhdalai, T., Sordoni, A., Trischler, A., Mattarella-Micke, A., Maji, S., and Iyyer, M. Exploring and predicting transferability across nlp tasks. arXiv preprint arXiv:2005.00770, 2020. +Vu, T., Lester, B., Constant, N., Al-Rfou, R., and Cer, D. Spot: Better frozen model adaptation through soft prompt transfer. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pp. 5039-5059, 2022. +Wang, A., Singh, A., Michael, J., Hill, F., Levy, O., and Bowman, S. R. Glue: A multi-task benchmark and analysis platform for natural language understanding. arXiv preprint arXiv:1804.07461, 2018. +Wang, P., Yang, A., Men, R., Lin, J., Bai, S., Li, Z., Ma, J., Zhou, C., Zhou, J., and Yang, H. Ofa: Unifying architectures, tasks, and modalities through a simple sequence-to-sequence learning framework. In International Conference on Machine Learning, pp. 23318-23340. PMLR, 2022. +Wortsman, M., Ilharco, G., Gadre, S. Y., Roelofs, R., Gontijo-Lopes, R., Morcos, A. S., Namkoong, H., Farhadi, A., Carmon, Y., Kornblith, S., et al. Model soups: averaging weights of multiple fine-tuned models improves accuracy without increasing inference time.
In International Conference on Machine Learning, pp. 23965-23998. PMLR, 2022. +Xie, N., Lai, F., Doran, D., and Kadav, A. Visual entailment: A novel task for fine-grained image understanding. arXiv preprint arXiv:1901.06706, 2019. +Yang, H., Lin, J., Yang, A., Wang, P., Zhou, C., and Yang, H. Prompt tuning for generative multimodal pretrained models. arXiv preprint arXiv:2208.02532, 2022. +Yang, Z., Gan, Z., Wang, J., Hu, X., Ahmed, F., Liu, Z., Lu, Y., and Wang, L. Crossing the format boundary of text and boxes: Towards unified vision-language modeling. arXiv preprint arXiv:2111.12085, 2021. +Yu, L., Poirson, P., Yang, S., Berg, A. C., and Berg, T. L. Modeling context in referring expressions. In European Conference on Computer Vision, pp. 69-85. Springer, 2016. +Zaken, E. B., Ravfogel, S., and Goldberg, Y. Bitfit: Simple parameter-efficient fine-tuning for transformer-based masked language-models. arXiv preprint arXiv:2106.10199, 2021. +Zamir, A. R., Sax, A., Shen, W. B., Guibas, L. J., Malik, J., and Savarese, S. Taskonomy: Disentangling task transfer learning. In 2018 IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2018, Salt Lake City, UT, USA, June 18-22, 2018, pp. 3712-3722. Computer Vision Foundation / IEEE Computer Society, 2018. doi: 10.1109/CVPR.2018.00391. URL http://openaccess.thecvf.com/content_cvpr_2018/html/Zamir_Taskonomy_Disentangling_Task_CVPR_2018_paper.html. + +Zhou, K., Yang, J., Loy, C. C., and Liu, Z. Learning to prompt for vision-language models. International Journal of Computer Vision, 130(9):2337-2348, 2022a. +Zhou, W., Xu, C., and McAuley, J. Efficiently tuned parameters are task embeddings. arXiv preprint arXiv:2210.11705, 2022b. +Zhu, X., Zhu, J., Li, H., Wu, X., Li, H., Wang, X., and Dai, J. Uni-perceiver: Pre-training unified architecture for generic perception for zero-shot and few-shot tasks. In IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR 2022, New Orleans, LA, USA, June 18-24, 2022, pp. 
16783-16794. IEEE, 2022. doi: 10.1109/CVPR52688.2022.01630. URL https://doi.org/10.1109/CVPR52688.2022.01630. + +# A. Experimental Setups + +# A.1. Experimental Setting for VL Tasks + +Referring Expression Comprehension We report the standard metric ACC@0.5 on the validation and test sets. Epochs are set to 100, dropout is set to 0.1, warmup rate is set to 0.06, and label smoothing rate is set to 0.1. For prompt tuning, we follow the experimental setting used in Yang et al. (2022), where the batch size is set to 128, the learning rate is set to 0.03, and the prompt length is set to 100. For LoRA, the batch size is set to 256 for the base model and 1024 for the large model, the learning rate is set to 1e-4, and the rank is set to 16. For adapters, the batch size is set to 1024, the learning rate is set to 1e-4, and the bottleneck dimension is set to 128. + +Visual Entailment We report accuracy on both the dev and test sets. Dropout is set to 0.1, warmup rate is set to 0.06, and label smoothing rate is set to 0.1. For prompt tuning, we follow the experimental settings in Yang et al. (2022), where the batch size is set to 128, the learning rate is set to 0.03, epochs are set to 100, and the prompt length is set to 64. For LoRA, the batch size is set to 256 for the base model and 512 for the large model, epochs are set to 100, the learning rate is set to 1e-4, and the rank is set to 16. For adapters, the batch size is set to 512, epochs are set to 10, the learning rate is set to 1e-4, and the bottleneck dimension is set to 128. + +Image Captioning We report BLEU@4, METEOR, CIDEr, and SPICE scores on the Karpathy test split. Dropout is set to 0.1, warmup rate is set to 0.06, and label smoothing rate is set to 0.1. For prompt tuning, we follow the experimental setting used in Yang et al. (2022), where the batch size is set to 256, the learning rate is set to 0.03, epochs are set to 100, and the prompt length is set to 64.
For LoRA, the batch size is set to 128, epochs are set to 100, the learning rate is set to 1e-4, and the rank is set to 16. For adapters, the batch size is set to 512 for the base model and 128 for the large model, epochs are set to 10, the learning rate is set to 1e-4, and the bottleneck dimension is set to 128. + +Visual Question Answering We conduct experiments on VQA 2.0 and report the score on the test-dev and test-standard sets. Dropout is set to 0.1, warmup rate is set to 0.04, Exponential Moving Average (EMA) is applied with a decay rate of 0.9999, and label smoothing rate is set to 0.1. For prompt tuning, we follow the experimental setting used in Yang et al. (2022), where the batch size is set to 256, the learning rate is set to 0.03, epochs are set to 100, and the prompt length is set to 10. For LoRA, the batch size is set to 128, epochs are set to 100, the learning rate is set to 1e-4, and the rank is set to 16. For adapters, the batch size is set to 256, epochs are set to 30 for the base model and 15 for the large model, and the bottleneck dimension is set to 128. After finetuning, we use beam search to generate our answers. + +After we retrieve auxiliary experts, we apply the same optimization steps as for the initial experts to train the interpolation of experts. All of the experiments are conducted on A100 40G and V100 32G GPUs. + +# A.2. Experimental Setting for Vision Tasks + +We select 8 common vision tasks from COOP (Zhou et al., 2022a), following their splits. For few-shot image classification, we train each adapter for 200 epochs, with a batch size of 64 and a learning rate of 1e-4. The ratio for label smoothing is 0.1. We follow the data augmentation used in Wang et al. (2022), where the same random resize cropping, random flipping, RandAug, and random erasing transformations are conducted. We apply Mixup and CutMix to each batch with an overall probability of 0.5, with alpha set to 0.8 and 1.0, respectively. + +# A.3.
Experimental Setting for Language Tasks + +We select 6 language understanding tasks from the GLUE benchmark (Wang et al., 2018), following Wang et al. (2022), including both single-sentence classification tasks and sentence-pair classification tasks. We reuse the instructions of each task used in Wang et al. (2022) in our experiments. For the hyper-parameters of adapters, we tune the training epochs among \{5, 7, 10\}, the learning rate among \{0.03, 1e-4, 5e-5\}, and the batch size among \{32, 64, 128\}. We report the best performance on the development set for each task, following Wang et al. (2022). + +# B. Analysis of the Relationship Between FIM and Landscape + +Here we give a detailed analysis of the relationship between the FIM and local optima. We begin by restating and adding to the notation used in Sec. 2.1. + +# B.1. Notation and preliminaries + +Let $\theta_0$ be the pre-trained weights and $\varphi_0$ be the initial extra added parameters. For a model with pre-trained weights $\theta_0$, extra added parameters $\varphi$, and input vector $x$, we let $P_{\varphi}(y|x)$ denote the model's output distribution. The loss function of task $\tau_i$ ($i = 1,2$) can be formulated as $L_{\varphi}^{i} = \underset{(x,y)\sim P_i}{\mathbb{E}}[-\log P_{\varphi}(y|x)]$, where $P_i$ is the corresponding empirical distribution of task $\tau_i$. The FIM $F_{\varphi}^i$ of each task $\tau_i$ can be seen as the negative of the Hessian matrix of the loss, which is formulated as $F_{\varphi}^i = -\nabla_{\varphi}^2 L_{\varphi}^i = \mathbb{E}_{x,y\sim P_i}[\nabla_{\varphi}\log P_{\varphi}(y|x)\nabla_{\varphi}\log P_{\varphi}(y|x)^T]$. Let $\varphi_1$ and $\varphi_2$ be the corresponding local minima for $\tau_1$ and $\tau_2$, both optimized from the same initial parameters $\varphi_0$. + +# B.2. Restatement of Assumptions + +Before we start the proof of Theorem 2.1, we formally introduce the assumptions mentioned in Section 2.2. + +Assumption B.1.
The gradients of the two loss functions are similar at the initial point $\varphi_0$, i.e., there is a constant $C_1$ such that $\| \nabla_{\varphi}L_{1}(\varphi_{0}) - \nabla_{\varphi}L_{2}(\varphi_{0})\|_{2}\leq C_{1}$. + +Assumption B.2. The Fisher information matrices of the two tasks are similar along the line segment from $\varphi_0$ to $\varphi_{1}$, i.e., there is a constant $C_2$ such that $||F_{\varphi}^{1} - F_{\varphi}^{2}||_{2}\leq C_{2}$ for every $\varphi$ along the line. + +Assumption B.3. The Fisher information matrix of task $\tau_{2}$ along the interpolation line is similar to the one at $\varphi_{1}$, i.e., there is a constant $0 < C_3 < 1$ such that every $\varphi$ along the interpolation line satisfies + +$$ +||F_{\varphi}^{2} - F_{\varphi_{1}}^{2}||_{1} \leq \frac{1 - C_{3}}{n_{0}\,||[F_{\varphi_{1}}^{2}]^{-1}||_{2}} = \frac{1 - C_{3}}{n_{0}}\sqrt{\lambda_{\min}([F_{\varphi_{1}}^{2}]^{T} F_{\varphi_{1}}^{2})}, +$$ + +where $n_0$ is the dimension of the extra added parameters and $\lambda_{\min}$ denotes the smallest eigenvalue. + +Assumptions B.1 and B.2 are mild: B.1 only requires that the norm of the gradient difference between the two tasks at the initial parameters $\varphi_0$ is bounded, and B.2 is the formal restatement of our requirement that the two tasks be similar. Assumption B.3 requires that the FIM varies little along the interpolation line so that it does not degenerate; if $C_3 \leq 0$ were allowed, the FIM along the interpolation line could become singular. It corresponds to the non-singularity requirement in Theorem 2.1. We add this condition because a rapid change of the FIM along the interpolation line would make the FIMs difficult to compare. + +# B.3. Proof of Theorem 2.1 + +We now begin the proof of Theorem 2.1: + +Proof. Since $\varphi_{1}$ and $\varphi_{2}$ are local minima of the two loss functions, the gradients of the losses vanish there, i.e.
+ +$$ +\nabla_{\varphi} L_{i}(\varphi_{i}) = 0. +$$ + +Let $\varphi_t = \varphi_0 + t(\varphi_1 - \varphi_0)$ and $\hat{\varphi}_t = \varphi_1 + t(\varphi_2 - \varphi_1)$ for $0 \leq t \leq 1$, and define $c = ||[F_{\varphi_1}^2]^{-1}||_2$. + +By the Newton-Leibniz formula, we have + +$$ +\begin{array}{l} 0 = \nabla_{\varphi} L_{1}(\varphi_{1}) - \nabla_{\varphi} L_{2}(\varphi_{2}) \\ = \nabla_{\varphi} L_{1}(\varphi_{0}) - \int_{0}^{1} F_{\varphi_{t}}^{1}(\varphi_{1} - \varphi_{0}) dt - \nabla_{\varphi} L_{2}(\varphi_{0}) + \int_{0}^{1} F_{\varphi_{t}}^{2}(\varphi_{1} - \varphi_{0}) dt + \int_{0}^{1} F_{\hat{\varphi}_{t}}^{2}(\varphi_{2} - \varphi_{1}) dt \\ = \nabla_{\varphi} L_{1}(\varphi_{0}) - \nabla_{\varphi} L_{2}(\varphi_{0}) - \int_{0}^{1} [F_{\varphi_{t}}^{1} - F_{\varphi_{t}}^{2}](\varphi_{1} - \varphi_{0}) dt + \int_{0}^{1} F_{\hat{\varphi}_{t}}^{2}(\varphi_{2} - \varphi_{1}) dt. \\ \end{array} +$$ + +Moving the last term to the left-hand side, we obtain + +$$ +\begin{array}{l} \left[ \int_{0}^{1} F_{\hat{\varphi}_{t}}^{2} dt \right](\varphi_{2} - \varphi_{1}) \\ = \nabla_{\varphi} L_{2}(\varphi_{0}) - \nabla_{\varphi} L_{1}(\varphi_{0}) + \left[ \int_{0}^{1} [F_{\varphi_{t}}^{1} - F_{\varphi_{t}}^{2}] dt \right](\varphi_{1} - \varphi_{0}). \\ \end{array} +$$ + +The formula on the left is the integral of the FIM along the interpolation line, multiplied by the difference between the two local minima. Thus we check the difference between this integral and the FIM at $\varphi_{1}$.
+ +$$ +\left|\left| \int_{0}^{1} F_{\hat{\varphi}_{t}}^{2} dt - F_{\varphi_{1}}^{2} \right|\right|_{1} = \left|\left| \int_{0}^{1} [F_{\hat{\varphi}_{t}}^{2} - F_{\varphi_{1}}^{2}] dt \right|\right|_{1} \leq \int_{0}^{1} ||F_{\hat{\varphi}_{t}}^{2} - F_{\varphi_{1}}^{2}||_{1} dt \leq \int_{0}^{1} \frac{1 - C_{3}}{n_{0}||[F_{\varphi_{1}}^{2}]^{-1}||_{2}} dt = \frac{1 - C_{3}}{n_{0} c}. +$$ + +The last inequality follows from Assumption B.3. Let $H = \int_0^1 F_{\hat{\varphi}_t}^2 dt$ and $H_0 = F_{\varphi_1}^2$. Then the statement above can be reformulated as + +$$ +||H - H_{0}||_{1} \leq \frac{1 - C_{3}}{n_{0}||[F_{\varphi_{1}}^{2}]^{-1}||_{2}} = \frac{1 - C_{3}}{n_{0} c}. +$$ + +Since $H_0 = F_{\varphi_1}^2$ is symmetric, it can be diagonalized orthogonally. Thus we can write $H_0 = P\Lambda P^{-1}$, where $\Lambda$ is diagonal and $P$ is orthogonal. We have + +$$ +\left|\left| P^{-1} H P - \Lambda \right|\right|_{1} = \left|\left| P^{-1}(H - H_{0}) P \right|\right|_{1} \leq \left|\left| P^{-1} \right|\right|_{1} \cdot ||P||_{1} \cdot ||H - H_{0}||_{1}. +$$ + +Notice that for any square matrix $M$ of size $n_0$, we have $||M||_1 \leq \sqrt{n_0}\,||M||_2$. We immediately obtain + +$$ +\begin{array}{l} \left|\left| P^{-1} \right|\right|_{1} \cdot ||P||_{1} \cdot ||H - H_{0}||_{1} \leq \sqrt{n_{0}} \left|\left| P^{-1} \right|\right|_{2} \cdot \sqrt{n_{0}}\,||P||_{2} \cdot ||H - H_{0}||_{1} \\ = n_{0} \left\| H - H_{0} \right\|_{1} \leq \frac{1 - C_{3}}{\left\| H_{0}^{-1} \right\|_{2}} = \frac{1 - C_{3}}{\left\| \Lambda^{-1} \right\|_{2}}. \\ \end{array} +$$ + +The equality follows from the orthogonality of $P$ and $P^{-1}$, which gives $||P||_2 = ||P^{-1}||_2 = 1$.
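The two matrix-norm facts used above, $||M||_1 \leq \sqrt{n_0}\,||M||_2$ and the resulting conjugation bound $||P^{-1}HP - \Lambda||_1 \leq n_0\,||H - H_0||_1$, can be spot-checked numerically. Below is a minimal Python/NumPy sketch of such a check; it is our own illustration (random matrices, tolerances), not part of the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
n0 = 6

# ||M||_1 (max absolute column sum) <= sqrt(n0) * ||M||_2 (spectral norm)
# for arbitrary square matrices M.
for _ in range(100):
    M = rng.normal(size=(n0, n0))
    assert np.linalg.norm(M, 1) <= np.sqrt(n0) * np.linalg.norm(M, 2) + 1e-9

# Conjugation step: for symmetric H0 = P Lambda P^T with P orthogonal,
# ||P^{-1} H P - Lambda||_1 <= n0 * ||H - H0||_1 for any H.
S = rng.normal(size=(n0, n0))
H0 = S + S.T                                  # symmetric stand-in for F^2_{phi_1}
H = H0 + 0.01 * rng.normal(size=(n0, n0))     # perturbation of H0
eigvals, P = np.linalg.eigh(H0)               # H0 = P @ diag(eigvals) @ P.T
lhs = np.linalg.norm(P.T @ H @ P - np.diag(eigvals), 1)
rhs = n0 * np.linalg.norm(H - H0, 1)
assert lhs <= rhs + 1e-9
print("both inequalities hold on this sample")
```

Both assertions hold for every sample because the inequalities are deterministic consequences of the norm definitions, not probabilistic statements.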
According to Gershgorin's circle theorem, the eigenvalues of $P^{-1}HP$ are covered by $n_0$ circles centered at the diagonal elements of $\Lambda$ with radius at most $\left\| P^{-1}HP - \Lambda \right\|_1$. Since the diagonal elements of $\Lambda$ have norms at least $\frac{1}{||\Lambda^{-1}||_2}$ and the radius is less than $\frac{1 - C_3}{||\Lambda^{-1}||_2}$, the eigenvalues of $P^{-1}HP$ have norms at least $\frac{1}{||\Lambda^{-1}||_2} - \frac{1 - C_3}{||\Lambda^{-1}||_2} = \frac{C_3}{||\Lambda^{-1}||_2}$. Therefore we have + +$$ +\left\| H^{-1} \right\|_{2} = \left\| \left(P^{-1} H P\right)^{-1} \right\|_{2} \leq \frac{\left\| \Lambda^{-1} \right\|_{2}}{C_{3}} = \frac{\left\| H_{0}^{-1} \right\|_{2}}{C_{3}} = \frac{c}{C_{3}}. +$$ + +Since we already have + +$$ +H(\varphi_{2} - \varphi_{1}) = \nabla_{\varphi} L_{2}(\varphi_{0}) - \nabla_{\varphi} L_{1}(\varphi_{0}) + \left[ \int_{0}^{1} [F_{\varphi_{t}}^{1} - F_{\varphi_{t}}^{2}] dt \right](\varphi_{1} - \varphi_{0}), +$$ + +left-multiplying both sides by $H^{-1}$ yields + +$$ +\varphi_{2} - \varphi_{1} = H^{-1} \left[ \nabla_{\varphi} L_{2}\left(\varphi_{0}\right) - \nabla_{\varphi} L_{1}\left(\varphi_{0}\right) + \left[ \int_{0}^{1} \left[ F_{\varphi_{t}}^{1} - F_{\varphi_{t}}^{2} \right] dt \right] \left(\varphi_{1} - \varphi_{0}\right) \right]. +$$ + +Thus the distance between the two local minima can be further bounded as + +$$ +\begin{array}{l} \left.
\right. \left|\left| \varphi_{2} - \varphi_{1} \right|\right|_{2} = \left|\left| H^{-1} \left[ \nabla_{\varphi} L_{2}\left(\varphi_{0}\right) - \nabla_{\varphi} L_{1}\left(\varphi_{0}\right) + \left[ \int_{0}^{1} \left[ F_{\varphi_{t}}^{1} - F_{\varphi_{t}}^{2} \right] dt \right]\left(\varphi_{1} - \varphi_{0}\right)\right]\right|\right|_{2} \\ \leq \frac{||H_{0}^{-1}||_{2}}{C_{3}} \left[ ||\nabla_{\varphi} L_{1}(\varphi_{0}) - \nabla_{\varphi} L_{2}(\varphi_{0})||_{2} + \left|\left| \left[ \int_{0}^{1} [F_{\varphi_{t}}^{1} - F_{\varphi_{t}}^{2}] dt \right](\varphi_{1} - \varphi_{0}) \right|\right|_{2} \right] \\ \leq \frac{\left|\left| H_{0}^{-1} \right|\right|_{2}}{C_{3}} \left[ \left|\left| \nabla_{\varphi} L_{1}\left(\varphi_{0}\right) - \nabla_{\varphi} L_{2}\left(\varphi_{0}\right) \right|\right|_{2} + \left[ \int_{0}^{1} \left|\left| \delta F_{\varphi_{t}} \right|\right|_{2} dt \right] \cdot \left|\left| \varphi_{1} - \varphi_{0} \right|\right|_{2} \right] \leq \frac{c}{C_{3}} \left[ C_{1} + C_{2} R_{0} \right], \tag{6} \\ \end{array} +$$ + +where $\delta F$ denotes $F^1 - F^2$ and $R_0 = ||\varphi_1 - \varphi_0||_2$. The last inequality follows from Assumptions B.1 and B.2; the right-hand side is a constant, which finishes the proof. + +According to (6), the distance between the two local minima can be bounded by the difference of the gradients $\nabla_{\varphi}L(\varphi_0)$ of the two loss functions at $\varphi_0$, together with the difference between the two Fisher information matrices along the interpolation line.
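As a sanity check on bound (6), consider two quadratic toy losses $L_i(\varphi) = \frac{1}{2}(\varphi - m_i)^T A_i (\varphi - m_i)$ with constant Hessians $A_i$ and minima at $m_i$. Then the FIMs are constant along every line, Assumption B.3 holds with $C_3 = 1$, and the bound can be verified numerically. The Python/NumPy sketch below is our own illustration under these toy assumptions, not the paper's experiments:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5

def random_spd():
    # Random symmetric positive-definite matrix (toy stand-in for a Hessian).
    M = rng.normal(size=(n, n))
    return M @ M.T + np.eye(n)

# Two quadratic toy losses L_i(phi) = 0.5 (phi - m_i)^T A_i (phi - m_i):
# the local minima are phi_i = m_i, and the constant Hessians mean the
# FIM does not vary along any line, so Assumption B.3 holds with C_3 = 1.
A1, A2 = random_spd(), random_spd()
m1, m2 = rng.normal(size=n), rng.normal(size=n)
phi0 = rng.normal(size=n)  # shared initialization

grad1 = A1 @ (phi0 - m1)   # gradient of L_1 at phi0
grad2 = A2 @ (phi0 - m2)   # gradient of L_2 at phi0

C1 = np.linalg.norm(grad1 - grad2)         # Assumption B.1
C2 = np.linalg.norm(A1 - A2, 2)            # Assumption B.2: ||F^1 - F^2||_2
R0 = np.linalg.norm(m1 - phi0)             # ||phi_1 - phi_0||_2
c = np.linalg.norm(np.linalg.inv(A2), 2)   # ||[F^2_{phi_1}]^{-1}||_2
C3 = 1.0                                   # constant FIM along the line

lhs = np.linalg.norm(m2 - m1)              # distance between the two minima
rhs = (c / C3) * (C1 + C2 * R0)            # right-hand side of bound (6)
assert lhs <= rhs
print(f"distance {lhs:.3f} <= bound {rhs:.3f}")
```

In this quadratic setting the assertion holds for every random draw, since the identity $A_2(m_2 - m_1) = \nabla L_1(\varphi_0) - \nabla L_2(\varphi_0) + (A_1 - A_2)(m_1 - \varphi_0)$ can be checked by direct expansion.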